The peer-reviewed paper, published August 20 in Communications Psychology, reports eight studies involving hundreds of participants. Researchers analyzed lab tasks, real Reddit threads, and responses from ChatGPT to map how advice is framed across contexts. Across settings, suggestions to add activities far outnumbered recommendations to cut harmful behaviors. Participants rated additive advice as easier to follow and more beneficial, and they found it easier to advise close friends to drop harmful habits than to give themselves the same advice. The team found that ChatGPT mirrored the same additive pattern, and they recommended design changes to prompt subtractive options in AI-delivered support.