First, a few words about the title and a bit of self-disclosure. I have not used any AI products. I have never tried ChatGPT or any similar services. Second, my reasoning to date has been that I need to better understand these products and services, and think carefully about their implications, before I use them. Third, I find that the most effective way for me to think about an issue is to write. Writing helps me clarify what I am thinking and, I hope, helps me better evaluate my ideas. Therefore, I can assure you that everything I have written to date, including this piece, I have written myself (for better or worse).
AI and therapy notes: Many of my colleagues (people I respect) are exploring, advocating for, and using AI programs to write their therapy notes. Their advocacy centers primarily on how AI saves them time and allows them to focus better on their clinical work. Many physicians and medical groups are using AI for similar reasons. The number and types of note-writing programs appear quite varied: from those that record the entire session to those that will write a note based on de-identified summaries submitted by a therapist. So why have I not adopted one of these programs?

First, they hallucinate. Approximately 18 months ago I attended a workshop on AI in which the presenter reviewed six or so AI note-writing programs and stated that they all hallucinated; specifically, the AI made up dialogue and content (1). When colleagues who have adopted note-writing programs discuss this, they acknowledge it and offer the solution of proofreading one's notes to catch such errors. While theoretically this sounds like a great idea, my cynicism leads me to doubt that users of AI note-writing programs would consistently do this. Moreover, discussions of newer versions of large language models have noted that they are more prone to hallucinating (2). The New York Times article detailing this issue states: "The newest and most powerful technologies — so-called reasoning systems from companies like OpenAI, Google and the Chinese start-up DeepSeek — are generating more errors, not fewer." This is obviously a significant concern, and when the content generated by AI involves confidential healthcare information the dangers are obvious.

Second, data scraping. While many of the note-writing programs assert that they do not use data from therapy sessions and that they destroy recordings of sessions, I remain concerned. Many of these companies are start-ups, and their capitalization and longevity are by no means guaranteed. If they are bought, will new owners respect these agreements? Are these agreements as airtight as users of these programs argue? What happens if these companies go bankrupt? Do we end up with a 23andMe situation, where genetic data may suddenly be for sale?

Third, while there are clearly some therapists, and other healthcare providers, who write notes in a perfunctory manner and treat note writing as a chore to be completed quickly, there are others who use note writing to reflect on their work and think about its direction. Will this be lost with AI note writing?

Finally, from a training perspective, professionals who supervise trainees will often review their trainees' notes to gauge their thinking and to provide feedback. AI-written notes would eliminate this learning opportunity. While some training programs do not allow AI note writing, I wonder whether its use can really be limited, particularly if the supervising staff are using these tools themselves.
AI Chatbots and Companions: The danger of AI chatbots, particularly for adolescents, is starting to receive more and more attention. A recent article in the Chicago Tribune (3) focuses on this issue. While sharing anecdotes about teens who report relying on ChatGPT for all sorts of advice, the article also cites research showing that teens increasingly interact with AI as if it were a real companion. The article notes that a study by Common Sense Media (4) details, from survey data, how a large and growing percentage of teens use AI companions: "About one in three teens: Have used AI companions for social interaction and relationships, including role-playing, romantic interactions, emotional support, friendship, or conversation practice" (4). The authors of the Common Sense paper are so concerned about the negative implications of AI companions for teens that they recommend against their use by anyone under the age of 18. Concerns about AI companions include teens failing to learn how to interact with one another and failing to develop real social relationships and social skills. Moreover, as one teen in the article notes, AI companions never lose interest in you, always agree with you, and always think you are interesting. Obviously, this is not a good model for relationships. Relationships are often complex and at times daunting: we have to understand others' perspectives, be open to feedback and confrontation, and accept that we are not always right and certainly not always interesting. A recent article in the Atlantic detailed data on the decline of romantic relationships among teens (5), while previous work has noted the increased social isolation of young adults, particularly young adult males (6). It is likely that AI companions are being substituted for real relationships. My fear is that this is akin to eating donuts and candy instead of healthy food: they fill you up, but they are bad for you. These arguments are articulated and expounded on in a recent article in Greater Good Magazine, an online journal (7), which discusses the dangers of emotional dependency on AI companions. Proponents of these companions argue that they can offer people real companionship when needed and can be a resource. While I can envision scenarios where this might be true, these potential benefits in no way mitigate the risks.
AI, Thinking and Writing: In a recent episode of The Grey Area podcast hosted by Sean Illing, "If AI can do your classwork, why go to college?", Illing and his guest, James Walsh (8), discuss the dangers of AI in the classroom. Walsh is highly critical of AI and concerned that students are losing their ability to write and think creatively by relying on it. While this grossly simplifies his arguments and the discussion, Walsh voices significant concerns about AI's impact. My favorite anecdote from the podcast was Walsh's reflection on a conversation with an older adolescent who asserted that they only used AI to help generate ideas but did the writing themselves. The irony of the assertion appeared lost on this young person. Clearly, there are many arguments about how to address AI in the classroom. While a few advocate having students no longer write papers and instead handwrite tests and short essays in class, most have argued that AI is here to stay and that professors need to teach students how to use AI and think critically about it. While this argument has some potential, the temptation to rely on AI, for students and professors alike, likely remains high. One of my clients (an excellent college student) shared a story of struggling to answer an exam question, only to later learn that the professor had used AI to generate the question and acknowledged that answering it took the professor longer than the time allotted for the entire exam.
Oops, my bias is showing: Clearly I am not a fan of AI. While I will admit to reading the summaries AI offers when I Google various things, I try to then move to the source material. My concerns about AI are not limited to the items discussed here (the amount of natural resources consumed by AI programs is a major concern). Moreover, I have likely given the potential benefits of AI short shrift. It clearly appears that AI is here to stay. In fact, there are currently efforts underway to develop AI therapists, and the use of ChatGPT and other AI programs for guidance is already occurring. In future blogs I will discuss these issues and try to address the concerns about, and potential benefits of, AI when it comes to psychotherapy and emotional well-being.
References
1. Presentation by Maelissa McCaffrey, which examined AI note takers and raised concerns that they all hallucinate: https://personcenteredtech.com/using-artificial-intelligence-ai-as-a-mental-health-clinician/
   - McCaffrey also has a number of YouTube presentations and a website in which she discusses this issue and a variety of other topics.
   - Others have raised concerns as well, such as a brief blog by the therapist Jordan Nodelman: https://jnodelmanlcsw.com/the-risks-of-ai-note-taking-software-in-therapy-a-psychotherapists-perspective/
   - There are a myriad of other articles, blogs, and videos on the topic. A well-balanced article you might consider: https://clearhealthcosts.com/blog/2025/03/therapy-notes-by-ai-create-false-narratives-therapists-say/
2. Recent article in the New York Times discussing the fact that new AI models are more prone to hallucination: https://www.nytimes.com/2025/05/05/technology/ai-hallucinations-chatgpt-google.html
3. "Teens Turning to AI for Friendship" by Jocelyn Gecker, Chicago Tribune, section 2, Friday, July 25, 2025.
4. Common Sense Media's research on teens' use of AI social companions, which highlights the dangers of this use: https://www.commonsensemedia.org/research/talk-trust-and-trade-offs-how-and-why-teens-use-ai-companions
5. The article by Faith Hill, from the March 2025 edition of the Atlantic, titled "Teens Are Forgoing a Classic Rite of Passage," details this decline.
6. Derek Thompson's article in the Atlantic, "The Antisocial Century": https://www.theatlantic.com/magazine/archive/2025/02/american-loneliness-personality-politics/681091/
   - Also see the discussion of this article between Thompson and Lora Kelly: https://www.theatlantic.com/newsletters/archive/2025/01/americas-crisis-of-aloneness/681251/
7. An article in Greater Good Magazine, an online magazine published by the Greater Good Science Center (GGSC) at the University of California, Berkeley: https://greatergood.berkeley.edu/article/item/can_you_get_emotionally_dependent_on_chatgpt
8. Sean Illing's podcast, The Grey Area: https://www.youtube.com/watch?v=0IEc6yoVf-s