I’m talking to who?

AI, and in particular Chatbots, may be a new culprit in the decline in mental health. Conversely, it is possible that they offer another tool to help those of us who are struggling emotionally. The data on the number of teens and adults who spend significant amounts of time on their phones and on social media are substantial, and the numbers have been increasing over the past few years (1,2,3). Moreover, the amount of time that people, particularly teens, spend interacting with AI is also growing substantially. A recent survey, from December 2025, by the Pew Research Center reported that 64% of teens say they use Chatbots, including about three-in-ten who report daily use (4). Additional data from this survey reinforce the findings on teens’ online activity, noting that 97% of US teens report they use the internet daily, including four-in-ten who say they are almost constantly online (4). Other data also support a high level of Chatbot use, and detail how people turn to Chatbots for mental health concerns and emotional support (5).

While the systematic study of the psychological impact of Chatbots appears to be just beginning, there is suggestive evidence, as well as strong anecdotal reporting, that they can have a troubling impact on mental health, particularly for those who are already struggling with psychological issues or illnesses. A recent study in Science (6) details how sycophancy, the tendency to validate and affirm what is said regardless of its validity or morality, is a feature of AI (specifically, large language models and their Chatbots). This study concludes that sycophancy has multiple detrimental effects: a reduction in the willingness to take responsibility for one’s actions, a decline in the willingness or openness to repairing interpersonal conflicts, and a reinforcement of one’s belief that one is in the right, regardless of the course of action proposed and its potential consequences. Moreover, this study showed that users preferred the sycophantic responses over human responses that were more challenging and critical. While there is clearly a need for more studies, other preliminary research supports the idea that sycophantic Chatbots have a potentially negative impact on mental health (7). Specifically, one study (7a) found that Chatbots were prone to stigmatizing certain conditions and, more significantly, responded in ways that might increase the risk of harmful behaviors, while another study (7b) suggested that Chatbots respond in ways that are unethical. These are only two of the many studies that have raised concerns that Chatbots can have deleterious effects on mental health (7c).

To be fair, we need to recognize that there are arguments and data supporting the claims that Chatbot use has value, can help ameliorate emotional and psychological problems (5), and that therapy-like services provided by Chatbots have positive effects and benefits. Specifically, two reviews of research on mental-health-focused Chatbots concluded that there is evidence to support their value in alleviating symptoms and improving mental health (5). More recent research efforts have found similar results, including a 2025 study which showed that a therapy-focused Chatbot, Therabot, reduced symptoms of both major depression and generalized anxiety (5d). What is particularly interesting about this study is that it compared the benefits of the Chatbot intervention to being placed on a waiting list for services: one group received four weeks of Chatbot therapy while the comparison group did not receive any services. (Note: comparing a group receiving an intervention to a group placed on a waiting list is a fairly standard approach to studying treatment effectiveness.)

It is clear that additional research is needed to determine, or at least offer guidance on, when Chatbots can be helpful for alleviating emotional concerns and supporting, if not facilitating, mental health. Moreover, this research would need to clarify how Chatbots should be developed and utilized, i.e., what types of guardrails and other supports need to be built into them to increase their efficacy and reduce the risks associated with their use. It would also be appropriate to compare the effectiveness of Chatbots to traditional therapies provided by therapists in the community.

However, there are other concerns regarding the use of Chatbots that need to be considered. The sycophancy that is characteristic of Chatbots does not mirror or reflect normal human interactions. Interacting with others is not typically characterized by constant agreement and validation; in fact, we often challenge and question each other. While I have resisted the urge to get AI feedback on this blog, when I shared an earlier draft with a trusted colleague they offered a number of critiques and raised questions about some of the points I made, and even about my writing. While this was not enjoyable, I think it resulted in a better blog. For us to grow and develop we need to be challenged and questioned as well as validated and supported. This is obviously true when it comes to therapy. Good therapy does not involve unending and unquestioning validation. In fact, once a strong working alliance has been built, therapists often need to ask people to question their thinking, revisit their assumptions, and reconsider their choices. Everything I have read about Chatbots and large language models suggests that they are not equipped to do this. While I understand the appeal of turning to a Chatbot versus confiding in a friend or even seeking therapy (cost, ease of access, fear of rejection, concerns about being criticized or challenged, stigma, not wanting to burden others), I worry that Chatbots will prove a poor substitute for human interaction.

Other writers and researchers have identified additional risks that may arise from a reliance on Chatbots for advice, guidance, and help with creative tasks. A clever study assessing the use of AI in writing showed that the groups relying more on AI to help with their writing showed less complex neural activity and less variation in their writing (as well as less creativity). In addition, those who did not rely on AI for writing appeared more engaged with what they wrote and remembered it more consistently (8). Other concerns, which are beyond the scope of this blog to discuss, include the impact of Chatbots on social engagement and the risk that they may perpetuate the loss of social connections. More complex arguments about the impact of interacting with Chatbots and spending hours online also need to be considered. Ezra Klein (9), among others, has discussed the idea, originally offered by Marshall McLuhan, that the medium (the way in which we receive information) is the message (has a greater impact on us than the actual message).

It is clear that the jury is still out regarding the impact of Chatbots. In the interim, while more studies are conducted and more thought is given to the use of Chatbots, it is essential that we remember the potential risks as well as the possible benefits offered by AI and Chatbots.

References:

  1. Growth of screen time. The American Academy of Pediatrics summary of survey data shows high amounts of screen time for children and teens, and suggests that it is also high for young adults.
    •  https://www.aap.org/en/patientcare/media-and-children/center-of-excellence-on-social-media-and-youthmental-health/qa-portal/qa-portal-library/qa-portal-library-questions/average-amounts-of-screen-time/srsltid=AfmBOopZAwcPlcy2OtqIadvy78UEdSsjSZHikGBnvPio863GiK5VzXhw.
    • In addition, other sources of data strongly concur with this report. See, for example, a CDC report showing increased screen time among teens between 2021 and 2023: https://stacks.cdc.gov/view/cdc/168509.
  2. Social media use/time per day. An American Psychological Association report from 2024 indicates that teens average 4.8 hours of social media use daily:
    • https://www.apa.org/monitor/2024/04/teen-social-use-mental-health
  3. Critics of the impact of smartphone use and social media on teen mental health include Jonathan Haidt, author of The Anxious Generation, and the psychologist Jean Twenge, who has written extensively on the topic.
  4. Pew Research Study that provided data on the use of Chatbots:
    • https://www.pewresearch.org/science/2025/09/17/ai-in-americans-lives-awareness-experiences-and-attitudes/
  5. Research supporting the potential mental health benefits of Chatbots:
    • Survey data from several sources suggest that there is a portion of the population that turns to Chatbots for social support and mental health concerns. See recent data compiled by the American Psychological Association, which shows not only increased use of Chatbots but suggests that much of the use is for emotional support and mental health related issues: https://www.apa.org/monitor/2026/01-02/trends-digital-ai-relationships-emotional-connection.
    • A 2024 review of available research on Chatbots suggested potential benefits from their use (note: this review relied on earlier studies; as a result, the Chatbots utilized were more rudimentary than those that exist currently): https://pubmed.ncbi.nlm.nih.gov/38934788/
    • A second review, also from 2024, looked at a slightly larger pool of studies and also concluded that Chatbots showed potential benefits in terms of improving mental health: https://www.mdpi.com/2076-3417/14/13/5889
    • A study showing the benefits of a therapy Chatbot in reducing symptoms of major depression and anxiety: https://ai.nejm.org/browse/ai-article-type/original-article
  6. Science Article on AI and Sycophancy: https://pmc.ncbi.nlm.nih.gov/articles/PMC10291862/
  7. Preliminary research suggesting that AI, and more specifically Chatbots, can have a negative impact on mental health:
    • https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in-mental-health-care. This study found that Chatbots stigmatized certain disorders and offered problematic advice when presented with scenarios that a therapist might encounter.
    • This research raised concerns that Chatbots committed multiple ethical violations, including mishandling crisis situations, reinforcing negative beliefs, and creating a false sense of empathy: https://www.brown.edu/news/2025-10-21/ai-mental-health-ethics
    • Additional articles and studies that have raised concerns about the impact of Chatbots on mental health:
      • https://pmc.ncbi.nlm.nih.gov/articles/PMC12967755/
      • https://psychiatryonline.org/doi/10.1176/appi.pn.2025.10.10.5
      • https://www.newporthealthcare.com/resources/industry-articles/ai-chatbots-teen-mental-health/
  8. Research comparing writing samples and related brain activity for those using and not using AI to facilitate a writing task; see the article “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Tasks.”
  9. Ezra Klein’s New York Times op-ed from March 2026, titled “I Saw Something New in San Francisco”: https://www.nytimes.com/2026/03/29/opinion/ai-claude-chatgpt-gemini-mcluhan.html