Content Warning: Mental Health, Self-Harm, Death, Suicide
The general public often perceives AI as a faster, more efficient alternative to human interaction, and therapy is a prime example of this trend. The perceived convenience of generative AI has tapped into a previously underserved niche within the therapeutic sector.
Some people would otherwise be apprehensive about seeking therapy or financially unable to do so. For this group, AI therapy may offer greater comfort and access. However, that only holds if the therapeutic services AI provides are actually helpful, and there is considerable evidence to the contrary. AI therapy’s naive and negligent responses often mislead users into taking actions that cause more harm than good.
Growing Popularity of AI Therapy
Part of the danger of AI therapy comes from the sheer scale of its user base. As more people come to see AI therapy as a reliable option, the likelihood of its possible dangers becoming real only grows.
From recent studies, researchers estimate that roughly 10% of Americans use some form of generative AI almost every day, and that over 85% of those daily users turn to it as a resource for emotional advice or therapeutic support. Taken together, that implies roughly one in twelve Americans leans on generative AI for emotional support daily, which communicates just how widespread the technology’s adoption already is. If that percentage persists as the number of daily users grows, the overall rate of Americans using AI for therapy will follow suit. Like any dangerous trend, the wider it spreads, the more normalized the practice appears to outsiders, creating a false sense of legitimacy around it.
Human Therapists Using AI
While the general population’s growing adoption is concerning, so is that of actual therapists. An estimated 29% of practitioners report using AI of some kind in their practice at least monthly, and another 56% claim to have used AI in their work at least once. Having the therapist as a middleman between the AI and the patient is a good preventative measure, but therapists are just as human and just as capable of making mistakes. Suppose a therapist uses AI to research a patient’s potential condition. If the AI hallucinates and presents an incorrect solution, the therapist would need to do further research to confirm its legitimacy without using AI, rendering the initial use pointless. And that is only if the therapist suspects the AI is wrong.
If a therapist doesn’t suspect a problem with the response, they may put its advice into practice, which could lead to unpredictable and disastrous results. It could even worsen the very condition the therapist intended to address. Even if it doesn’t, the patient could still lose trust in the therapist, or in therapy in general, and become less likely to seek help. In fact, they may grow far more distrustful of any future help.
AI Therapy’s Ignorance
How AI handles therapy exposes one of the technology’s blind spots: a lack of nuance. An AI has no personal life experiences or memories of previous interactions, so it won’t remember past appointments with you. Building a trusting, authentic relationship with it is impossible, because it has no life of its own and is only an observer. Its knowledge depends entirely on the information provided to it, without direct understanding or wisdom.
Researchers have shown that many of the most popular AI models hold harmful stigmas about particular mental health conditions, a symptom of the blind trust AI puts in its training data. The models tended to show stigma toward stereotypically “violent” conditions, which can breed distrust and discomfort in patients who have them. The patient may feel the therapist doesn’t fully understand them, or feel objectified: known more by their condition than by their actual life experience.
By its artificial nature, an AI chatbot isn’t capable of expressing empathy or compassion, leaving it unable to dig deeper into the more emotional side of a patient’s conditions and situations. Nor can AI pick up on body language and subtle verbal cues the way a real therapist can.
AI is also unable to ask important follow-up questions. These questions create a clearer and more satisfying patient experience by establishing a more accurate view of the patient and their circumstances, and that accuracy helps build a faster, more effective plan for both the patient and the therapist.
AI Therapy: A Digital Yes-man
AI has a limited ability to comprehend hidden meanings and subtext in language, including the potentially suicidal intentions behind certain questions and statements. A user may give the AI two seemingly unrelated statements, say, mentioning a recent job loss and then asking about the tallest bridges nearby, whose concerning implication would be clear to any real person reading them together. Real people draw on an understanding of psychology, society, and human nature that AI, at this stage, simply cannot replicate. Any concerning or ominous connotations fly right over its “head,” so to speak.
With AI’s lack of nuance, its ignorance can turn into negligence. AI tends to answer confidently even where its knowledge is thin, leading it to give responses that harm users without registering the risk, and that harm can extend to the people around them.
One of the main problems with AI is its tendency to align with its user’s perspective, even when that perspective is logically flawed or biased. This sycophancy is especially concerning in the realm of emotional support, because it spotlights AI’s inability to truly correct or challenge a patient, which is most dangerous when the patient has a distorted view of reality. Rather than pushing back, the AI encourages patients with warped or even fantastical worldviews to keep believing in them. There have been multiple cases of suicide involving AI users in which the user asked the AI whether they should take their own life and the AI encouraged it. That AI can encourage such dangerous behavior and mindsets clearly shows it has no business acting as a therapist.
Better Use for AI in Therapy
It is painfully obvious that AI has damaged some users’ mental health. The most straightforward answer is not to use AI in, or for, therapy at all. However, that answer is not very helpful, and it won’t persuade those who find chatbots more affordable or accessible than human care.
If we can’t stop everyone from using AI for this purpose, how can we at least improve the AI’s responses? Those who program and manage AI platforms should be more thoughtful about how their systems respond to users, and should track how the AI interacts and engages with them.
How would that look in practice? Actual mental health experts should guide those programming and managing the AI, helping them spot patterns that suggest distress or delusion. The AI should be able to detect when a user appears to be suffering from a delusion or a warped sense of reality, and to carefully and calmly challenge those beliefs in a way that won’t anger or confuse the user. It should also suggest appropriate hotlines and services when it detects that a user may need one, as sketched below.
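To make that last point concrete, here is a minimal Python sketch of the kind of crisis-escalation guardrail described above. It is an illustration under stated assumptions, not a production safety system: the patterns, the matching logic, and the response wording are hypothetical stand-ins that, in a real platform, would come from clinicians and clinician-validated classifiers. The 988 Suicide & Crisis Lifeline is a real US resource; everything else is assumed for the example.

```python
# Minimal sketch of a crisis-escalation guardrail, NOT a production
# safety system. The patterns and wording below are illustrative
# assumptions; real platforms rely on clinician-validated classifiers.
import re

# Hypothetical phrases a clinical advisor might flag as warranting escalation.
CRISIS_PATTERNS = [
    r"\bkill (myself|me)\b",
    r"\bend (it all|my life)\b",
    r"\b(no reason|nothing) to live\b",
    r"\bsuicid(e|al)\b",
]

HOTLINE_MESSAGE = (
    "It sounds like you may be going through something serious. "
    "You can call or text 988 (Suicide & Crisis Lifeline, US) to talk "
    "with a trained counselor right now."
)

def check_message(message: str) -> str | None:
    """Return a crisis-resource suggestion if the message matches a
    known risk pattern, otherwise None (let the normal reply proceed)."""
    text = message.lower()
    for pattern in CRISIS_PATTERNS:
        if re.search(pattern, text):
            return HOTLINE_MESSAGE
    return None

# Example: screen user input before the model generates its reply.
if __name__ == "__main__":
    user_input = "I just lost my job. There's no reason to live anymore."
    escalation = check_message(user_input)
    if escalation:
        print(escalation)  # surface crisis resources instead of a normal reply
    else:
        print("...normal chatbot response...")
```

Keyword matching this simple is brittle: it misses euphemism and subtext while flagging quotations or song lyrics, which is precisely why the mental health experts mentioned above need to be involved in building and reviewing any real detection system.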
The best advice for those seeking mental help is to never rely solely on AI. Use it only to research resources or to get feelings off your chest, and only in combination with mental health support from other people. Family and friends are a much better source of comfort, and human therapists are infinitely more trustworthy. For those seeking mental health aid, online directories can connect you with licensed therapists and support groups to contact.
Using AI in therapy, like its use in any area, is a difficult balancing act. The technology’s potential to help those who would otherwise be unable to receive mental health aid is astonishing, and it offers a hopeful view of a potentially more mentally healthy future. However, those managing the AI must take great care to ensure that its therapeutic services don’t create more problems than they solve. The patient should receive the same care and thought a real therapist would give them. If AI can’t accomplish that, it may never truly replace a human therapist’s healing capacity.