Critical AI

STUDENT INSIGHTS: THE DARK SIDE OF CHATBOT THERAPY

By: Rishik Sarkar

Illustration by Rishik Sarkar

[The research, writing, and editing of this post were part of an undergraduate project undertaken for a Rutgers Honors College Seminar in Spring 2023, “Fictions of Artificial Intelligence.” The author’s bio follows the post.]

In this age of ChatGPT and Stable Diffusion, “AI” is everywhere. The technology’s steep rise in popularity in recent years has triggered a hype train of consumers and companies attempting to harness its power in various ways, from autonomous vehicles to personal assistants. Although many applications of data-driven machine learning, such as robotic guide dogs, could benefit humanity, some overreach: chief among the culprits are chatbots in mental healthcare. I believe that current large language models (LLMs) do not belong in the sensitive field of psychiatry, where they end up causing more harm than good.

But first, let us get a crucial question out of the way: do we even need therapy chatbots? If we were to consider only their potential, without discussing ethics, the answer would likely be yes. In their paper, “Mental Health Chatbot for Young Adults With Depressive Symptoms During the COVID-19 Pandemic: Single-Blind, Three-Arm Randomized Controlled Trial,” Yuhao He et al. note that despite China reporting some 95 million people affected by depression, “the use of health services for depressive disorders … has been rather limited, with the access rate of adequate treatment being less than 0.5%” (He et al., 2022, para. 10). As a solution, the authors suggest that a “chatbot is a feasible and engaging digital therapeutic approach that allows easy accessibility and self-guided mental health assistance for young adults with depressive symptoms” (He et al., 2022, para. 5). However, I argue that chatbots do not (yet) possess the attributes required to become effective mental healthcare providers because of glaring issues concerning trust, empathy, and security.

Illustration by Ariel Davis for NPR

Trust is essential for a topic as sensitive as mental health treatment, and the problem of AI “hallucination” makes it very difficult to ascertain the trustworthiness of information generated by chatbots: if you prompt an LLM on a topic, it will often confidently make up information that sounds plausible but is entirely wrong. Abeba Birhane and Deborah Raji illustrate this issue in “ChatGPT, Galactica, and the Progress Trap.” They write, “Galactica and ChatGPT have generated, for example, a ‘scientific paper’ on the benefits of eating crushed glass (Galactica) and a text on ‘how crushed porcelain added to breast milk can support the infant digestive system’ (ChatGPT)” (Birhane & Raji, 2022, para. 4). Although I am admittedly guilty of experimenting with ChatGPT hallucinations to generate hilariously nonsensical directions to fictional restaurants, I recognize that the consequences of this unreliability can be extremely severe in situations involving mental health.

Now assume that we magically resolve this issue and drastically improve the accuracy of chatbots. Does their increased trustworthiness allow them to become viable replacements for therapists? It’s not that simple. Although these hypothetically advanced ML models might be able to present relevant diagnoses and treatment methods for an individual patient by drawing on past data, they are inherently incapable of empathy, which seems intuitively necessary for treating people with mental health problems. Frank Pasquale observes in New Laws of Robotics that, although a human’s use of language is very limited in capacity compared with the millions of correlations chatbots use to make predictions, its shared nature allows for comprehension and debate; he further states that “we should be wary of entrusting AI with the evaluation of humans” until its decision-making processes are “similarly accessible” and can be “challenged” (Pasquale, 2021, p. 216).

Because the outputs of chatbots resemble human speech, it can be easy to mistake their algorithmic conversational ability for sentience, and their hard-coded engagement for solicitude. Pasquale worries that if humans over-rely on synthetic compassion, they may “lose the ability to know and value” the “beautiful risk of real personal interaction” (Pasquale, 2021, p. 217). That kind of interaction can only flourish with someone who genuinely empathizes over the shared experience of being human: not a language model that mathematically replicates psychological concepts and conversation patterns, but a real person who truly cares. A real person (even a paid therapist) has the capacity to spend their time doing something else but chooses to stay and listen instead. This perspective exposes a significant flaw in chatbot therapy: how can a technology that is notoriously hard to interpret and unable to reproduce the core of human empathy replace human therapists in caring for people who need mental healthcare?

Another major issue with chatbot therapists is the deliberate lack of security and privacy in these systems, which poses a critical threat when they handle the confidential and sensitive data of mental health patients. In “Crisis Text Line: A Case Study on Data Ethics, Privacy, and Technopolitics,” Rutgers undergraduate Nidhi Salian highlights the sensitivity of personal medical data, noting that exposed data can be “a lot different than someone just understanding your cholesterol” and pointing out “how disclosures about a person’s HIV status in the 1980s, or involvement with Planned Parenthood today, could put a person at risk” (Salian, 2022, para. 12). Furthermore, in “Artificial Intelligence and Mobile Apps for Mental Healthcare: A Social Informatics Perspective,” Alyson Gamble observes: “Chatbots gather healthcare information about a user… typically with the individual’s expectation of privacy. Some [apps] track location data, permit audio recording, and may be linked to financial information” (Gamble, 2020, p. 103).

While anyone might feel threatened by the possibility of their highly personal information becoming public, such exposure could be especially harmful to mental health patients, who may already be highly vulnerable. That is why the U.S. government has put laws such as the HIPAA Privacy Rule in place to protect the health information of its citizens. However, Gamble (2020) notes that such laws do not apply to AI chatbots, since they are not medically licensed therapy providers.

Salian (2022) gives a prime example of such an ethical breach: Crisis Text Line (CTL), a not-for-profit mental health support service, formed a “data-sharing agreement” with Loris.ai, a for-profit ML startup that planned to train models on the sensitive data collected by the service. CTL had previously informed users via an FAQ that it would “NEVER share data for commercial use, with individuals not associated with a university or research institution, or ‘just because,’” but despite this declaration and the expectations of CTL’s vulnerable users, a whistleblower later revealed that the service “removed any language about prohibitions on commercial use of data from its website” (Salian, 2022, para. 5). Such blatant disregard for privacy not only victimizes people in need of healthcare but can also cause them to lose trust in the healthcare system and become even less likely to seek help. To avoid an Orwellian dystopia in which individual privacy is nonexistent and access to data equals power, the government needs to establish technology laws that prevent the commercial trading of sensitive information.

Now, here’s a conundrum. I myself am a computer science student interested in researching and developing data-driven machine-learning models. So why do I sound so pessimistic? I understand AI’s great potential to benefit society, potentially even in mental healthcare, and I firmly believe there will come a time when we can effectively use chatbots as supplemental therapy resources. At present, however, chatbots simply have too many shortcomings, in trust, accuracy, empathy, and security, to be viable alternatives for people who need mental health support. We still need substantial advances in LLMs and cybersecurity before chatbots can truly be used effectively in psychiatry.

Rishik Sarkar is a rising senior at Rutgers University–New Brunswick studying computer science and cognitive science. He is passionate about exploring the application of machine learning and data analytics in healthcare, particularly in treating mental disorders. He aspires to pursue a Ph.D. in computer science to become an applied ML scientist.
