The risky, but not useless, pursuit of AI therapy bots.
Chatbot therapy is risky. It's also not useless.

I didn't find a therapist when I first felt I might need one, nor when I finally found the energy to start Googling the therapists with offices near me. I didn't find one months later when, after glancing at the results of my depression screening, my physician delayed her next appointment, pulled up a list of therapists, and helped me send emails to each of them asking if they were taking on new patients. My therapist search ended a year later, when a friend who was moving away gave me the name of the person who had been treating her.

I was fortunate: My full-time job included health insurance, I lived in an area with many mental health professionals, and I had the means to consider therapists who were out of network. Many people trying to get mental health care do so without any of the institutional, social, or financial resources I had. This lack of access — fueled by a nationwide mental health crisis and a shortage of therapists in the US, not to mention a health care system that can, for many, [make it extremely difficult to find an in-network provider]( — is a problem that urgently needs solutions. As with any such problem, there are people out there who say the solution is technology.

Enter AI. As generative AI chatbots have rolled out to a wider range of users, some people have started using readily available, multipurpose tools like ChatGPT as therapists. Vice [spoke to some of these users]( earlier this year, noting that anecdotal reports of people praising their experiences with chatbots had spread through social media. One Redditor even wrote [a guide to "jailbreaking"]( ChatGPT in order to get around the chatbot's guardrails against providing mental health advice.

But ChatGPT is not built to be anyone's therapist. It's not bound by the privacy or accountability requirements that guide the practice and ethics of human therapists.
While there are consequences when a chatbot, say, fabricates a source for a research paper, those consequences are not nearly as serious as the potential harm caused by a chatbot providing dangerous or inaccurate medical advice to someone with a serious mental health condition.

This doesn't necessarily mean that AI is useless as a mental health resource. [Betsy Stade](, a psychologist and postdoctoral researcher at the Stanford Institute for Human-Centered AI, says that any analysis of AI and therapy should be framed around the same metric used in psychology to evaluate a treatment: Does it improve patient outcomes? Stade, who is the lead author of a [working paper on the responsible incorporation of generative AI into mental health care](, is optimistic that AI can help patients and therapists receive and provide better care, with better outcomes. But it's not as simple as firing up ChatGPT. If you have questions about where AI therapy stands now — or what it even is — we've got a few answers.

What is an AI therapist?

The term "AI therapist" has been used to refer to a couple of different things. First, there are dedicated applications designed specifically to assist in mental health care, some of which are available to the public and some not. And then there are AI chatbots pitching themselves as something akin to therapy.

These dedicated apps existed long before tools like ChatGPT. Woebot, for example, is a service launched in 2017 designed to provide assistance based on [cognitive behavioral therapy](; it gained popularity [during the pandemic]( as a mental health aid that was [easier and cheaper to access]( than therapy. More recently, there has been a proliferation of free or cheaper-than-therapy chatbots that can provide uncannily conversational interactions, thanks to large language models like the one that underpins ChatGPT. Some people have turned to this new generation of AI-powered tools for mental health support, a task they were not designed to perform.
Others have done so unwittingly. Last January, the co-founder of the mental health platform Koko [announced]( that it had provided AI-created responses to thousands of users who thought they were speaking to a real human being.

It's worth noting that the conversation around chatbots and therapy is happening alongside research into roles that AI might play in mental health care beyond mimicking a therapy session. For instance, AI tools could help human therapists do things like organize their notes and ensure that standards for proven treatments are upheld, something that has a track record of improving patient outcomes.

Why do people like chatbots for therapy, even if they weren't designed for it?

There are a few hypotheses about why so many people seeking therapy respond to AI-powered chatbots. Maybe they find emotional or social support from these bots. But the level of support probably differs from person to person, and is certainly influenced by their mental health needs and their expectations of what therapy is — as well as what an app might be able to provide for them.

Therapy means a lot of different things to different people, and people come to therapists for a lot of different reasons, says Lara Honos-Webb, a clinical psychologist who specializes in ADHD and the co-founder of a startup aimed at helping those managing the condition. Those who have found ChatGPT useful, she said, might be approaching these tools at the level of "problem, solution." Tools like this might seem pretty good at reframing thoughts or providing "behavioral activation," such as a list of healthy activities to try.

Stade added that, from a research perspective, experts don't really know what it is that people feel is working for them in this case. "Beyond super subjective, qualitative reports of what a few people are doing, and then some people posting on Reddit about their experiences, we actually don't have a good accounting of what's happening out there," she said.
So what are the risks of chatbot therapy?

There are some obvious concerns here. Privacy is a big one: That includes the handling of the training data used to make generative AI tools better at mimicking therapy, as well as the privacy of users who end up disclosing sensitive medical information to a chatbot while seeking help. There are also the [biases built into many of these systems]( as they stand today, which often reflect and reinforce the larger systemic inequalities that already exist in society.

But the biggest risk of chatbot therapy — whether it's poorly conceived or provided by software that was not designed for mental health — is that it could hurt people by not providing good support and care. Therapy is more than a chat transcript and a set of suggestions. Honos-Webb, who uses generative AI tools like ChatGPT to organize her thoughts while writing articles on ADHD but not in her practice as a therapist, noted that therapists pick up on a lot of cues and nuances that AI is not prepared to catch.

Stade, in her working paper, notes that while large language models have a "promising" capacity to conduct some of the skills needed for psychotherapy, there's a difference between "simulating therapy skills" and "implementing them effectively." She noted specific concerns around how these systems might handle complex cases, including those involving suicidal thoughts, substance abuse, or specific life events.

Honos-Webb gave the example of an older woman who recently developed an eating disorder. One level of treatment might focus specifically on that behavior: If someone isn't eating, what might help them eat? But a good therapist will pick up on more than that. Over time, that therapist and patient might make the connection between recent life events and the disorder: Maybe the patient's husband recently retired, and she's angry because suddenly he's home all the time, taking up her space.
"So much of therapy is being responsive to emerging context, what you're seeing, what you're noticing," Honos-Webb explained. And the effectiveness of that work is directly tied to the developing relationship between therapist and patient.

But can AI help solve the crisis of access to mental health care?

Implemented ethically, AI could become a valuable tool for helping people improve their results when seeking mental health care. But Stade noted that the reasons behind this crisis reach well beyond the realm of technology and would require a solution that is not simply a new app. When I asked Stade about AI's role in solving the access crisis in US mental health care, she said: "I believe we need universal health care. There's so much outside the AI space that needs to happen."

"That said," she added, "I do think that these tools have some exciting opportunities to expand and fill gaps."
Vox Media, 1201 Connecticut Ave. NW, Washington, DC 20036. Copyright © 2023. All rights reserved.