Newsletter Subject

The chatbot will see you now

From

vox.com

Email Address

newsletter@vox.com

Sent On

Wed, Dec 13, 2023 10:44 PM

Email Preheader Text

The risky, but not useless, pursuit of AI therapy bots. Chatbot therapy is risky. It’s also not useless.

The risky, but not useless, pursuit of AI therapy bots

Chatbot therapy is risky. It’s also not useless.

I didn’t find a therapist when I first felt I might need one, nor when I finally found the energy to start Googling the therapists with offices near me. I didn’t find one months later when, after glancing at the results of my depression screening, my physician delayed her next appointment, pulled up a list of therapists, and helped me send emails to each of them asking if they were taking on new patients. My search ended a year later, thanks to a friend who was moving away and gave me the name of the person who had been treating her.

I was fortunate: My full-time job included health insurance, I lived in an area with many mental health professionals, and I had the means to consider therapists who were out of network. Many people trying to get mental health care do so without any of the institutional, social, or financial resources I had. This lack of access, fueled by a nationwide mental health crisis and a shortage of therapists in the US — not to mention a health care system that can, for many, make it extremely difficult to find an in-network provider — is a problem that urgently needs solutions. As with any such problem, there are people out there who say the solution is technology.

Enter AI. As generative AI chatbots have rolled out to a wider range of users, some have started using readily available, multipurpose tools like ChatGPT as therapists. Vice spoke to some of these users earlier this year, noting that anecdotal reports of people praising their experiences with chatbots had spread through social media. One Redditor even wrote a guide to “jailbreaking” ChatGPT in order to get around the chatbot’s guardrails against providing mental health advice.

But ChatGPT is not built to be anyone’s therapist. It’s not bound by the privacy or accountability requirements that guide the practice and ethics of human therapists. And while there are consequences when a chatbot, say, fabricates a source for a research paper, they are not nearly as serious as the potential harm of a chatbot providing dangerous or inaccurate medical advice to someone with a serious mental health condition.

That doesn’t necessarily mean AI is useless as a mental health resource. Betsy Stade, a psychologist and postdoctoral researcher at the Stanford Institute for Human-Centered AI, says that any analysis of AI and therapy should be framed around the same metric used in psychology to evaluate a treatment: Does it improve patient outcomes? Stade, the lead author of a working paper on the responsible incorporation of generative AI into mental health care, is optimistic that AI can help patients and therapists receive and provide better care, with better outcomes. But it’s not as simple as firing up ChatGPT. If you have questions about where AI therapy stands now — or what it even is — we’ve got a few answers.

What is an AI therapist?

The term “AI therapist” has been used to refer to a couple of different things. First, there are dedicated applications designed specifically to assist in mental health care, some of which are available to the public and some of which are not. And then there are AI chatbots pitching themselves as something akin to therapy. The dedicated apps existed long before tools like ChatGPT.
Woebot, for example, is a service launched in 2017 that provides assistance based on cognitive behavioral therapy; it gained popularity during the pandemic as a mental health aid that was easier and cheaper to access than therapy. More recently, there has been a proliferation of free or cheaper-than-therapy chatbots that can hold uncannily conversational interactions, thanks to large language models like the one that underpins ChatGPT.

Some people have turned to this new generation of AI-powered tools for mental health support, a task they were not designed to perform. Others have done so unwittingly: Last January, the co-founder of the mental health platform KoKo announced that it had provided AI-created responses to thousands of users who thought they were speaking to a real human being.

It’s worth noting that the conversation around chatbots and therapy is happening alongside research into roles AI might play in mental health care beyond mimicking a therapy session. For instance, AI tools could help human therapists organize their notes and ensure that standards for proven treatments are upheld, something that has a track record of improving patient outcomes.

Why do people like chatbots for therapy, even if they weren’t designed for it?

There are a few hypotheses about why so many people seeking therapy respond to AI-powered chatbots. Maybe they find emotional or social support from these bots. But the level of support probably differs from person to person, and it is certainly influenced by their mental health needs and their expectations of what therapy is, as well as what an app might be able to provide for them.

Therapy means a lot of different things to different people, and people come to therapists for a lot of different reasons, says Lara Honos-Webb, a clinical psychologist who specializes in ADHD and the co-founder of a startup aimed at helping people manage the condition. Those who have found ChatGPT useful, she said, might be approaching these tools at the level of “problem, solution”: Tools like this can seem pretty good at reframing thoughts or providing “behavioral activation,” such as a list of healthy activities to try.

Stade added that, from a research perspective, experts don’t really know what it is that people feel is working for them here. “Beyond super subjective, qualitative reports of what a few people are doing, and then some people posting on Reddit about their experiences, we actually don’t have a good accounting of what’s happening out there,” she said.

So what are the risks of chatbot therapy?

There are some obvious concerns. Privacy is a big one: That includes the handling of the training data used to make generative AI tools better at mimicking therapy, as well as the privacy of users who end up disclosing sensitive medical information to a chatbot while seeking help. There are also the biases built into many of these systems as they stand today, which often reflect and reinforce the larger systemic inequalities that already exist in society.

But the biggest risk of chatbot therapy — whether it’s poorly conceived or provided by software that was not designed for mental health — is that it could hurt people by failing to provide good support and care. Therapy is more than a chat transcript and a set of suggestions.
Honos-Webb, who uses generative AI tools like ChatGPT to organize her thoughts while writing articles on ADHD but not in her practice as a therapist, noted that therapists pick up on a lot of cues and nuances that AI is not prepared to catch. Stade, in her working paper, notes that while large language models have a “promising” capacity to conduct some of the skills needed for psychotherapy, there’s a difference between “simulating therapy skills” and “implementing them effectively.” She flagged specific concerns about how these systems might handle complex cases, including those involving suicidal thoughts, substance abuse, or particular life events.

Honos-Webb gave the example of an older woman who recently developed an eating disorder. One level of treatment might focus specifically on the behavior: If someone isn’t eating, what might help them eat? But a good therapist will pick up on more than that. Over time, that therapist and patient might connect the disorder to recent life events: Maybe the patient’s husband recently retired, and she’s angry because suddenly he’s home all the time, taking up her space.

“So much of therapy is being responsive to emerging context, what you’re seeing, what you’re noticing,” Honos-Webb explained. And the effectiveness of that work is directly tied to the developing relationship between therapist and patient.

But can AI help solve the crisis of access to mental health care?

Implemented ethically, AI could become a valuable tool for helping people get better results when they seek mental health care. But Stade noted that the causes of this crisis reach well beyond the realm of technology, and solving it will take more than a new app. When I asked Stade about AI’s role in addressing the access crisis in US mental health care, she said: “I believe we need universal health care. There’s so much outside the AI space that needs to happen.”

“That said,” she added, “I do think that these tools have some exciting opportunities to expand and fill gaps.”
More from Vox:

- Starbucks has lost $11 billion in market value, and not because of boycotts: Starbucks’s messy December, explained.

- We’re still in a fight for survival when it comes to AI safety: President Biden’s executive order on artificial intelligence was criticized by many for overreaching, but the danger from uncontrolled AI progress is real.

- Plagiarism doesn’t need AI to thrive online: A YouTuber’s deep dive on plagiarism tries to make viewers care when creators steal content.

- Qubit by qubit, the quantum computers of tomorrow are coming into being: The quantum computing industry has a road map to the future — but can it reach its destination?

- Local police should not be your go-to source for iPhone safety news: A warning about the NameDrop feature on iOS 17 is just the latest in a long history of misleading Facebook posts from law enforcement.

Support our work: Vox Technology is free for all, thanks in part to financial support from our readers. Will you join them by making a gift today?

Listen to this: Long live your dog. A drug that aims to increase life expectancy for dogs is getting closer to market. But pet ethicists aren’t sure it’s great news for man’s best friend. Listen on Apple Podcasts.

This is cool: There’s a “Creep” cover for every mood.
