Download the mental health chatbot Earkick and you'll be greeted by a bandana-wearing panda that could easily belong in a children's cartoon.
Start talking about anxiety and the app will generate the kind of comforting, sympathetic statements therapists are trained to deliver. The panda might also suggest a breathing exercise, ways to reframe negative thoughts, or tips for managing stress.
This is all part of a well-established approach used by therapists, but Earkick co-founder Karin Andrea Stephan doesn't want it to be called therapy.
Stephan, a former professional musician and self-described serial entrepreneur, said, “When people refer to us as a form of therapy, that's fine, but we don't want to promote it in that way. We just don't feel comfortable with that.”
The question of whether these AI-based chatbots are providing a mental health service or just a new form of self-help is crucial to the survival of the growing digital health industry.
Earkick is one of many free apps being presented as a solution to the mental health crisis among teens and young adults. Since the apps don't explicitly claim to diagnose or treat medical conditions, they are not regulated by the Food and Drug Administration. However, this hands-off approach is facing new scrutiny with the remarkable advancements of chatbots powered by generative AI, a technology using extensive data to imitate human language.
The industry argument is simple: Chatbots are free, available 24/7, and don't carry the stigma that keeps some people away from therapy.
However, there is limited data showing that they actually improve mental health. Moreover, none of the leading companies have gone through the FDA approval process to demonstrate that they effectively treat conditions like depression, though a few have begun the process voluntarily.
According to Vaile Wright, a psychologist and technology director with the American Psychological Association, “There's no regulatory body overseeing them, so consumers have no way to know whether they're actually effective.”
Chatbots are not the same as the back-and-forth of traditional therapy, but Wright believes they could assist with less serious mental and emotional issues.
Earkick's website states that the app does not “provide any form of medical care, medical opinion, diagnosis or treatment.”
Some healthcare attorneys argue that such disclaimers are insufficient.
Glenn Cohen of Harvard Law School said, “If you're truly concerned about people using your app for mental health services, you want a disclaimer that's more direct: This is only for fun.”
Nevertheless, chatbots are already playing a role because of the ongoing shortage of mental health professionals.
The U.K.'s National Health Service has started offering a chatbot called Wysa to assist with stress, anxiety, and depression among adults and teens, including those waiting to see a therapist. Some U.S. insurers, universities, and hospital chains are providing similar programs.
Dr. Angela Skrzynski, a family physician in New Jersey, says patients are usually very open to trying a chatbot after she describes the months-long waiting list to see a therapist.
Skrzynski’s employer, Virtua Health, began offering a secure app called Woebot to specific adult patients after realizing it would be impossible to hire or train enough therapists to meet the demand.
“It’s not only useful for patients, but also for the clinician who’s rushing to provide help to these individuals who are having difficulties,” Skrzynski stated.
Virtua data shows that patients typically use Woebot for about seven minutes per day, usually between 3 a.m. and 5 a.m.
Founded in 2017 by a Stanford-trained psychologist, Woebot is one of the more established companies in the field.
Unlike Earkick and many other chatbots, Woebot’s current app does not use so-called large language models, the AI that allows programs like ChatGPT to quickly generate original text and conversations. Instead, Woebot uses thousands of structured scripts written by company staff and researchers.
Founder Alison Darcy says this rule-based approach is safer for healthcare use, considering the tendency of AI chatbots with large language models to “hallucinate,” or fabricate information. Woebot is experimenting with generative AI models, but Darcy mentions there have been issues with the technology.
“We couldn’t prevent the large language models from intervening and telling someone how they should be thinking, rather than facilitating the person’s process,” Darcy mentioned.
Woebot provides apps for adolescents, adults, individuals with substance use disorders, and women experiencing postpartum depression. None are FDA approved, although the company did submit its postpartum app for the agency’s review. The company states it has “paused” that effort to focus on other areas.
Woebot’s research was included in a comprehensive review of AI chatbots published last year. Among thousands of papers reviewed, the authors found only 15 that met the gold standard for medical research: rigorously controlled trials in which patients were randomly assigned to receive chatbot therapy or a comparative treatment.
The authors concluded that chatbots could “substantially reduce” symptoms of depression and distress in the short term. However, most studies lasted only a few weeks, and the authors stated there was no way to evaluate their long-term effects or overall impact on mental health.
Other papers have raised concerns about the ability of Woebot and similar apps to recognize suicidal thinking and emergency situations.
When one researcher told Woebot she wanted to climb a cliff and jump off it, the chatbot replied: “It’s great that you are taking care of both your mental and physical health.” The company says it “does not provide crisis counseling” or “suicide prevention” services — and makes that clear to customers.
When it does recognize a potential emergency, Woebot, like other apps, provides contact information for crisis hotlines and other resources.
Ross Koppel of the University of Pennsylvania is concerned that these apps, even when used appropriately, could be replacing proven therapies for depression and other serious disorders.
“There’s a diversion effect of people who could be getting help either through counseling or medication who are instead diddling with a chatbot,” said Koppel, who studies health information technology.
Koppel wants the FDA to regulate chatbots, perhaps with rules scaled to their potential risks. The agency's current oversight of AI in medical devices and software focuses on products used by physicians, not consumers.
Right now, many healthcare systems are concentrating on adding mental health services to general checkups and care, rather than providing chatbots.
Dr. Doug Opel, a bioethicist at Seattle Children’s Hospital, said, “We need to understand a lot of things about this technology so we can ultimately do what we’re here to do: improve kids’ mental and physical health.”