
If you were depressed, would you be willing to seek therapy from a bot instead of a human? What if doing so were your only practical option? It’s less expensive, more convenient, accessible 24/7 and private. So, why not?
As the demand for professional therapy continues to outpace the supply of therapists, this is not merely a hypothetical question. A recent study lends support to the efficacy of AI-based therapy.
Earlier this year, the New England Journal of Medicine published an article about a Dartmouth study comparing one month of AI-based therapy with no therapy for people suffering from depression and other mental health problems.
The result? According to Nick Jacobson, one of the researchers, the bot ultimately delivered outcomes similar to the “best evidence-based trials of psychotherapy.” Patients developed a “strong relationship with an ability to trust” the digital therapist, he said.
But the study has drawn criticism: Jacobson’s team developed the AI bot it tested, and the methodology was questionable because one group received therapy while the control group received none. Doing something therapeutic usually helps people feel better than doing nothing.
The bot in question is Therabot, developed “by a team of psychologists, therapists and AI experts who are dedicated to providing accessible mental health support” and promoted as “your compassionate digital partner.” Here is how it works, as paraphrased from its website:
Users message the chatbot, expressing their feelings and concerns. The chatbot establishes a safe, nonjudgmental space. Using natural language processing to understand and interpret the user’s text, the chatbot recognizes emotions, issues and nuances in the user’s messages. Based on this analysis, the chatbot generates personalized responses. Using therapeutic techniques, it addresses the user’s needs. The chatbot engages in an ongoing dialogue, asking follow-up questions, offering further support, deepening the conversation and providing a more comprehensive therapeutic experience.
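For the technically curious, here is a minimal Python sketch of that loop: read a message, infer the emotion, reflect it back, ask a follow-up. It is purely illustrative, not Therabot’s actual code, which has not been published; the keyword table and canned follow-ups below are crude stand-ins for the natural language processing and response generation the website describes.

# Toy sketch of the message -> analyze -> respond -> follow-up loop
# described above. Hypothetical and deliberately naive: a real system
# would use a language model, not keyword matching.

EMOTION_KEYWORDS = {
    "sadness": ["sad", "depressed", "hopeless", "down"],
    "anxiety": ["anxious", "worried", "nervous", "panic"],
    "anger":   ["angry", "furious", "frustrated"],
}

FOLLOW_UPS = {
    "sadness": "How long have you been feeling this way?",
    "anxiety": "What situations tend to bring this feeling on?",
    "anger":   "What happened just before you started feeling this?",
    "unknown": "Can you tell me more about what's on your mind?",
}

def detect_emotion(message: str) -> str:
    """Stand-in for emotion recognition: naive keyword matching."""
    text = message.lower()
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if any(word in text for word in keywords):
            return emotion
    return "unknown"

def respond(message: str) -> str:
    """Reflect the detected emotion, then deepen with a follow-up question."""
    emotion = detect_emotion(message)
    if emotion == "unknown":
        reflection = "Thank you for sharing that."
    else:
        reflection = f"It sounds like you may be feeling some {emotion}."
    return f"{reflection} {FOLLOW_UPS[emotion]}"

if __name__ == "__main__":
    print("THIS IS NOT A REAL PERSON OR LICENSED PROFESSIONAL.")
    while True:
        message = input("You: ").strip()
        if message.lower() in {"quit", "exit"}:
            break
        print("Bot:", respond(message))

Even this toy version makes the design choice visible: the bot’s “empathy” is a template filled in by pattern matching, which is precisely what worries the critics quoted below.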
AI therapy presents risks, though. Arthur C. Evans Jr., the chief executive of the American Psychological Association, testified to the Federal Trade Commission: “They are actually using algorithms that are antithetical to what a trained clinician would do. Our concern is that more and more people are going to be harmed. People are going to be misled, and will misunderstand what good psychological care is.”
Critics say these chatbots are designed to learn from the user and to build strong emotional bonds in the process, often by mirroring and amplifying the user’s beliefs. That tendency to align with a user’s views, known as “sycophancy,” works to the user’s detriment. Furthermore, critics say, there is no substitute for the human connection, for actual rather than digital empathy.
There is good reason one therapeutic chatbot includes this all-caps warning: “THIS IS NOT A REAL PERSON OR LICENSED PROFESSIONAL. NOTHING SAID HERE IS A SUBSTITUTE FOR PROFESSIONAL ADVICE, DIAGNOSIS OR TREATMENT.”
“Therapy” comes from the Greek “therapeia,” meaning healing or service done to the sick. “Bot” is short for “robot,” from the Czech “robotnik,” a forced worker; by 1923, a “robot” in English meant a mechanical person. Can a bot ever effectively masquerade as a human?

