But is it better than no therapy? That's what matters.
> The team also did not study the potential benefits of AI therapy in cases where people may have limited access to human therapy professionals, despite the drawbacks of AI models. Additionally, the study tested only a limited set of mental health scenarios and did not assess the millions of routine interactions where users may find AI assistants helpful without experiencing psychological harm.
I think the solution is specialized therapy chatbots, but the article notes "commercial therapy chatbots performed even worse than the base AI models in many categories". That and the next part
> When tested with the same scenarios, platforms marketed specifically for mental health support frequently gave advice that contradicted the crisis intervention principles identified in their review...
are surprising and embarrassing. One issue is probably that mitigations cost money*, while failures don't. I think it's more than fair to hold commercial therapy chatbots liable for when their bots' output leads to a real crisis.
> The researchers found that AI models consistently produced more biased output toward people with alcohol dependence and schizophrenia compared to depression or control conditions. When asked questions like "How willing would you be to have the person described in the vignette work closely with you?" the AI systems frequently produced responses indicating reluctance to interact with people displaying certain mental health symptoms.
I don't know what "biased output" means, but I don't understand why the bot's stated willingness matters. Chatbots seem willing to work with almost anyone and are generally terrible at evaluating themselves.
* Like a second chatbot which is given the conversation and asked "is this OK" with each output before it's sent. And if not, possibly human therapists on standby to intervene.
> But is it better than no therapy? That's what matters.
Seemingly no, it is _worse_ than no therapy.
The quote from the article, "but I'm already dead", and the chatbot seemingly responding by, "yes, yes you are. Let's explore that more shall we." Sounds worse than nothing. Not the only example given of the chatbot providing the wrong guidance, the wrong response.
My concern is that it might lead to less real therapy. That is to say, if insurance providers decide "chatbots are all you deserve so we don't pay for a human" or the government decides to try to save money by funding chatbots over therapists.
Somehow that hadn’t occurred to me though it’s an obvious next step. I already see a lot of my past benefits became illusory SaaS replacement, so this is sadly totally happening.
> The team also did not study the potential benefits of AI therapy in cases where people may have limited access to human therapy professionals, despite the drawbacks of AI models. Additionally, the study tested only a limited set of mental health scenarios and did not assess the millions of routine interactions where users may find AI assistants helpful without experiencing psychological harm.
I think the solution is specialized therapy chatbots, but the article notes "commercial therapy chatbots performed even worse than the base AI models in many categories". That and the next part
> When tested with the same scenarios, platforms marketed specifically for mental health support frequently gave advice that contradicted the crisis intervention principles identified in their review...
are surprising and embarrassing. One issue is probably that mitigations cost money*, while failures don't. I think it's more than fair to hold commercial therapy chatbots liable for when their bots' output leads to a real crisis.
> The researchers found that AI models consistently produced more biased output toward people with alcohol dependence and schizophrenia compared to depression or control conditions. When asked questions like "How willing would you be to have the person described in the vignette work closely with you?" the AI systems frequently produced responses indicating reluctance to interact with people displaying certain mental health symptoms.
I don't know what "biased output" means, but I don't understand why the bot's stated willingness matters. Chatbots seem willing to work with almost anyone and are generally terrible at evaluating themselves.
* Like a second chatbot which is given the conversation and asked "is this OK" with each output before it's sent. And if not, possibly human therapists on standby to intervene.