mensetmanusman 4 days ago

What if it works a third as well as a therapist but is 20 times cheaper?

What word should we use for that?

inetknght 4 days ago | parent | next [-]

> What if it works a third as well as a therapist but is 20 times cheaper?

When there are studies that show it, perhaps we can have that conversation.

Until then: I'd call it "wrong".

Moreover, there's a lot more that needs to be asked before you can demand a one-word summary that disregards all nuance.

- can the patient use the AI therapist on their own devices, without any business looking at the data and without a network connection? Keep in mind that many patients won't have access to the internet.

- is the data collected by the AI therapist usable in court? Keep in mind that therapists often must disclose to the patient what sort of information would be usable, and what information they themselves are required to report. Also keep in mind that AIs have, thus far, been generally unable to competently prevent giving dangerous or deadly advice.

- is the AI therapist going to know when to suggest the patient talk to a human therapist? Therapists can have conflicts of interest (among other problems) or be unable to help the patient, and can tell the patient to find a new therapist and/or refer the patient to a specific therapist.

- does the AI therapist refer people to business-preferred therapists? Imagine an insurance company providing an AI therapist that only recommends people talk to therapists in-network instead of considering any licensed therapist (regardless of insurance network) appropriate for the kind of therapy; that would be a blatant conflict of interest.

Just off the top of my head; there are no doubt plenty of other, even bigger, issues to consider for AI therapy.

Ukv 4 days ago | parent [-]

Relevant RCT results I saw a while back seemed promising: https://ai.nejm.org/doi/full/10.1056/AIoa2400802

> can the patient use the AI therapist on their own devices, without any business looking at the data and without a network connection? Keep in mind that many patients won't have access to the internet.

Agree that data privacy would be one of my concerns.

In terms of accessibility, while availability to those without network connections (or a powerful computer) should be an ideal goal, I don't think it should be a blocker on such tools existing when for many the barriers to human therapy are considerably higher.
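For what it's worth, fully offline use is already feasible with small open-weights models. Below is a minimal sketch of such a chat loop, assuming a recent Hugging Face transformers install; the model choice and system prompt are illustrative assumptions, not recommendations, and the weights still require one initial download before going offline:

    # Minimal sketch: a fully local chat loop with a small open-weights model.
    # Assumes `pip install transformers torch`; model and prompt are illustrative.
    from transformers import pipeline

    # Weights download once; after that this runs with no network connection,
    # so no business ever sees the conversation.
    chat = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

    history = [{"role": "system", "content":
                "You are a supportive listener, not a licensed therapist. "
                "Encourage seeking professional help for anything serious."}]

    while True:
        user = input("> ")
        if user.strip().lower() in {"quit", "exit"}:
            break
        history.append({"role": "user", "content": user})
        # The pipeline accepts chat-format messages and returns the extended
        # conversation; the last message is the assistant's reply.
        out = chat(history, max_new_tokens=256)
        reply = out[0]["generated_text"][-1]["content"]
        print(reply)
        history.append({"role": "assistant", "content": reply})

A model this small runs on a modest laptop CPU, which is roughly the bar for "works without a powerful computer"; whether its output is safe enough for this use case is, of course, the open question of this whole thread.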

lupire 4 days ago | parent | next [-]

I see an abstract and a conclusion that is an opaque wall of numbers. Is the paper available?

Is the chatbot replicable from sources?

The authors of the study highlight the extreme unknown risks: https://home.dartmouth.edu/news/2025/03/first-therapy-chatbo...

inetknght 4 days ago | parent | prev [-]

> In terms of accessibility, I don't think it should be a blocker on such tools existing

I think that we should solve for the former (which is arguably much easier and cheaper to do) before the latter (which is barely even studied).

Ukv 4 days ago | parent [-]

Not certain which two things you're referring to by former/latter:

"solve [data privacy] before [solving accessibility of LLM-based therapy tools]": I agree - the former seems a more pressing issue and should be addressed with strong data protection regulation. We shouldn't allow therapy chatbot logs to be accessed by police and used as evidence in a crime.

"solve [accessibility of LLM-based therapy tools] before [such tools existing]": It should be a goal to improve further, but I don't think it makes much sense to prohibit the tools based on this factor when the existing alternative is typically less accessible.

"solve [barriers to LLM-based therapy tools] before [barriers to human therapy]": I don't think blocking progress on the latter would make the former happen any faster. If anything I think these would complement each other, like with a hybrid therapy approach.

"solve [barriers to human therapy] before [barriers to LLM-based therapy tools]": As above I don't think blocking progress on the latter would make the former happen any faster. I also don't think barriers to human therapy are easily solvable, particularly since some of it is psychological (social anxiety, or "not wanting to be a burden").

zaptheimpaler 4 days ago | parent | prev | next [-]

This is the key question IMO, and one good answer is in this recent video about a case of ChatGPT helping someone poison themselves [1].

A trained therapist will probably not tell a patient to take “a small hit of meth to get through this week”. A doctor may be unhelpful or wrong, but they will not instruct you to replace salt with NaBr and poison yourself. "A third as well as a therapist" might be true on average, but the suitability of this thing cannot be reduced to averages. Trained humans don't make insane mistakes like that, and they know when they are out of their depth and need to consult someone else.

[1] https://www.youtube.com/watch?v=TNeVw1FZrSQ

_se 4 days ago | parent | prev | next [-]

"A really fucking bad idea"? It's not one word, but it is the most apt description.

ipaddr 4 days ago | parent [-]

What if it works 20x better? For example, in cases of patients who are afraid of talking to professionals, I could see this working much better.

jakelazaroff 4 days ago | parent | next [-]

> What if it works 20x better?

But it doesn't.

prawn 4 days ago | parent | prev | next [-]

Adjust regulation when that's the case? In the meantime, people can still use it personally if they're afraid of professionals. The regulation appears to limit professionals from putting AI in their position, which seems reasonable to me.

throwaway291134 4 days ago | parent | prev [-]

Even if you're afraid of talking to people, trusting OpenAI or Google with your thoughts over a professional who'll lose his license if he breaks confidentiality is no less of "a really fucking bad idea".

6gvONxR4sf7o 4 days ago | parent | prev | next [-]

Something like this can only really be worth approaching if there were an analog to losing your license for it. If a therapist screws up badly enough once, I'm assuming they can lose their license for good. If people want to replace them with AI, then screwing up badly enough should similarly lose that AI the ability to practice for good. I can already imagine companies behind these things saying "no, we've learned, we won't do it again, please give us our license back" just like a human would.

But I can't imagine companies going for that. Everyone seems to want to scale the profits but not accept the consequences of the scaled risks, and increased risk is basically what working a third as well amounts to.

lupire 4 days ago | parent [-]

AI gets banned for life: tomorrow a thousand more new AIs appear.

pawelmurias 4 days ago | parent | prev | next [-]

You could talk to a stone for even cheaper with way better effects.

knuppar 4 days ago | parent | prev | next [-]

Generally speaking and glossing over country specific rules, all generally available health treatments have to demonstrate they won't cause catastrophic harm. This is a harness we simply can't put around LLMs today.

fzeroracer 4 days ago | parent | prev | next [-]

If it works 33% of the time and drives people to psychosis the other 67% of the time, what word would you use for that?

BurningFrog 4 days ago | parent | prev | next [-]

Last I heard, most therapy doesn't work that well.

amanaplanacanal 4 days ago | parent | next [-]

If you have some statistics you should probably post a link. I've heard all kinds of things, and a lot of them were nothing like factual.

BurningFrog 3 days ago | parent [-]

I don't, but what I remember from when I looked into this is that usually, people have the same problems after the therapy as they had before. Some get better, some get worse, and it's hard to tell what's a real effect.

One exception was certain kinds of CBT, for certain kinds of people.

baobabKoodaa 4 days ago | parent | prev [-]

Then it will be easy to work at least 1/3 as well as that.

thrown-0825 4 days ago | parent | prev | next [-]

Just self-diagnose on TikTok, it's 100x cheaper.

randall 4 days ago | parent | prev | next [-]

i’ve had a huge amount of trauma in my life and i find myself using chat gpt as kind of a cheater coach thing where i know i’m feeling a certain way, i know it’s irrational, and i don’t really need to reflect on why it’s happening or how i can fix it, and i think for that it’s perfect.

a lot of people use therapists as sounding boards, which actually isn’t the best use of therapy imo.

moooo99 4 days ago | parent | next [-]

Probably comes down to what issue people have. For example, if you have anxiety and/or OCD, having a "therapist" always at your disposal is more likely to be damaging than beneficial, especially considering how easily basically all models tip over and confirm anything you throw at them.

smt88 4 days ago | parent | prev | next [-]

Your use case is very different from someone selling you ChatGPT as a therapist and/or telling you that it's a substitute for other interventions.

lupire 4 days ago | parent | prev [-]

What's a "cheater coach thing"?

Denatonium 4 days ago | parent | prev | next [-]

Whiskey

filoeleven 3 days ago | parent | prev | next [-]

Big if.

pengaru 4 days ago | parent | prev | next [-]

[flagged]

pessimizer 4 days ago | parent | prev [-]

Based on the Dodo Bird Conjecture*, I don't even think there's a reason to believe that AI would do any worse than human therapists. It might even be better, because the distressed person might hold less back from a soulless machine than they would from a flesh-and-blood person. Not that this is rational, because everything they tell an AI therapist can be logged, saved forever, and combed through.

I think that ultimately the word we should use for this is "lobbying." If AI can't be considered therapy, that means that a bunch of therapists, no more effective than Sunday school teachers, working from extremely dubious frameworks** will not have to compete with it for insurance dollars or government cash. Since that cash is a fixed pool (or really a shrinking one), the result is that far fewer people will get any mental illness treatment at all. In Chicago, virtually all of the city mental health services were closed by Rahm Emanuel. I watched a man move into the doorway of an abandoned building across from the local mental health center within weeks after it had been closed down and leased to a "tech incubator." I wondered if he had been a patient there. Eventually, after a few months, he was gone.

So if I could ask this question again, I'd ask: "What if it works 80%-120% as well as a therapist but is 100 or 1000 times cheaper?" My tentative answer would be that it would be suppressed by lobbyists employed by some private equity rollup that has already, or will soon have, turned 80% of therapists into even lower-paid gig workers. The place you would expect this to happen first would be Illinois, because it is famously one of the most corruptly governed states in the country.***

Our current governor, absolutely terrible but at the same time the best we've had in a long while, tried to buy Obama's Senate seat from a former Illinois governor turned goofy national cultural figure and Trump ass-kisser in a ploy to stay out of prison (which ultimately delivered). You can probably listen to the recordings now, unless they've been suppressed. I had a recording somewhere years ago, because I worked in a state agency under Blagojevich and followed everything in real time (including pulling his name off of the state websites I managed the moment he was impeached; we were all gathered around the television in a conference room).

edit: feel like I have to add that this comment was written by me, not AI. Maybe I'm flattering myself to think anybody would make the mistake.

-----

[*] Westra, H. A. (2022). The implications of the Dodo bird verdict for training in psychotherapy: prioritizing process observation. Psychotherapy Research, 33(4), 527–529. https://doi.org/10.1080/10503307.2022.2141588

[**] At least Freud is almost completely dead, although his legacy blackens world culture.

[***] Probably the horrific next step is that the rollup lays off all the therapists and has them replaced with an AI they own, after lobbying against the thing that they previously lobbied for. Maybe they sell themselves to OpenAI or Anthropic or whoever, and let them handle that phase.