PeterCorless 4 days ago

Correct. It is more a provider-oriented proscription ("You can't say your chatbot is a therapist."). It is not a limitation on usage. You can still, for now, slavishly fall in love with your AI and treat it as your best friend and therapist.

There is a specific section that relates to how a licensed professional can use AI:

Section 15. Permitted use of artificial intelligence.

(a) As used in this Section, "permitted use of artificial intelligence" means the use of artificial intelligence tools or systems by a licensed professional to assist in providing administrative support or supplementary support in therapy or psychotherapy services where the licensed professional maintains full responsibility for all interactions, outputs, and data use associated with the system and satisfies the requirements of subsection (b).

(b) No licensed professional shall be permitted to use artificial intelligence to assist in providing supplementary support in therapy or psychotherapy where the client's therapeutic session is recorded or transcribed unless:

(1) the patient or the patient's legally authorized representative is informed in writing of the following:

(A) that artificial intelligence will be used; and

(B) the specific purpose of the artificial intelligence tool or system that will be used; and

(2) the patient or the patient's legally authorized representative provides consent to the use of artificial intelligence.

Source: Illinois HB1806

https://www.ilga.gov/Legislation/BillStatus/FullText?GAID=18...

janalsncm 4 days ago | parent | next [-]

I went to the doctor and they used some kind of automatic transcription system. Doesn’t seem to be an issue as long as my personal data isn’t shared elsewhere, which I confirmed.

Whisper is good enough these days that it can be run on-device with reasonable accuracy, so I don't see an issue.
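(For illustration only, a minimal on-device sketch, assuming the open-source openai-whisper Python package and a local recording; the model size and file name are placeholders, not anything from this thread.)

    # Minimal sketch: local transcription with the openai-whisper package
    # (pip install openai-whisper). Model size and file name are placeholders.
    import whisper

    model = whisper.load_model("base")                # weights are cached locally after the first download
    result = model.transcribe("visit_recording.wav")  # inference runs on this machine; no audio is uploaded
    print(result["text"])

Smaller models like "base" run comfortably on CPU; the larger ones improve accuracy at the cost of speed.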

WorkerBee28474 4 days ago | parent [-]

Last I checked, the popular medical transcription services did send your data to the cloud and run models there.

ceejayoz 4 days ago | parent [-]

Yes, but with extra contracts and rules in place.

lokar 4 days ago | parent [-]

At least in the US I think HIPPA would cover this, and IME medical providers are very careful to select products and services that comply.

heyjamesknight 4 days ago | parent | next [-]

Yes, but HIPAA is notoriously vague with regard to what actual security measures have to be in place. It's more of an agreement between parties as to who is liable in case of a breach than it is a specific set of guidelines like SOC 2.

If your medical files are locked in the trunk of a car, that’s “HIPAA-compliant” until someone steals the car.

fc417fc802 3 days ago | parent [-]

I think that's a good thing. I don't want a specific but largely useless checklist that absolves the party that ought to be held responsible. A hard guarantee of liability is much more effective at getting results.

It would be nice to extend the approximate equivalent of HIPAA to all personal data processing in all cases with absolutely zero exceptions. No more "oops we had a breach, pinky promise we're sorry, don't forget to reset all your passwords".

heyjamesknight 3 days ago | parent [-]

No disagreement. It's just something I point out when people are concerned about "HIPAA compliance."

My experience is that people tend to think it's some objective level of security. But it's really just the willingness to sign a BAA and then take responsibility for any breaches.

loeg 4 days ago | parent | prev [-]

It's "HIPAA."

esseph 4 days ago | parent [-]

It was just last week that I learned about HIPAA Hippo!

romanows 4 days ago | parent | prev [-]

Yes, but also "An... entity may not provide... therapy... to the public unless the therapy... services are conducted by... a licensed professional".

It's not obvious to me as a non-lawyer whether a chat history could be ruled to be "therapy" in a courtroom. If so, this could count as a violation. There is probably already a lot of law about lawyers and doctors cornered into giving advice at parties that might apply here (e.g., maybe a disclaimer is enough to work around the prohibition)?

germinalphrase 4 days ago | parent | next [-]

Functionally, it probably amounts to two restrictions: a chatbot cannot formally diagnose & a chatbot cannot bill insurance companies for services rendered.

lupire 4 days ago | parent | next [-]

Most "therapy" services are not providing a diagnosis. Diagnosis comes from an evaluation before therapy starts, or sometimes not at all. (You can pay to talk to someone without a diagnosis.)

The prohibition is mainly on accepting payment for an advertised therapy service without following the rules of therapy (licensure, AI guidelines).

Likewise for medicine and law.

bluefirebrand 4 days ago | parent [-]

Many therapy services have the ability to diagnose as therapy proceeds, though.

gopher_space 4 days ago | parent | prev [-]

After a bit of consideration I’m actually ok with codifying Bad Ideas. We could expand this.

fc417fc802 3 days ago | parent | prev | next [-]

These things usually (not a lawyer, though) come down to the claims being actively made. For example, "engineer" is often (typically?) a protected title, but that doesn't mean you'll get in trouble for drafting up your own blueprints, even for other people, for money. You just need to make it abundantly clear that you aren't a licensed engineer.

I imagine "Pay us to talk to our friendly chat bot about your problems. (This is not licensed therapy. Seek therapy instead if you feel you need it.)" would suffice.

pessimizer 4 days ago | parent | prev [-]

For a long time, Mensa couldn't give people IQ scores from the tests they administered because somehow, legally, they would be acting medically. This didn't change until about 10 years ago.

Defining non-medical things as medicine and requiring approval by particular private institutions in order to do them is simply corruption. I want everybody to get therapy, but there's no difference in outcomes whether you get it from a licensed therapist using some whacked out paradigm that has no real backing, or from a priest. People need someone to talk to who doesn't have unclear motives, or any motives really, other than to help. When you hand money to a therapist, that's nearly what you get. A priest has dedicated his life to this.

The only problem with therapists in that respect is that there's an obvious economic motivation to string a patient along forever. Insurance helps that by cutting people off at a certain point, but that's pretty brutal and not motivated by concern for the patient.

watwut 4 days ago | parent [-]

If you think human therapists intentionally string patients along forever, wait until you see what tech people can achieve with gamified therapists literally A/B tested to string people along. Oh, and we will then blame the people for "choosing" to engage with that.

Also, the proposition is dubious, because there are waitlists for therapists. Plus, a therapist can actually lose their license, while a chatbot can't, no matter how bad it gets.

fl0id 4 days ago | parent [-]

This. At least here therapists don’t have a problem getting new patients.