dcrazy 5 days ago

The counterargument is that this is just a training problem, and IMO it's a fair point. Neural nets are used as classifiers all the time; it's reasonable to think that sufficient training data could produce a model that follows the professional standards of care in any situation you hand it.
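
For concreteness, here is a minimal sketch of the supervised-classifier training that argument leans on (scikit-learn, with a synthetic dataset standing in for real clinical data; none of the names or numbers below come from the thread):

    # Illustrative only: train a small neural-net classifier on labeled
    # examples and report held-out accuracy. The data is synthetic.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import accuracy_score

    X, y = make_classification(n_samples=5000, n_features=40, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300,
                        random_state=0)
    clf.fit(X_train, y_train)

    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"held-out accuracy: {acc:.3f}")

The catch, which is where the next paragraph picks up, is that a held-out accuracy score only measures agreement with the labels you trained on; it doesn't tell you whether the model meets a professional standard of care.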

The real problem is that we can’t tell when or if we’ve reached that point. The risk of a malpractice suit influences how human doctors act. You can’t sue an LLM. It has no fear of losing its license.

macintux 5 days ago | parent | next [-]

An LLM would, surely, have to:

* Know whether its answers are objectively beneficial or harmful

* Know whether its answers are subjectively beneficial or harmful in the context of the current state of a person it cannot see, cannot hear, cannot understand.

* Know whether the user's questions, over time, trend in the right direction for that person.

That seems awfully optimistic, unless I'm misunderstanding the point, which is entirely possible.

dcrazy 5 days ago | parent [-]

It is definitely optimistic, but I was steelmanning the optimist’s argument.

meroes 5 days ago | parent | prev [-]

Repeating the "sufficient training data" mantra even though doctor-patient confidentiality limits what data is available, and therapy notes, which are often handwritten or incomplete, are far less amenable to training on than X-rays. Pretty bold!