zdragnar 5 days ago
Whether it's the Hippocratic oath, the rules of the APA, or any other organization's code, almost all share "do no harm" as a core tenet. LLMs cannot conform to that rule because they cannot distinguish between good advice and enabling bad behavior.
dcrazy 5 days ago
The counterargument is that it's just a training problem, and IMO that's a fair point. Neural nets are used as classifiers all the time; it's reasonable to think that sufficient training data could produce a model that follows the professional standards of care in any situation you hand it. The real problem is that we can't tell when, or if, we've reached that point. The risk of a malpractice suit influences how human doctors act. You can't sue an LLM. It has no fear of losing its license.
glenstein 5 days ago
> LLMs cannot conform to that rule because they cannot distinguish between good advice and enabling bad behavior.

I understand this as a precautionary approach, one that fundamentally prioritizes mitigating bad outcomes, and it's a valuable judgment to that end. But the same statement can also be read as the latest entry in the long tradition of claims that "computers can't do X," and those declarations have less credibility now than ever before. Whether or not you agree that a model can be perfect, or fully aligned with human values as a matter of principle, at a bare minimum models can be and are trained to avoid various forms of harmful discourse, and that training clearly has an effect, judging from the voluminous reports of how differently models behave depending on whether they do or don't have guardrails. So I don't mind it as a precautionary principle, but as an assessment of what computers are capable of in principle, it may be selling them short.
moralestapia 5 days ago
Neither can most of the doctors I've talked to in the past 20 years or so.
SoftTalker 5 days ago
Having an LLM as a friend or therapist would be like having a sociopath in those roles: not that an LLM is necessarily evil or antisocial, but it certainly meets the "lacks a sense of moral responsibility or social conscience" part of the definition.