josefritzishere 7 hours ago

It continues to amaze me how recklessly some people cram AI into spaces where it performs poorly and the consequences include death.

y-c-o-m-b 5 hours ago | parent | next [-]

As a software dev who uses it and observes the many errors it makes on a daily basis, I definitely treat the output with much greater skepticism than the average person I speak with. If you're used to it providing relatively accurate results for surface-level, Google-esque searches, it makes sense why you'd weight it as an "expert" rather than a "tool that needs verification". I understand why people fall into this mindset.

I used ChatGPT to do a valve adjustment on an engine, a task I'd never done before. I didn't just accept the torque values and procedure it gave me, though, because I know better from my experience with it as a dev. I cross-referenced it all with YouTube videos, forum posts, and instruction manuals (where available) to make sure the job was A) doable for a non-mechanic like me and B) done correctly. Thanks to one YouTube video (which I checked against other sources), I discovered the valve clearance values ChatGPT recommended were slightly off.

I think the average Joe would assume those values were correct and run with them.

rectang 7 hours ago | parent | prev | next [-]

If the AI gets attached to a health insurer (not the case here, as far as I know), I would expect it to make decisions aligned with the company's incentive to weed out unprofitable patients. AI is not a human who takes a Hippocratic oath; it can be more easily manipulated into performing unethical acts.

stvltvs 7 hours ago | parent | next [-]

AI is an overloaded term, so I'm not sure whether insurers are using LLMs or more traditional ML, but they are already using "AI" to deny claims.

https://www.liveinsurancenews.com/health-insurance-claims-de...

PUSH_AX 7 hours ago | parent | prev [-]

I don't think anyone would use an AI with such a severe conflict of interest, unless it was completely hidden from the user.

rectang 5 hours ago | parent [-]

With an integrated insurer/provider, they just have to make primary care scarce, so that it takes months to get an appointment, and then offer an AI doctor as an option. Not all patients have to use it for it to be cost-effective.

TZubiri 7 hours ago | parent | prev [-]

But it doesn't actually perform poorly; it's just that the stakes are very high and it's a highly regulated environment.

Most physicians I know use ChatGPT, though of course its use is guided by an expert, not by the patient, and it is not fully autonomous.