rectang · 7 hours ago
If the AI gets attached to a health insurer (not the case here as far as I know), I would expect it to make decisions that are aligned with the company's incentive to weed out unprofitable patients. AI is not a human who takes a Hippocratic oath; it can be more easily manipulated into performing unethical acts.
stvltvs · 7 hours ago
AI is an overloaded term, so I'm not sure whether insurers are using LLMs or more traditional ML, but they are already using "AI" to deny claims. https://www.liveinsurancenews.com/health-insurance-claims-de...
PUSH_AX · 7 hours ago
I don't think anyone would use an AI with such a severe conflict of interest, unless this was completely hidden from the user.