hackitup7 · 12 hours ago
I've had a similar positive experience and I'm really surprised at the cynicism here. You have a system that is good at reading tons of literature and synthesizing it, which then applies basic logic. What exactly do the cynics think that doctors do? I don't use LLMs as the final say, but I do find them pretty useful as a positive filter / quick gut check.
EagnaIonat · 8 hours ago
This is the crux of the argument in the article:

> get to know your members even before the first claim

Basically, it means selling your data to maximize profit from you and to ensure companies don't take on a burden. You are also not protected by HIPAA when using ChatGPT.
mattmanser · 8 hours ago
Because we've all used LLMs. They make stuff up. Doctors do not. LLMs also agree with you almost all the time. If you ask an AI whether you have in fact been infected by a werewolf bite, it's going to try to find a way to say yes.