SecretDreams a day ago

LLMs for medical info are good, but they're easily abusable. I've got a friend who is an anxious mom. They use GPT/Gemini to "confirm" all of their suspicions and justify far more doctor/medical visits than is at all reasonable, while also getting access to more recurring antibiotics than is reasonable. LLMs are basically handing them the gunpowder to waste the doctor's time and slam an already stressed medical system, when most of the time all their kids need is some rest and soup.

ramoz a day ago | parent | next [-]

Yeah, I'm in a particular health community. A lot of anxious individuals (anxious for good reason) end up posting a lot of nonsense derived from self-influenced ChatGPT conversations.

That said, when used as a tool you have power over, ChatGPT has also relieved some of my own anxiety. I've learned a ton thanks to it as well. It's often been more helpful than the doctors and serves as an always-available counsel.

accrual 16 hours ago | parent [-]

Another user above described the curve as K-shaped, and that resonates with me as well. Above a certain line of knowledge and discernment, the user is likely to benefit from the tool. Below the line, the tool can become harmful.

hsuduebc2 a day ago | parent | prev [-]

Yeah, it’s a very powerful tool, and it needs to be used carefully and with intent. People on Hacker News mostly get that already, but for ordinary users it’s a full-on paradigm shift.

It moved from: A very precise source of information, where the hardest part was finding the right information.

To: Something that can produce answers on demand, where the hardest part is validating that information, and knowing when to doubt the answer and force it to recheck the sources.

This happened in a year or two, so I can't really blame them. The truth machine, where you didn't need to focus much on validating the answers, changed rapidly into a slop machine where, ironically, your focus matters much more.

JumpCrisscross 13 hours ago | parent | next [-]

> People on Hacker News mostly get that already

It’s super easy to stop fact-checking these AIs and just trust they’re reading the sources correctly. I caught myself doing it, went back and fact-checked past conversations, and lo and behold, in two cases shit was made up.

These models are built to engage. They’re going to reinforce your biases, even without evidence, because that’s flattering and triggers a dopamine hit.

SecretDreams a day ago | parent | prev [-]

> This happened in a year or two, so I can't really blame them. The truth machine, where you didn't need to focus much on validating the answers, changed rapidly into a slop machine where, ironically, your focus matters much more.

Very much this for the general public. I view it as borderline dangerous to anyone looking for confirmation bias.

hsuduebc2 a day ago | parent [-]

Yeah. Especially with the absolute garbage that is the Google AI summary, which is just slightly worse than their "AI mode". I've never seen anything hallucinate that much. It's made worse by the fact that it's included in every search and carries the Google "stamp of quality", which used to be the mark of a well-functioning product.

SecretDreams a day ago | parent | next [-]

It's funny because their thinking Gemini models with good prompting are solid, but the injected summaries they give can easily be terrible if the person doing the querying lacks a certain base knowledge about the query.

hsuduebc2 a day ago | parent | prev [-]

And the tiny text at the bottom, which only appears after clicking "show more" and states "AI responses may include mistakes", will certainly not fix that.

At the very least, the wording should be "makes mistakes" rather than vaguely stating that it may, in some cases, occasionally produce a mistake. A "mistake" can also be read as a wrongly placed link rather than completely made-up information.