Oras 4 hours ago

This would work on people too: you can see fake info/text/videos daily, and many people believe them.

LLMs do not think. Why is this still hard to understand? They just spit out whatever data they were trained on.

I feel these kinds of articles are aimed at people who hate AI and just want to be comfortable within their own bias.

simmerup 4 hours ago | parent [-]

The papers the scientist submitted contained a fake university, explicitly fake people, references to The Simpsons and Star Trek, etc.

Most doctors would not believe that, and would also treat any new eye disease they'd never seen in real life with scepticism.

kenjackson 3 hours ago | parent | next [-]

LLMs will need to develop a notion of trustworthiness. It's interesting that part of the process of learning isn't just learning, but also learning what to learn and how much value to put on data that crosses your path.

simmerup 2 hours ago | parent [-]

To me, the problem is the blast radius.

All of us are slightly wrong about things, but not all of us are treated as oracles of correct information the way Opus, ChatGPT, etc. are.

Oras an hour ago | parent [-]

You're confusing LLMs with humans.

hoppyhoppy2 3 hours ago | parent | prev [-]

Journals? The article says the paper was uploaded to two preprint servers.

simmerup 3 hours ago | parent [-]

Sorry, even worse, then.

I got confused because a journal referenced them:

> The experiment’s reach has now spread into the published medical literature. The bixonimania research has been cited by a handful of researchers, including a study that appeared in Cureus, a journal published by Springer Nature, the publisher of Nature, by researchers at the Maharishi Markandeshwar Institute of Medical Sciences and Research in Mullana, India (S. Banchhor et al. Cureus 16, e74625 (2024); retraction 18, r223 (2026)). (Nature’s news team is editorially independent of its publisher.)