yewenjie 3 hours ago
Interestingly, ChatGPT right now answered: > Bixonimania is not a real disease. It was deliberately invented by scientists as an experiment to test whether AI systems and researchers would spread false medical information. Here's the simple explanation ...
latexr 3 hours ago | parent
It's not that interesting; we know companies react to these things fast. That's why I don't share online my methods for showing how simple it is to expose LLM flaws. The problem is all the lies that won't be fessed up to. This one was revealed because they had to, to prove the point, but bad actors with ulterior motives won't reveal what they're doing.
rcxdude 3 hours ago | parent
The news articles on it are going to affect this. I wonder if the original paper is in the base models at all; almost certainly these results came from the article showing up in an internet search. Similarly, I wonder what a frontier model would say if given just the paper in isolation and asked to summarise or opine on it. I suspect it would recognize such obvious signs; the failure comes when less sophisticated LLMs just skim search results and summarise them.