ctoth 12 hours ago
I keep seeing this problem more and more with humans. What should we call it? Maybe Hallucinations? Where there is an accurate true thing and then it just gets altered by these guys who call themselves journalists and reporters and the like until it is just ... completely unrecognizable? I'm pretty sure it's a fundamental issue with the architecture.
sethhochberg 12 hours ago
I know this is written to be tongue-in-cheek, but it's really almost the exact same problem playing out on both sides.

LLMs hallucinate because training on source material is a lossy process, and bigger, heavier LLM-integrated systems that can research and cite primary sources are slow and expensive, so few people use those techniques by default. Lowest time to a good-enough response is the primary metric.

Journalists oversimplify and fail to ask follow-up questions because while they can research and cite primary sources, doing so is slow and expensive in a vanishingly short news cycle, so nobody does that by default. Whoever publishes something that someone will click on first gets the ad impressions, so that's the primary metric.

In either case, we've got pretty decent tools and techniques for better accuracy and education - whether via humans or LLMs and co. - but most people, most of the time, don't value them.
| ||||||||||||||||||||||||||||||||||||||||||||||||||
observationist 11 hours ago
These writers are no different from bloggers or shitposters on Bluesky or here on Hacker News. "Journalism" as a rigorous, principled approach to writing, research, investigation, and ethical publishing is exceedingly rare. These people are shitposting for clicks in pursuit of a paycheck.

Organizationally, they're intensely against AI because AI effectively replaces the entire talking-heads class - AI is already superhuman at the shitposting-level takes these people churn out. There are still a few journalistic institutions out there, but most of this content is no better than a mad-libs exercise, and its producers are in direct competition with ChatGPT and Grok and the rest.

I'd rather argue with a bot and do searches and research and investigation than read a neatly packaged, trite little article about nearly any subject, and I guarantee, hallucinations or no, I'm going to come to a better understanding and closer approximation of reality than any content a so-called "news" outlet is putting together. It's trivial to get a thorough spectrum of reliable sources using AI with web search tooling, and over the course of a principled conversation you can find out exactly what you want to know.

This isn't just bashing - the article itself isn't too bad - but the bulk of this site's coverage of AI topics skews negative, as do the many, many platforms and outlets owned by Bell Media: negative on AI in general, with positive reinforcement of regulatory-capture-related topics. Which only makes sense - they're making money, they want to continue making money, and AI threatens that. They can no longer claim they provide value if they're not providing direct, relevant, novel content rather than zergnet clickbait journo-slop.

Just like Carlin said, there doesn't have to be a conspiracy with a bunch of villains in a smoky room plotting evil; there's just a bunch of people in a club who know what's good for them. Legacy media outlets are therefore universally incentivized to make AI look as bad, flawed, and useless as possible, right up until they get what they consider to be their "fair share", as middlemen.
pksebben 11 hours ago
Whenever I hear arguments about LLM hallucination, this is my first thought. Like, I already can't trust the lion's share of information in news, social media, (insert human-created content here) - sometimes because of abject disinformation, frequently just because humans are experts at being wrong.

At least with the LLM (for now) I know it's not trying to sell me bunkum or convince me to vote a particular way. Mostly. I do expect this state of affairs to last at least until next Wednesday.
| ||||||||||||||||||||||||||||||||||||||||||||||||||
terminalshort 12 hours ago
Also these guys who call themselves doctors. I have narcolepsy, and the first 10 or so doctors I went to hallucinated the wrong diagnosis.
| ||||||||||||||||||||||||||||||||||||||||||||||||||
sans_souse 11 hours ago
"Telephone", basically | ||||||||||||||||||||||||||||||||||||||||||||||||||
awakeasleep 12 hours ago
issue with the funding mechanism
busymom0 10 hours ago
Isn't every single response from an LLM a hallucination, and we just accept a few and ignore the others?