missedthecue 5 days ago

"AI can hallucinate on any data you feed it, and it's been proven that it doesn't summarize, but rather abridges and abbreviates data."

Have you ever met a human? I think one of the biggest reasons people become bearish on AI is that their measure of whether it's good/useful is that it needs to be absolutely perfect, rather than simply superior to human effort.

autoexec 5 days ago | parent | next [-]

> one of the biggest reasons people become bearish on AI is that their measure of whether it's good/useful is that it needs to be absolutely perfect, rather than simply superior to human effort.

Meanwhile, people bullish on AI don't care if it's perfect, or even if it's vastly inferior to human effort; they just want it to be less expensive/troublesome and easier to control than a human would be. Plenty of people would be fine knowing that AI fucks up regularly and ruins other people's lives in the process, as long as in the end their profits go up or they can still get what they want out of it.

bdbdbdb 5 days ago | parent | prev | next [-]

I'm not saying it needs to be perfect, but the guy in this article is putting a lot of blind faith in an algorithm that's proven time and time again to make things up.

The reason I have become "bearish" on AI is that I see people repeatedly falling into the trap of believing LLMs are intelligent and actively thinking, rather than just very, very finely tuned random noise. We should pay more attention to the A in AI.

arevno 5 days ago | parent [-]

> putting a lot of blind faith in an algorithm that's proven time and time again to make things up

Don't be ridiculous. Our entire system of criminal justice relies HEAVILY on the eyewitness testimony of humans, which has been demonstrated time and again to be entirely unreliable. Innocents routinely rot in prison and criminals routinely go free because the human brain is much better at hallucinating than any SOTA LLM.

I can think of no institution where fidelity of information ought to matter more than criminal justice, and yet we accept extreme levels of hallucination even there.

This argument is tired, played out, and laughable on its face. Human honesty and memory reliability are a disgrace, and if you wish to score points against LLMs, comparing their hallucination rates to those of humans is likely to lead to exactly the opposite conclusion from the one you intend others to draw.

1659447091 5 days ago | parent | next [-]

> the human brain is much better at hallucinating than any SOTA LLM

Aren't the models trained on human content, with human intervention? If humans hallucinated that content, and the LLMs then hallucinate even slightly on top of that fallible human content, wouldn't that make the LLMs' hallucinations still, if only slightly, greater than humans'? Or am I missing something here, where LLMs somehow correct the original human hallucinations and thus produce less hallucinated content?

bdbdbdb 3 days ago | parent | prev [-]

It's ridiculous and laughable to say LLMs hallucinate, because the justice system isn't flawless?

That's a cognitive leap.

bigstrat2003 5 days ago | parent | prev [-]

Right now AI is inferior, not superior, to human effort. That's precisely why people are bearish on it.

missedthecue 5 days ago | parent [-]

I don't think that's obvious. In 20 minutes, for example, deep research can produce a report on a given topic that's much better than what an analyst can turn out in a day or two. It's literally cheaper, better, and faster than human effort.

D-Machine 5 days ago | parent | next [-]

Faster? Yes. Cheaper? Probably, but you need to amortize in all the infrastructure, training, and energy costs. Better? Lol no.

arevno 5 days ago | parent [-]

> but you need to amortize in all the infrastructure, training, and energy costs

The average American human consumes 232 kWh of all-in energy (food, transport, HVAC, construction, services, etc.) daily.

If humans want to get into a competition over lower energy input per unit of cognitive output, I doubt you'd like the result.
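
A crude back-of-envelope in Python, to make that concrete (the 232 kWh/day figure is the one above; the ~0.3 Wh per query inference cost and the 8-hour working day are assumed ballparks, not measured numbers):

    # Rough sketch: human all-in energy vs. LLM inference energy per query.
    # All inputs are assumptions/ballparks, not measurements.
    human_kwh_per_day = 232        # per-capita US all-in energy, from above
    working_hours_per_day = 8      # crude: charge a full day's energy to 8 work hours
    llm_wh_per_query = 0.3         # assumed ballpark for one chatbot query

    human_wh_per_work_hour = human_kwh_per_day * 1000 / working_hours_per_day
    queries_per_work_hour = human_wh_per_work_hour / llm_wh_per_query

    print(f"{human_wh_per_work_hour:,.0f} Wh per human working hour")       # 29,000
    print(f"~{queries_per_work_hour:,.0f} LLM queries for the same energy")  # ~96,667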

> Better? Lol no

The "IQ equivalent" of the current SOTA models (Opus 4.5, Gemini 3 Pro, GPT 5.2, Grok 4.1) is already a full 1SD above the human mean.

Nations and civilizations have perished or been conquered all throughout history because they underestimated and laughed off the relative strength of their rivals. By all means, keep doing this, but know the risks.

jrflowers 5 days ago | parent | prev [-]

What do you mean by “better” in this context?

missedthecue 5 days ago | parent | next [-]

It synthesizes a more comprehensive report, using more sources, more varied sources, more data, and broader insights than a human analyst can produce in 1-2 days of research and writing.

I'm not confused about this. If you don't agree, I will assume it's probably because you've never employed a human to do similar work in the past. Because it's not particularly close. It's night and day.

Note that I'm not saying 20 minutes of deep research beats 9 months of investigative journalism with private interviews with primary sources or anything like that. I'm talking about asking an analyst on your team to do a deep dive into XYZ and have something on your desk tomorrow EOD.

freejazz 4 days ago | parent | next [-]

Weird, I'm an attorney and no one is getting rid of associates in order to have LLMs do the research, much less when they actually hallucinate sources (something associates won't do). I can't imagine that being significantly different in other domains.

jrflowers 4 days ago | parent [-]

> I can't imagine that being significantly different in other domains.

It’s not. There is no industry where AI performs “better” than humans reliably without torturing the meaning of the word (for example, OP says AI is better at analysis iff the act of analysis does not include any form of communication to find or clarify information from primary sources)

jrflowers 5 days ago | parent | prev [-]

> It synthesizes a more comprehensive report, using more sources, more varied sources, more data, and broader insights than a human analyst can produce in 1-2 days of research and writing.

> Note that I'm not saying 20 minutes of deep research beats 9 months of investigative journalism with private interviews with primary sources or anything like that.

I like the idea that AI is objectively better at doing analysis if you simply assume that it takes a person nine months to make a phone call.

fcantournet 5 days ago | parent | prev [-]

It has more words put together in seemingly correct sentences, and it's long enough that his boss won't actually read it to proof it.