kbrkbr 4 hours ago

An LLM generates plausible text token by token. At its core it is a deterministic function, layered with randomized sampling and some clever tricks that make it look like an agent dialoguing or reasoning.
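The "deterministic function plus randomization" claim can be sketched in a few lines. The scoring step (logits) is deterministic; the only randomness is the sampling on top. The tiny vocabulary and scores below are hypothetical, purely for illustration:

```python
import math
import random

# Hypothetical logits a model might assign to candidate next tokens
# after "The cat sat on the". Scoring is deterministic; only the
# sampling step below introduces randomness.
logits = {"mat": 2.0, "moon": 0.5, "carburetor": -1.0}

def sample_next(logits, temperature=1.0, rng=random):
    # Softmax turns raw scores into a probability distribution.
    scaled = {t: v / temperature for t, v in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    probs = {t: math.exp(v) / z for t, v in scaled.items()}
    # Sample: the most plausible token is likeliest, not guaranteed.
    return rng.choices(list(probs), weights=list(probs.values()))[0]

print(sample_next(logits))
```

With temperature near zero this collapses toward greedy decoding (always the top-scoring token); higher temperatures make the less plausible continuations more likely.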

Plausible text is sometimes right, sometimes not.

Humans have a world model, a model of what happens. LLMs have a model of what humans would plausibly say.

The only good guardrail seems to be a human in the loop.

armada651 3 hours ago | parent [-]

This is such a motte-and-bailey argument. Whenever people point out that LLMs aren't actually intelligent, they're dismissed as anti-AI Luddites. But whenever an AI does something catastrophically dumb, it's absolved of all responsibility because "it's just predicting the next token".

I'm getting so tired of this.

kbrkbr 2 hours ago | parent [-]

I think they are not actually intelligent. As a first approximation: fix all random seeds and other sources of randomness, run the same prompt twice, and see how intelligent that looks.
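The point above can be demonstrated with a stand-in "model": once every source of randomness is pinned to a seed, the same prompt produces bit-identical output on every run. The function and its canned continuations are hypothetical, just to make the determinism visible:

```python
import random

# Stand-in for a sampled decoder: a seeded choice among canned
# continuations (hypothetical, for illustration only).
def generate(prompt, seed):
    rng = random.Random(seed)  # pin the only source of randomness
    continuations = [" is right", " is wrong", " is unclear"]
    return prompt + rng.choice(continuations)

a = generate("The answer", seed=42)
b = generate("The answer", seed=42)
assert a == b  # identical output, run after run
```

Real inference stacks have more randomness sources than one seed (sampling, batching, nondeterministic GPU kernels), but the principle is the same: pin them all and the mapping from prompt to text is a fixed function.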

On a more technical level, very serious people have voiced doubts, for example Richard Sutton in an interview with Dwarkesh Patel [1].

[1] https://m.youtube.com/watch?v=21EYKqUsPfg&pp=ygUnZmF0aGVyIG9...