sergiomattei 3 hours ago

Papers like these are a much-needed bucket of ice water. We anthropomorphize these systems too much.

Skimming the conclusions and results, I see the authors find that LLMs exhibit failures across many axes we'd consider demonstrative of AGI: moral reasoning, simple things like counting that a toddler can do, etc. They're just not human, and you can reasonably hypothesize that most of these failures stem from their nature as next-token predictors that happen to usually do what you want.

So. If you've got OpenClaw running and think you've got Jarvis from Iron Man, this is probably a good read to ground yourself.

Note that the authors also maintain a GitHub repo compiling these failures: https://github.com/Peiyang-Song/Awesome-LLM-Reasoning-Failur...

vagrantstreet 2 hours ago | parent | next [-]

Isn't it strange that we expect them to act like humans even though a model remains static after it's trained? How is that supposed to be even close to "human-like" anyway?

mettamage 2 hours ago | parent | next [-]

> Isn't it strange that we expect them to act like humans even though a model remains static after it's trained?

Interacting with an LLM is more akin to interacting with a quirky human who has anterograde amnesia: it can't form new long-term memories; it can only follow you within a long-ish conversation.

LiamPowell 2 hours ago | parent | prev [-]

If we could reset a human to a prior state after a conversation, then would conversations with them not still be "human-like"?

I'm not arguing that LLMs are human here, just that your reasoning doesn't make sense.

hackinthebochs an hour ago | parent [-]

Henry Molaison was exactly this: after his surgery he could hold a conversation but couldn't form new long-term memories.

otabdeveloper4 37 minutes ago | parent | prev | next [-]

> We anthropomorphize these systems too much.

They're sold as AGI by the cloud providers, and the whole stock market scam will collapse if normies are allowed to peek behind the curtain.

lostmsu an hour ago | parent | prev [-]

https://en.wikipedia.org/wiki/List_of_cognitive_biases

Specifically, the idea that LLMs fail to solve some tasks because of fundamental limitations, when humans also fail at those tasks periodically, may well be an instance of the fundamental attribution error.