dataviz1000 5 hours ago

LLMs can only predict the next token.

They can't predict the consequences of an action by predicting one token after another. They can't solve a Rubik's Cube, unlike a 7-year-old human who can learn to do it in a weekend. They can't imagine the perspective of being a human, unlike a 7-year-old who can be asked to imagine they were in the position of another human.

DoctorOetker 5 hours ago | parent [-]

Those are very strong claims. Do you really believe an LLM can't be trained to solve Rubik's Cubes?

Can you imagine what it feels like to be an LLM?

Can one LLM have a better sensation of what it feels like to be a different LLM (say, one that scores a little better)?

You design circularly defined criteria...

r_lee 4 hours ago | parent [-]

honestly I'm pretty sure opus could solve a rubik's cube if you just gave it the layout of the sides and looped until it solved it

or even just take a picture of the thing, since they can digest visual input now
