dataviz1000 5 hours ago
LLMs can only predict the next token. They can't predict the consequences of an action by predicting one token after another. They can't solve a Rubik's Cube, unlike a 7-year-old human, who can learn to do it in a weekend. They can't imagine the perspective of a human being, unlike a 7-year-old, who, if asked, can imagine being in another person's position.
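(For readers unfamiliar with the mechanism being debated, here is a minimal sketch of the autoregressive loop the comment refers to: the model only ever scores "which token comes next", and longer outputs are produced by repeating that single step. `toy_next_token_logits` is a hypothetical stand-in for a real LLM forward pass, not any particular library's API.)

```python
import random

VOCAB = ["the", "cube", "is", "solved", "not", "<eos>"]

def toy_next_token_logits(context):
    # Hypothetical stand-in: a real LLM would return logits over its vocabulary
    # conditioned on the whole context; here we just return pseudo-random scores.
    random.seed(len(context))
    return [random.random() for _ in VOCAB]

def generate(prompt, max_new_tokens=8):
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        logits = toy_next_token_logits(tokens)         # one forward pass
        next_token = VOCAB[logits.index(max(logits))]  # greedy pick of the next token
        if next_token == "<eos>":
            break
        tokens.append(next_token)                      # feed it back in and repeat
    return tokens

print(generate(["the", "cube"]))
```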
DoctorOetker 5 hours ago
Those are very strong claims. Do you really believe an LLM can't be trained to solve Rubik's Cubes? Can you imagine what it feels like to be an LLM? Can one LLM have a better sense of what it feels like to be a different LLM (say, one that scores a little better)? You're designing circularly defined criteria...
| |||||||||||||||||||||||