Grimblewald 8 hours ago

I find that people rarely have useful definitions of intelligence, and the ontological units clustered around the term vary significantly from person to person.

That said, LLMs have a single specific inductive bias: translation. Not just between languages, but between ontologies themselves. Whether it’s 'Idea -> Python' or 'Intent -> Prose,' the model is performing a cross-modal mapping of conceptual structures. That does require a form of intelligence, of reasoning, but in a format suited to a world so alien to our own that the two are mutually unintelligible, even if the act of charting ontologies is shared between them.

This is why I think we’re seeing diminishing returns: we’re trying to 'scale' our way into AGI using a map-maker/navigation system. It's like asking Google Maps to make you a grocery list instead of letting it do what it's built for, telling you where to find groceries. You can make a map so detailed it includes every atom, but the map will never have the agency to walk across the room. We're seeing asymptotic gains because each extra step toward 'behavioral' AGI is exponentially more expensive when you're faking reasoning through high-dimensional translation.