azrazalea_debt 7 days ago

A lot of current LLM work is basically emergent behavior. They use a really simple core algorithm and scale it up, and interesting things happen. You can read some of Anthropic's recent papers to see this. For example, they didn't expect LLMs could "look ahead" when writing poetry, but when they actually went in and watched what was happening (there are details on how this "watching" works on their blog and in their papers), they found the LLM really was planning ahead. That's emergent behavior: they didn't design it to do that, it just started doing it due to the complexity of the model.
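To make the "simple core algorithm" point concrete, here's a toy sketch (my own illustration, not Anthropic's code): the core objective is just next-token prediction, and generation is a loop that keeps sampling the next token. The bigram counter below is a deliberately tiny stand-in for a transformer; real LLMs run this same loop, just scaled up enormously.

    import random
    from collections import defaultdict, Counter

    # Toy stand-in for an LLM: learn which token follows which,
    # then generate by repeatedly sampling the next token.

    def train_bigram(corpus):
        """Count, for each token, which tokens tend to follow it."""
        tokens = corpus.split()
        counts = defaultdict(Counter)
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
        return counts

    def generate(counts, start, length=10):
        """Autoregressive sampling: the same loop LLMs use at inference."""
        out = [start]
        for _ in range(length):
            followers = counts.get(out[-1])
            if not followers:
                break  # no known continuation for this token
            toks, weights = zip(*followers.items())
            out.append(random.choices(toks, weights=weights)[0])
        return " ".join(out)

    model = train_bigram("the cat sat on the mat the cat ate the fish")
    print(generate(model, "the"))

Swap the bigram table for a transformer and the corpus for a large slice of the internet, and the loop stays the same; the planning behavior Anthropic describes emerges inside the model even though the objective remains this simple.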

If (BIG if) we ever do see actual AGI, it is likely to work like this. It's unlikely we'll make AGI by designing some grand cathedral of perfect software; it's more likely we'll find the right simple principles and scale them up enough for AGI to emerge. This is similar.

mrspuratic 6 days ago | parent | next [-]

On that topic, it seems backwards to me: intelligence is not emergent behaviour of language, rather the opposite.

danans 6 days ago | parent | prev [-]

Perception and interpretation can very much be influenced by language (the Sapir-Whorf hypothesis), so to the extent that perception and interpretation influence intelligence, it's not clear that the relationship runs in only one direction.
