A_D_E_P_T 4 hours ago
Any serious LLM user will tell you that there's no way to get from LLMs to AGI. These models are vast and, in many ways, clearly superhuman. But they can't venture outside their training data, not even if you hold their hand and guide them. Try getting Suno to write a song in a new genre. Even if you tell it EXACTLY what you want, and provide it with clear examples, it won't be able to do it. This is also why there have been zero-to-very-few new scientific discoveries made by LLMs.
pixl97 3 hours ago
Can most people venture outside their training data?
| |||||||||||||||||||||||||||||||||||
uejfiweun 3 hours ago
I mean yeah, but that's why there are far more research avenues these days than just pure LLMs, world models for instance. The thinking is that if LLMs can achieve near-human performance in the language domain, then we must be very close to human-level performance in the "general" domain; that's the main thesis of the current AI financial bubble (see articles like AI 2027). And if that's the case, you still want as much compute as possible, both to accelerate research and to get greater performance out of other architectures that benefit from scaling.
| |||||||||||||||||||||||||||||||||||