jaredklewis 5 days ago
So I’m fine with being able to fix some specified plumbing issue as the AGI test, but it probably also means that humans don’t have AGI, since it won’t be hard to find humans who can’t. But it doesn’t matter, because that’s not the issue. The issue is that unless we all agree on that definition, debates about AGI are just semantic equivocating. We all have our own idiolects of what AGI means to us, but who cares? What everyone can agree on is that LLM agents cannot do plumbing now. This is observable and tells us something interesting about the capabilities of LLM agents.
Jensson 4 days ago | parent
> but it probably also means that humans don’t have AGI since it won’t be hard to find humans that can’t.

Humans can learn to fix it. Learning is part of intelligence. The biggest misconception is thinking that a human’s intelligence is based on what they can do today, not on what they can learn to do in ten years. And since the AI model has already been trained to completion by the time you use it, it should either already be able to do whatever any human can learn to do, or it should be able to learn it. With this definition, AGI is not that complicated at all.