Jensson 4 days ago
> but it probably also means that humans don’t have AGI since it won’t be hard to find humans that can’t.

Humans can learn to fix that. Learning is part of intelligence. The biggest misconception is thinking that a human’s intelligence is based on what they can do today rather than what they can learn to do in 10 years. And since the AI model has already been trained to completion by the time you use it, it should be able to do whatever any human can learn to do, or it should be able to learn it. With this definition, AGI is not that complicated at all.
jaredklewis 4 days ago | parent
That’s not what I was getting at. Did Stephen Hawking have general intelligence? He was physically incapable of plumbing. There might be other limitations as well, but clearly the first hurdle between LLM agents and plumbing is any sort of interaction with the physical world. So a debate about AGI just becomes a debate about whether it includes interaction with the physical world and a billion other things. Anyone can redraw the semantic line wherever it suits them.