fc417fc802, 10 hours ago:
Agreed about the conflation. But that drives home that there isn't some historic, commonly and widely accepted definition of AGI whose goalposts are being moved. What there was doesn't match the new developments, and it was also often quite flawed to begin with.

> LLM models, ... outdo any current self-driving car

How would an LLM handle computer vision? Are you implicitly including a second embedding model there? Even then, I think that's still the wrong sort of vision data for precise control, at least in general. How do you propose to handle the model hallucinating? What about it losing its train of thought?
docjay, an hour ago (in reply):
True that there isn't a firm definition of AGI, but that's the fault of the "I". We don't have an objective definition of intelligence, so we don't have a means of measuring it either. I mean, odds are you're the least intelligent paleoethnobotanist and cetacean bioacoustician I've ever met, but perhaps the most intelligent something_else. How do we measure that? How do we define it?

I was unclear in my previous message. Right now an LLM would be terrible at driving a car, but I was saying that has more to do with the physical interface (cameras, sensors, etc.) than with the ability of an LLM. The "intelligence" part is better than the PyTorch image recognition attached to a servo they're using now; how to attach that "intelligence" to the physical world is the 50-year task. (To be clear: LLMs aren't intelligent, smart, or anything of the sort, and never will be. But they can sure replicate the effect better than current self-driving tech.)
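A minimal sketch of the kind of setup the two comments are debating: a "second embedding model" (a vision encoder) whose output is projected into an LLM's hidden space, with a head that emits a discrete control command. Everything here is hypothetical and made up for illustration: the class name, the tiny stand-in encoder and transformer (real systems would use pretrained models), the dimensions, and the five-way control set are all assumptions, not anyone's actual stack.

```python
import torch
import torch.nn as nn

class VisionToControl(nn.Module):
    """Hypothetical vision-encoder -> LLM-backbone -> control-head pipeline."""

    def __init__(self, img_dim=512, llm_dim=768, n_controls=5):
        super().__init__()
        # Stand-in for a pretrained image encoder (CLIP-like); frozen in practice.
        self.vision_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, img_dim))
        # Projection from the vision embedding space into the LLM's hidden space.
        self.projector = nn.Linear(img_dim, llm_dim)
        # Stand-in for the LLM backbone (would be a pretrained transformer).
        self.llm_backbone = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=llm_dim, nhead=8, batch_first=True),
            num_layers=2,
        )
        # Control head over a made-up discrete action set:
        # steer-left / steer-right / accelerate / brake / no-op.
        self.control_head = nn.Linear(llm_dim, n_controls)

    def forward(self, frames):
        # frames: (batch, 3, 64, 64) camera input
        vision_tokens = self.projector(self.vision_encoder(frames)).unsqueeze(1)
        hidden = self.llm_backbone(vision_tokens)
        return self.control_head(hidden[:, -1])  # logits over discrete controls

frames = torch.randn(1, 3, 64, 64)          # fake camera frame
logits = VisionToControl()(frames)
print(logits.argmax(dim=-1))                 # chosen control index
```

Even granting this architecture, the earlier objections still apply: embeddings from a contrastive image encoder are coarse scene descriptions, not the precise, low-latency geometric signal a control loop needs, and nothing in the pipeline prevents the language model from confidently emitting a wrong action.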