docjay 10 hours ago

Pop culture has spent its entire existence conflating AGI and ‘Physical AI’, so much so that the collective realization that they’re entirely different is a relatively recent thing. Both of them were so far off in the future that the distinction wasn’t worth considering, until suddenly one of them is kinda maybe sorta roughly here now…ish.

Artificial General Intelligence says nothing about physical ability, but movies typically pair the ‘intelligence’ part with equally futuristic biomechanics to make the story more interesting. AGI = Skynet, Physical AI = Terminator. The latter will likely be the harder part, not only because it requires the former first, but because you can’t just throw more watts at a stepper motor and get a ballet dancer.

That said, I’m confident that if I could feed zero-noise, precise “human sensory” level sensor data to any of the top LLM models, and their output were equally coupled to a human arm with the same sensory feedback, it would outdo any current self-driving car implementation. The physical connection is the issue, and will be for a long time.
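Roughly the sense → model → act loop I mean, as a toy Python sketch. This is not a real implementation: the `policy` stub stands in for the LLM, and `SensorReading`, `step`, and `run_loop` are all invented names for illustration.

```python
# Toy sketch of a closed sense -> model -> act loop. The "model" here is a
# trivial proportional controller standing in for an LLM/policy; a real
# system would replace policy() with a model call. All names are invented.

from dataclasses import dataclass

@dataclass
class SensorReading:
    position: float  # where the arm currently is
    target: float    # where we want it to be

def policy(obs: SensorReading) -> float:
    """Stand-in for the 'intelligence': command a fraction of the error."""
    gain = 0.5
    return gain * (obs.target - obs.position)

def step(position: float, command: float) -> float:
    """Toy actuator: the arm moves exactly by the commanded amount."""
    return position + command

def run_loop(start: float, target: float, steps: int) -> float:
    """Repeatedly sense, decide, act; the feedback closes the loop."""
    pos = start
    for _ in range(steps):
        obs = SensorReading(position=pos, target=target)
        pos = step(pos, policy(obs))
    return pos

print(run_loop(0.0, 1.0, steps=10))  # converges toward 1.0
```

The point of the sketch is that the hard part isn’t the `policy` box; it’s making `SensorReading` and `step` behave like real sensors and real motors instead of clean floats.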

fc417fc802 10 hours ago

Agreed about the conflation. But that drives home that there isn't some historic commonly and widely accepted definition for AGI whose goal posts are being moved. What there was doesn't match the new developments and was also often quite flawed to begin with.

> LLM models, ... outdo any current self-driving car

How would an LLM handle computer vision? Are you implicitly including a second embedding model there? But I think that's still the wrong sort of vision data for precise control, at least in general.

How do you propose to handle the model hallucinating? What about losing its train of thought?

docjay an hour ago

True that there isn’t a firm definition for AGI, but that’s the fault of the “I”. We don’t have an objective definition of intelligence, and so we don’t have a means of measuring it either. I mean, odds are you’re the least intelligent paleoethnobotanist and cetacean bioacoustician I’ve ever met, but perhaps the most intelligent something_else. How do we measure that? How do we define it?

I was unclear in my previous message. Right now an LLM would be terrible at driving a car, but I was saying that has more to do with the physical interface (cameras, sensors, etc.) than with the ability of the LLM itself. The ‘intelligence’ part is better than the PyTorch image recognition attached to a servo that they’re using now; how to attach that ‘intelligence’ to the physical world is the 50-year task. (To be clear: LLMs aren’t intelligent, smart, or anything of the sort, and never will be. But they can sure replicate the effect better than current self-driving tech.)