kenjackson 3 days ago

"vision had decades of head start, yet LLMs leapfrogged it in just a few years."

From an evolutionary perspective, though, vision had a head start of millions of years over written language. Additionally, almost all animals have quite good vision mechanisms, but very few do any written communication. Behaviors that map to intelligence don't emerge concurrently. It may well be that different forms of signals, sensors, and mechanical skills contribute to the emergence of different kinds of intelligence.

It really feels more and more like we should recast AGI as Artificial Human Intelligence Likeness (AHIL).

adamzwasserman 3 days ago | parent | next [-]

From a terminology point of view, I absolutely agree. Human-likeness is what most people mean when they talk about AGI. Calling it what it is would clarify a lot of the discussions around it.

However, I am clear that I do not believe this will ever happen, and I see no evidence to convince me that there is even a possibility that it will.

I think that Wittgenstein had it right when he said: "If a lion could speak, we could not understand him."

andoando 3 days ago | parent [-]

>I think that Wittgenstein had it right when he said: "If a lion could speak, we could not understand him."

Why would we not? We live in the same physical world and encounter the same problems.

adamzwasserman 3 days ago | parent | next [-]

You're actually proving Wittgenstein's point. We share the same physical world, but we don't encounter the same problems. A lion's concerns - territory, hunting, pride hierarchy - are fundamentally different from ours: mortgages, meaning, relationships.

And here's the kicker: you don't even fully understand me, and I'm human. What makes you think you'd understand a lion?

beeflet 3 days ago | parent | next [-]

Humans also have territory, hunting, and hierarchy. Everything that a lion does, humans also do, only in more complicated ways. So I think we would be able to understand the new creature.

But the problem is really that the lion that speaks is not the same creature as the lion we know. Everything the lion we know wants to say can already be said through its body language or current faculties. The goldfish grows to the size of its container.

adamzwasserman 3 days ago | parent [-]

You've completely missed Wittgenstein's point. It's not about whether lions and humans share some behaviors - it's about whether they share the form of life that grounds linguistic meaning.

zeroonetwothree 3 days ago | parent [-]

I think humans would be intelligent enough to understand the lion's linguistic meaning (after some training), probably not the other way around. But it's a speculative argument; there's no real evidence one way or the other.

andoando 3 days ago | parent | prev [-]

That's only a minor subset of our thoughts. If you were going hiking, what kind of thoughts would you have? "There are trees there", "It's raining, I should take cover", "I can hide in the bushes", "I'm not sure if I can climb over this or not", "There is x on the left and y on the right", "the wind died down", etc.

The origins of human language were no doubt in communicating such simple thoughts, not in your deep inner psyche or the complexities of the 21st century.

There's actually quite a bit of evidence that all language, even complex vocabulary, is rooted in spatial relationships.

adamzwasserman 2 days ago | parent [-]

You're describing perception, not the lived experience that gives those perceptions meaning. Yes, a lion sees trees and rain. But a lion doesn't have 'hiking', it has territory patrol. It doesn't 'hide in bushes', it stalks prey. These aren't just different words for the same thing; they're fundamentally different frameworks for interpreting raw sensory data. That's Wittgenstein's point about form of life.

andoando 2 days ago | parent [-]

Why do you assume they're fundamentally different frameworks? Just because Wittgenstein said it?

goatlover 3 days ago | parent | prev [-]

We haven't been able to decode what whales and dolphins are communicating. Are they using language? A problem SETI faces is whether we would be able to decode an alien signal. They may be too different in their biology, culture and technology. The book & movie Contact propose that math is a universal language. This assumes they're motivated to use the same basic mathematical structures we do. Maybe they don't care about prime numbers.

Solaris by Stanislaw Lem explores an alien ocean so different that humans utterly fail to communicate with it. The ocean responds by creating humans from memories found in brain scans broadcast over it, but it's never understood why the ocean did this. The recreated humans don't know either.

adamzwasserman 3 days ago | parent [-]

The whole "math is a universal" language is particularly laughable to me considering it is a formal system and the universe is observably irregular.

As I am wont to say: regularity is only ever achieved at the price of generality.

zeroonetwothree 3 days ago | parent | next [-]

Many mathematical structures are 'irregular'. That's not a very strong argument against math as a universal descriptor.

adamzwasserman 2 days ago | parent [-]

see reply above

andoando 2 days ago | parent | prev [-]

Think about what math is trying to formalize

adamzwasserman 2 days ago | parent [-]

Math formalizes regularities by abstracting away irregularities - that's precisely my point. Any formal system achieves its regularity by limiting its scope. Math can describe aspects of reality with precision, but it cannot capture reality's full complexity. A 'universal language' that can only express what fits into formal systems isn't universal at all: it's a specialized tool that works within constrained domains.

Retric 3 days ago | parent | prev [-]

These are all really arbitrary metrics across such wildly different fields. IMO LLMs are where computer vision was 20+ years ago in terms of real-world accuracy. Other people feel LLMs offer far more value to the economy, etc.

adamzwasserman 3 days ago | parent [-]

I understand the temptation to compare LLMs and computer vision, but I think it’s misleading to equate generative AI with feature-identification or descriptive AI systems like those in early computer vision. LLMs, which focus on generating human-like text and reasoning across diverse contexts, operate in a fundamentally different domain than descriptive AI, which primarily extracts patterns or features from data, like early vision systems did for images.

Comparing their 'real-world accuracy' oversimplifies their distinct goals and applications. While LLMs drive economic value through versatility in language tasks, their maturity shouldn’t be measured against the same metrics as descriptive systems from decades ago.

Retric 3 days ago | parent [-]

I don't think it's an oversimplification, as accuracy is what constrains LLMs across so many domains. If you're a wealthy person, asking ChatGPT to write a prenup or other contract you actually intend to use would be an act of stupidity unless you vetted it with an actual lawyer. My most desired use case is closer, but LLMs are still more than an order of magnitude below the accuracy I'm willing to tolerate.

IMO that's what maturity means in AI systems. Self-driving cars aren't limited by the underlying mechanical complexity; it's all about the long quest for a system that makes reasonably correct decisions hundreds of times a second, for years, across widely varying regions and weather conditions. Individual cruise missiles, on the other hand, only needed to operate across a single short, pre-mapped flight in specific conditions, which is why they could use visual navigation decades earlier.

adamzwasserman 2 days ago | parent [-]

You're conflating two different questions. I'm not arguing LLMs are mature or reliable enough for high-stakes tasks. My argument is about why they produce output that creates the illusion of understanding in the language domain, while the same techniques applied to other domains (video generation, molecular modeling, etc.) don't produce anything resembling 'understanding' despite comparable or greater effort.

The accuracy problems you're describing actually support my point: LLMs navigate linguistic structures effectively enough to fool people into thinking they understand, but they can't verify their outputs against reality. That's exactly what you'd expect from a system that only has access to the map (language) and not the territory (reality).

Retric 2 days ago | parent [-]

I'm not saying these tasks are high stakes so much as that they inherently require high levels of accuracy. Programmers can improve code, so the accuracy threshold for utility is way lower when someone is testing before deployment. That difference exists based on how you're trying to use it, independent of how critical the code actually is.

The degree to which LLMs successfully fake understanding depends heavily on how much accuracy you’re looking for. I’ve judged their output as gibberish on a task someone else felt it did quite well. If anything they make it clear how many people just operate on vague associations without any actual understanding of what’s going on.

In terms of map vs territory, LLMs get trained on a host of conflicting information, but they don't synthesize that into uncertainty. Ask one what the average distance between the Earth and the Moon is and you'll get a single number, because the form of the response in the training data is always a number. Look at several websites and you'll see a bunch of different numbers, literally thousands of kilometers apart, which seems odd given that we know the actual distance at any moment to well within an inch. Anyway, the inherent method of training is simply incapable of that kind of analysis.

  The average lunar distance is approximately 385,000 km https://en.wikipedia.org/wiki/Lunar_distance
  The average distance between the Earth and the Moon is 384 400 km (238 855 miles). https://www.rmg.co.uk/stories/space-astronomy/how-far-away-moon
  The Moon is approximately 384,000 km (238,600 miles) away from Earth, on average. https://www.britannica.com/science/How-Far-Is-the-Moon-From-Earth
  The Moon is an average of 238,855 miles (384,400 km) away. https://spaceplace.nasa.gov/moon-distance/en/
  The average distance to the Moon is 382,500 km https://nasaeclips.arc.nasa.gov/shared_assets/resources/distance-to-the-moon/438170main_GLDistancetotheMoon.pdf
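
To make that spread concrete, here is a minimal Python sketch that tallies the five averages quoted above (the dictionary keys are just shorthand labels for the cited sites, not anything from the sources themselves):

  # Published "average Earth-Moon distance" figures quoted above, in km.
  # Keys are shorthand labels for the sources cited in this thread.
  quoted_km = {
      "wikipedia.org": 385_000,
      "rmg.co.uk": 384_400,
      "britannica.com": 384_000,
      "spaceplace.nasa.gov": 384_400,
      "nasaeclips.arc.nasa.gov": 382_500,
  }

  values = quoted_km.values()
  spread_km = max(values) - min(values)

  print(f"min {min(values):,} km, max {max(values):,} km")
  print(f"spread across sources: {spread_km:,} km (~{spread_km / 1.609:,.0f} miles)")
  # -> spread across sources: 2,500 km (~1,554 miles)
  # Lunar laser ranging pins the instantaneous distance to roughly an inch,
  # so the disagreement lives in the quoted averages, not the measurement.

None of the sites is exactly "wrong" (the orbit is elliptical, so "average" is ambiguous), which is precisely the kind of conflict an LLM trained on all five will flatten into one confident number rather than surface as uncertainty.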