ajross a day ago

> Knowing which parts-of-speech about sunrises appear together and where is not the same as understanding a sunrise

What does "understanding a sunrise" mean though? Arguments like this end up resting on semantics or tautology, 100% of the time. Arguments of the form "what AI is really doing" likewise fail because we don't know what real brains are "really" doing either.

I mean, if we knew how to model human language/reasoning/whatever we'd just do that. We don't, and we can't. The AI boosters are betting that whatever it is (that we don't understand!) is an emergent property of enough compute power and that all we need to do is keep cranking the data center construction engine. The AI pessimists, you among them, are mostly just arguing from Luddism: "this can't possibly work because I don't understand how it can".

Who the hell knows, basically. We're at an interesting moment where technology and the theory behind it are hitting the wall at the same time. That's really rare[1]; generally you know how something works, and applying it is just a question of figuring out how to build a machine.

[1] Another example might be some of the chemistry fumbling going on at the start of the industrial revolution. We knew how to smelt and cast metals at crazy scales well before we knew what was actually happening. Stuff like that.

subjectivationx 16 hours ago | parent | next [-]

Everyone reading this understands the meaning of a sunrise. It is a wonderful example of the use theory of meaning.

If you raised a baby inside a windowless solitary confinement cell for 20 years and then one day showed them a sunrise on a video monitor, they still wouldn't understand the meaning of a sunrise.

Trying to have a machine extract the meaning of a sunrise from the syntax of a sunrise data corpus is just totally absurd.

You could extract some statistical regularity from the pixel data of the sunrise video monitor or sunrise data corpus. That model may provide some useful results that can then be used in the lived world.
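For instance (a toy sketch, with a hypothetical `frames` array standing in for the monitor's video feed), such a model could flag "sunrise" as the moment mean pixel brightness crosses a threshold:

    # Toy sketch: "detect" a sunrise purely from pixel statistics.
    # `frames` is a hypothetical (T, H, W) array of grayscale frames.
    import numpy as np

    def sunrise_frame(frames, threshold=0.5):
        # Mean brightness per frame, normalized assuming 8-bit pixels
        brightness = frames.reshape(len(frames), -1).mean(axis=1) / 255.0
        # Index of the first frame that crosses the threshold
        idx = int(np.argmax(brightness > threshold))
        return idx if brightness[idx] > threshold else None

    # Synthetic ramp from dark to bright over 10 frames
    frames = np.linspace(0, 255, 10)[:, None, None] * np.ones((10, 4, 4))
    print(sunrise_frame(frames))  # 5

Useful, perhaps; but it is only pixel statistics.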

Pretending the model understands a sunrise though is just nonsense.

Presenting the statistical model's usefulness in the lived world as proof that the model understands a sunrise borders, I would say, on intellectual fraud, considering that a human doing the same thing wouldn't understand a sunrise either.

ajross 15 hours ago | parent [-]

> Everyone reading this understands the meaning of a sunrise

For a definition of "understands" that resists rigor and repeatability, sure. This is what I meant by reducing it to a semantic argument. You're just saying that AI is impossible. That doesn't constitute evidence for your position. Your opponents in the argument who feel AGI is imminent are likewise just handwaving.

To wit: none of you people have any idea what you're talking about. No one does. So take off the high hat and stop pretending you do.

meroes 13 hours ago | parent [-]

This all just boils down to the Chinese Room thought experiment, where I'm pretty sure the consensus is that nothing in the experiment (not the person inside, not the whole emergent room, etc.) understands Chinese the way we do.

Another example by Searle is a computer simulating digestion is not digesting like a stomach.

The people saying AI can't emerge from LLMs are on the consensus side of the Chinese Room. The digestion simulator could tell us where every single atom of a stomach digesting a meal is, and it's still not digestion. Only once the computer simulation actually breaks down food particles chemically and physically would it be digestion. Likewise, only once an LLM receives photons, or has the physical capacity to receive them, is there anything like "seeing a night sky".

pastel8739 a day ago | parent | prev [-]

Is it really so rare? I feel like I know of tons of fields where we have methods that work empirically even though we don't understand all the theory. I'd actually argue that we don't know what's "actually" happening _ever_, but have only built enough understanding to do useful things.

ajross a day ago | parent [-]

I mean, most big changes in the tech base don't have that characteristic. Semiconductors require only 1920s physics to describe (and a ton of experimentation to figure out how to manufacture). The motor revolution of the early 1900s was all built on well-settled thermodynamics (chemistry lagged a bit, but you don't need a lot of chemical theory to burn stuff). Maxwell's electrodynamics explained all of industrial electrification but predated it by 50 years, etc...

skydhash a day ago | parent [-]

Those big changes always happen because someone presented a simpler model that explains things well enough that we can build on it. It's not like the raw materials for semiconductors weren't around.

The technology around LLMs is fairly simple. What is not simple is the sheer size of the data being ingested and the number of resulting parameters (weights). We have a formula and the parameters to generate grammatically perfect text, but to obtain those parameters you need TBs of data distilled into GBs of numbers.
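To illustrate how small the formula itself is, here is a toy numpy sketch of scaled dot-product attention, the core operation inside transformer LLMs (toy shapes and random data, nothing like a production implementation):

    # Toy sketch of scaled dot-product attention; illustrative only.
    import numpy as np

    def attention(Q, K, V):
        d_k = K.shape[-1]
        # Similarity of each query with each key, scaled by sqrt(d_k)
        scores = Q @ K.T / np.sqrt(d_k)
        # Softmax turns each row of scores into weights summing to 1
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        # Each output is a weighted average of the value vectors
        return weights @ V

    rng = np.random.default_rng(0)
    Q = rng.normal(size=(4, 8))  # 4 tokens, 8-dim vectors
    K = rng.normal(size=(4, 8))
    V = rng.normal(size=(4, 8))
    print(attention(Q, K, V).shape)  # (4, 8)

The hard part isn't this math; it's the terabytes of text and billions of trained weights that parameterize it.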

In contrast, something like Turing machines or Church's lambda notation is pure genius: fewer than a hundred pages of theorems that form one of the main pillars of the tech world.
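As a toy illustration of how little machinery that foundation needs (Python lambdas standing in for Church's notation, not his original formalism):

    # Church numerals in Python: a number n is a function applying f n times
    zero = lambda f: lambda x: x
    succ = lambda n: lambda f: lambda x: f(n(f)(x))
    add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

    to_int = lambda n: n(lambda k: k + 1)(0)  # decode for display
    two = succ(succ(zero))
    three = succ(two)
    print(to_int(add(two)(three)))  # 5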

ajross 15 hours ago | parent [-]

> Those big changes always happen because someone presented a simpler model that explains things well enough that we can build on it.

Again, no they don't. It didn't happen that way with industrial steelmaking, which was ad hoc and lucky. It isn't happening that way with AI, which no one actually understands.

skydhash 10 hours ago | parent [-]

I'm pretty sure there were always formulas for getting high-quality steel, even before the industrial age. And you only need a few textbooks and papers to understand AI.