measurablefunc 4 hours ago

We know exactly what is going on inside the box. The problem isn't knowing what is going on inside the box; the problem is that it's all binary arithmetic, & no human being evolved to make sense of binary arithmetic, so it seems like magic to you when in reality it's nothing more than a circuit w/ billions of logic gates.

famouswaffles 4 hours ago

We do not know or understand even a tiny fraction of the algorithms and processes a Large Language Model employs to answer any given question. We simply don't. Ironically, only the people who understand things the least think we do.

Your comment about 'binary arithmetic' and 'billions of logic gates' is just nonsense.

measurablefunc 4 hours ago

Not even wrong: https://claude.ai/public/artifacts/b649c8ca-7907-4597-a4ee-0...

camgunz 3 hours ago

"Look man all reality is just uncountable numbers of subparticles phasing in and out of existence, what's not to understand?"

measurablefunc 2 hours ago

Your response is a common enough fallacy to have a name: straw man.

stickfigure 2 hours ago

I think the fallacy at hand is more along the lines of "no true scotsman".

You can define understanding to require such detail that nobody can claim it; you can define understanding to be so trivial that everyone can claim it.

"Why does the sun rise?" Is it enough to understand that the Earth revolves around the sun, or do you need to understand quantum gravity?

measurablefunc an hour ago

Good point. OP was saying "no one knows" when in fact plenty of people do know, but people also often conflate knowing & understanding w/o realizing that's what they're doing. People who have studied programming, electrical engineering, ultraviolet lithography, quantum mechanics, & so on know what is going on inside the computer. That's different from saying they understand billions of transistors, b/c no one really understands billions of transistors, even though a single transistor is understood well enough to be manufactured in quantities large enough that almost anyone who wants one can have the equivalent of a supercomputer in their pocket for less than $1k: https://www.youtube.com/watch?v=MiUHjLxm3V0.

Somewhere along the way from one transistor to a few billion, human understanding stops, but we still know how it was all assembled together to perform Boolean arithmetic operations.
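
To make that concrete, here's a minimal sketch (illustrative Python, not any real hardware description) of the kind of "knowing" I mean: a 1-bit full adder built from nothing but AND/OR/XOR gates, the sort of block that, repeated billions of times, is all the silicon is ever doing.

    # 1-bit full adder from primitive gates.
    def full_adder(a, b, carry_in):
        s1 = a ^ b                               # XOR gate
        sum_bit = s1 ^ carry_in                  # XOR gate
        carry_out = (a & b) | (s1 & carry_in)    # AND & OR gates
        return sum_bit, carry_out

    # Ripple-carry addition of two bit lists (least significant bit first).
    def add_bits(x_bits, y_bits):
        carry, out = 0, []
        for a, b in zip(x_bits, y_bits):
            s, carry = full_adder(a, b, carry)
            out.append(s)
        return out + [carry]

    print(add_bits([1, 1, 0], [1, 0, 1]))  # 3 + 5 -> [0, 0, 0, 1], i.e. 8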

famouswaffles 27 minutes ago

Honestly, you are just confused.

With LLMs, the "knowing" you're describing is trivial and doesn't really constitute knowing at all. It's just the physics of the substrate. When people say LLMs are a black box, they aren't talking about the hardware or the fact that it's "math all the way down." They are talking about interpretability.

If I hand you a 175-billion-parameter model as a pile of raw tensors, your 'knowledge' of logic gates doesn't help you explain why a specific circuit within it represents "the concept of justice" or how it decided to pivot a sentence in a specific direction.
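
To illustrate the gap with a rough sketch (made-up shapes and random data, not any real model's internals): the substrate-level "knowing" hands you unlabeled floats, and anything like "this direction encodes concept X" has to be recovered empirically, e.g. by fitting a probe on activations.

    import numpy as np

    rng = np.random.default_rng(0)

    # What "it's all math" gives you: an unlabeled blob of parameters.
    W = rng.standard_normal((4096, 4096))    # one of many weight matrices
    print(W[0, :5])                          # just floats; nothing here says "justice"

    # What interpretability work actually attempts (hypothetical data):
    # fit a linear probe that predicts some property from hidden activations.
    acts = rng.standard_normal((1000, 4096))   # activations on 1000 prompts
    labels = rng.integers(0, 2, size=1000)     # e.g. "is the tone positive?"
    probe, *_ = np.linalg.lstsq(acts, labels - labels.mean(), rcond=None)
    # At best you've found a direction correlated with the property --
    # still nowhere near a gate-level explanation of the computation.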

On the other hand, the very professions you cited rely on interpretability. A civil engineer doesn't look at a bridge, dismiss it as "a collection of atoms," and stop there. They can point to a specific truss, explain exactly how it manages tension and compression, and tell you why it could collapse under certain conditions. A software engineer can step through a debugger and tell you why a specific if statement triggered.

We don't even have that much for LLMs, so why would you say we have an idea of what's going on?

measurablefunc 21 minutes ago

No one relies on "interpretability" in quantum mechanics. It is famously uninterpretable. In any case, I don't think any further engagement is going to be productive for anyone here so I'm dropping out of this thread. Good luck.

famouswaffles 8 minutes ago

Quantum mechanics has competing interpretations (Copenhagen, Many-Worlds, etc.) about what the math means philosophically, but we still have precise mathematical models that let us predict outcomes and engineer devices.
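
For a toy illustration (textbook quantum mechanics, independent of any particular interpretation): the Born rule turns a state vector into exact, testable outcome probabilities.

    import numpy as np

    # Qubit in the state (|0> + |1>) / sqrt(2). Interpretations argue about what
    # the amplitudes "are", but everyone computes the same predictions from them.
    psi = np.array([1.0, 1.0]) / np.sqrt(2)
    probs = np.abs(psi) ** 2     # Born rule
    print(probs)                 # [0.5 0.5] -- predictable, engineerable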

Again, we lack even this much with LLMs, so why say we know how they work?