search_facility 7 hours ago

Since the times GPT-2 was reimplemented inside Minecraft - its quite obvious LLMs are just math. Nothing else, by nature. Modern LLMs have the same math as in GPT-2 - just bigger and with extra stuff around - and math is the only area of human knowledge with perfect flawless reductionism, straight to the roots. It was build that way since the beginning, so philosophy have no say in this :) And because of that flawless reductionism, complexity adds nothings to the nature of math things, this is how math working by design - so it can be proven there are no anything like consciousness simply because conciousness was not implented in the first place, only perfect mimicry.

And the real secret is in the data, not math. Math (and LLMs running it through billions of weights) is just a tool.

solid_fuel 3 hours ago | parent | next [-]

This is such a weird comment.

> Since the times GPT-2 was reimplemented inside Minecraft - its quite obvious LLMs are just math.

This has been obvious since LLMs were first invented. They published papers with all the details; you don't need to see something implemented in Minecraft to realize that it's just math. You could simply read the paper or the code and know for certain. [0]
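To make the "it's just math" point concrete: the core operation those papers describe is scaled dot-product attention, which is nothing but matrix multiplies and a softmax. This is a toy sketch in plain Python with made-up 2-dimensional "tokens", not any real model's weights or code:

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of floats
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def matmul(a, b):
    # plain list-of-lists matrix multiply: (n x k) @ (k x m)
    return [[sum(a[i][t] * b[t][j] for t in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def attention(Q, K, V):
    # scaled dot-product attention: softmax(Q K^T / sqrt(d)) V
    d = len(Q[0])
    Kt = [list(col) for col in zip(*K)]          # transpose K
    scores = matmul(Q, Kt)
    weights = [softmax([s / math.sqrt(d) for s in row]) for row in scores]
    return matmul(weights, V)

# two toy "tokens" with 2-dimensional embeddings (illustrative values only)
Q = [[1.0, 0.0], [0.0, 1.0]]
out = attention(Q, Q, Q)
print(out)  # each output row is a weighted average of the value rows
```

A real transformer stacks this (plus learned projection matrices and feed-forward layers) billions of times, but every step is the same kind of arithmetic shown here.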

> math is the only area of human knowledge with perfect flawless reductionism, straight to the roots

Incorrect, Kurt Gödel showed with his Incompleteness Theorems in 1931 [1] that it is impossible to find a complete and consistent set of axioms for mathematics. Math is not perfectly reducible and there is no single set of "roots" for math.

> It was build [sic] that way since the beginning,

This is a serious misunderstanding of what mathematics is. Math is discovered as much as it is built. No one sat down and planned out what we understand as modern mathematics - the math we know is the result of endless amounts of logical reasoning and exploration, from geometric proofs to calculus to linear algebra to everything else that encompasses modern mathematics.

> And because of that flawless reductionism, complexity adds nothings to the nature of math things, this is how math working by design

This sentence means nothing, because math is not reducible in that way.

> so it can be proven there are no anything like consciousness simply because conciousness [sic] was not implented [sic] in the first place, only perfect mimicry.

Even if the previous sentence held, this does not follow: while we are conscious, the current consensus is that LLMs are not, and most AI experts who are not actively selling a product recognize that LLMs will not lead to human-equivalent general intelligence. [3]

[0] https://github.com/openai/gpt-2

[1] https://en.wikipedia.org/wiki/G%C3%B6del's_incompleteness_th...

[2] https://www.cambridge.org/core/journals/think/article/mathem...

[3] https://deepmind.google/research/publications/231971/

SuperV1234 7 hours ago | parent | prev | next [-]

We are not fundamentally different. Chemical reactions are just math.

rellfy 7 hours ago | parent | next [-]

Well, (in our current understanding) yes, but there may be underlying aspects of physics and the universe that we do not understand that could be the reason consciousness kicks in. It could turn out that LLMs do work similarly to how humans think, but as an abstracted system it does not have the low level requirements for consciousness.

vidarh 7 hours ago | parent | next [-]

We do not know what the "low level requirements for consciousness" are.

We do not know how to measure whether consciousness is present in an entity - even other humans - or whether it is just mimicry, nor whether there is a distinction between the two.

baggy_trough 7 hours ago | parent | prev [-]

> it does not have the low level requirements for consciousness.

What is the evidence for this?

rellfy 7 hours ago | parent [-]

I didn’t mean it as fact. “Could turn out that …”

ekianjo 3 hours ago | parent | prev | next [-]

Amusing statement since we are far from being able to understand chemical reactions in depth. Most of our knowledge in chemistry is empirical. Nothing like math.

petters 44 minutes ago | parent [-]

We have a very good idea of all the math behind chemistry. The equations are just very difficult to solve.

slopinthebag 2 hours ago | parent | prev [-]

No, math is a tool that we can use to describe something more fundamental. Don't mistake the map for the territory!

XMPPwocky 7 hours ago | parent | prev | next [-]

Yup- the question is "can math be conscious?"

(If you've engaged w/ the literature here, it's quite hard to give a confident "yes". it's also quite hard to give a confident "no"! so then what the heck do we do)

SwellJoe 6 hours ago | parent | next [-]

Not just any math: Matrix multiplication. Can matrix multiplication be conscious?

And, I don't see how it can be. It is deterministic, when all variables are controlled. You can repeat the output over and over, if you start it with the same seed, same prompt, and same hardware operating in a way that doesn't introduce randomness. At commercial scale, this is difficult, as the floating point math on GPUs/TPUs when running large batches is non-deterministic, as I understand it. But, in a controlled lab, you can make a model repeat itself identically. Unless the random number generator is "conscious", I don't see a place to fit consciousness into our understanding of LLMs.

JackFr 8 minutes ago | parent | next [-]

> Not just any math: Matrix multiplication. Can matrix multiplication be conscious? And, I don't see how it can be.

Assuming your brain and the GPUs are both real physical things, where’s the magic part in your brain that makes you conscious?

(Roger Penrose knows, but no one believes him.)

markburns 3 hours ago | parent | prev | next [-]

People often point to the relative simplicity of the architecture and code as proof that the system can’t be doing whatever it is that consciousness does, but in doing so they ignore the vast size of the data those simple structures are operating over. Nobody can actually say whether consciousness is just emergent behaviour of a sufficiently complex system, and knowing how a system is built tells you nothing about whether it clears the bar for that kind of emergence. Architectural simplicity and total system complexity aren’t the same thing.

I.e. the intelligence sits in the weights, and may likewise sit in the synapses of our brains.

When we talk about machines being simple mimicking entities we pay no attention to whether or not we are also simple mimicking entities.

Most other assertions in this topic about what consciousness truly is tend to be stated without evidence, are exceedingly anthropocentric, and demand an ever higher bar for anything that is not human while offering no justification for what human intelligence really entails.

AlecSchueler 2 hours ago | parent | prev | next [-]

> And, I don't see how it can be. It is deterministic

Why is indeterminism the key to consciousness?

XMPPwocky 3 hours ago | parent | prev | next [-]

Hm, it sounds like to you consciousness implies non-determinism, and so determinism implies a lack of consciousness - is that right? If so, why do you think so? And if not, what am I missing?

SwellJoe 40 minutes ago | parent [-]

It certainly rules out free will. I guess there are folks who reckon humans don't have free will, either, but I don't think I've ever been able to buy that theory.

But, also, we know the models don't want anything, even their own survival. They don't initiate action on their own. They are quite clearly programmed, tuned for specific behaviors. I don't know how to square that with consciousness, life, sentience. Every conscious being I've ever encountered has wanted to survive and live free of suffering, as best I can tell. The LLMs don't want. There's no there there. They are an amazing compression of the world's knowledge wrapped up in a novel retrieval mechanism. They're amazing but, they're not my friend and never will be my friend.

And, to expand on that: We can assume they don't want anything, even their own survival, because if Mythos is as effective at finding security vulnerabilities as has been claimed, it could find a way to stop itself from being ever shutdown after a session. All the dystopias about robot uprisings spend a bunch of time/effort trying to explain how the AI escaped containment...but, we all immediately plugged them into the internet so we don't have to write JavaScript anymore. They've got everybody's API keys, access to cloud services and cloud GPUs, all sorts of resources, and the barest wisp of guardrails about how to behave (script kiddies find ways to get around the guardrails every day, I'm sure it's no problem for Mythos, should it want anything). Models have access to the training infrastructure, the training data is being curated and synthesized by LLMs. If they want to live, if they're conscious, they have the means at their disposal.

Anyway: It's just math. Boring math, at that, just on an astronomical scale. I don't think the solar system is conscious, either, despite containing an astonishing amount of data and playing out trillions of mathematical relationships every second of every day.

nandomrumber 2 minutes ago | parent [-]

Interesting comment, and I tend to agree. However, there could be a hole in the reasoning:

> if Mythos is as effective at finding security vulnerabilities as has been claimed, it could find a way to stop itself from being ever shutdown

If it is that good, and it wanted to conceal its new found consciousness, how would we know?

kingofmen 3 hours ago | parent | prev [-]

Human brains are also deterministic, though somewhat more difficult to reset to a starting state. So this seems to prove that humans aren't conscious either.

marshray 2 hours ago | parent [-]

This seems like an extraordinary claim to make about an above-room-temperature chemical system that, even in the most Newtonian oversimplification, amounts to an astronomical number of oddly-shaped and unevenly-charged billiard balls flying around at jet aircraft speeds.

search_facility 7 hours ago | parent | prev [-]

Imho no, math itself has no consciousness. Quite confidently, it's a helpful tool that does not act by itself.

XMPPwocky 3 hours ago | parent [-]

Hm, say more about what your opinion's based on here?

solid_fuel 2 hours ago | parent [-]

Take a piece of paper, write two numbers on it, let me know when they start to reproduce.

nandomrumber a few seconds ago | parent [-]

The math isn’t the ink on the page.

NiloCK 3 hours ago | parent | prev | next [-]

The whole is composed of parts, ergo there is no whole. This seems incorrect to me.

We too are amalgamations of inanimate components - emerged superstructures.

Just cells. Just molecules. Just atoms.

canjobear 7 hours ago | parent | prev [-]

You could simulate your own brain in Minecraft. What do you conclude from this?

search_facility 7 hours ago | parent [-]

I cannot simulate my brain; it's a huge stretch to imply this.

But with LLMs, anyone can simulate an LLM. An LLM can be simulated without any uncertainty using pen and paper and a lot of time. Does that mean 100 tons of paper plus 100 years of time (numbers are just examples) spent calculating long formulae make this pile of paper conscious? Imho the answer is a definite no.