vidarh 2 days ago

> However, LLMs will not be able to represent ideas that it has not encountered before. It won't be able to come up with truly novel concepts, or even ask questions about them. Humans (some at least) have that unbounded creativity that LLMs do not.

There's absolutely no evidence to support this claim. It'd require humans to exceed the Turing computable, and we have no evidence that is possible.

koliber 2 days ago | parent | next [-]

If you tell me that trees are big, and trees are made of hard wood, I as a human am capable of asking whether trees feel pain. I don't think what you said is false, and I am not familiar enough with computational theory to debate it. People occasionally have novel creative insights that do not derive from past experience or knowledge, and that is what I think of when I think of creativity.

Humans created novel concepts like writing literally out of thin air. I like how the book "Guns, Germs, and Steel" describes that novel creative process and contrasts it with the derivative process of dissemination.

vidarh 2 days ago | parent | next [-]

> People occasionally have novel creative insights that do not derive from past experience or knowledge, and that is what I think of when I think of creativity.

If they are not derived from past experience or knowledge, then unless humans exceed the Turing computable, they would need to be the result of randomness in one form or another. There's absolutely no reason why an LLM cannot do that. The only reason a far "dumber" string generator based on a pure random number generator "can't" do that is that it would take too long to chance on something coherent, but it most certainly would keep spitting out novel things. The only difference is how coherent the novel things are.
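To make the toy version of that point concrete, here is a minimal sketch (plain Python, nothing LLM-specific): it keeps emitting strings that have, with overwhelming probability, never been written before; what it won't produce is coherence.

    import random
    import string

    # Emit a string drawn uniformly at random over lowercase letters and spaces.
    def random_string(length=40):
        alphabet = string.ascii_lowercase + " "
        return "".join(random.choice(alphabet) for _ in range(length))

    print(random_string())  # almost certainly novel, almost certainly incoherent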

Jensson a day ago | parent [-]

> If they are not derived from past experience or knowledge

Every animal is born with intuition; you missed that part.

vidarh a day ago | parent [-]

So knowledge encoded in the physical structure of the brain.

You're missing the part where, unless there is unknown physics going on in the brain that breaks maths as we know it, there is no mechanism for a brain to exceed the Turing computable, in which case any Turing complete system is computationally equivalent to it.

arowthway a day ago | parent | prev | next [-]

Turing machines are deterministic; the brain might not be, because of quantum mechanics. Of course, there is no proof that this is related to creativity.

vidarh a day ago | parent [-]

Turing machines are deterministic only if all their inputs are deterministic, which they do not need to be. Indeed, LLMs are by default not deterministic, because we intentionally inject randomness into the sampling step.
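To be concrete about where the randomness enters, here is a minimal sketch of temperature sampling, the standard way decoding is made nondeterministic (the logits are made-up scores over a hypothetical 4-token vocabulary):

    import numpy as np

    rng = np.random.default_rng()

    def sample_token(logits, temperature=1.0):
        logits = np.asarray(logits, dtype=float)
        if temperature == 0.0:
            return int(np.argmax(logits))            # greedy decoding: deterministic
        scaled = logits / temperature                # higher temperature flattens the distribution
        probs = np.exp(scaled - scaled.max())        # softmax, shifted for numerical stability
        probs /= probs.sum()
        return int(rng.choice(len(probs), p=probs))  # the intentionally injected randomness

    logits = [2.0, 1.0, 0.5, -1.0]                # hypothetical vocabulary scores
    print(sample_token(logits, temperature=0.0))  # always token 0
    print(sample_token(logits, temperature=1.0))  # varies from run to run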

arowthway a day ago | parent [-]

It doesn't mean we can accurately simulate the brain by swapping its source of nondeterminism with any other PRNG or TRNG. It might just so happen that to simulate ingenuity you have to simulate the universe first.

johnisgood a day ago | parent | prev [-]

This Turing completeness equivalence is misleading. While all Turing-complete systems can theoretically compute the same class of functions, this says nothing about computational complexity, physical constraints, practical achievability in finite time, or the actual algorithms required. A Turing machine that can theoretically simulate a brain does not mean we know how to do it or that it is even feasible. This is like arguing that because weather systems and computers both follow physical laws, you should be able to perfectly simulate weather on your laptop.

Additionally, "No mechanism to exceed Turing computable" is a non-sequitur. Even granting that brains do not perform hypercomputation, this does not support your conclusion that artificial systems are "computationally equivalent" to brains in any practical sense. We would need: (1) complete understanding of brain algorithms, (2) the actual data/weights encoded in neural structures, (3) sufficient computational resources, and (4) correct implementation. None of these follow from Turing completeness alone, I believe.

More importantly, you completely dodged the actual point about intuition. Jensson's point is about evolutionary encoding vs. learned knowledge. Intuition represents millions of years of evolved optimization encoded in brain structure and chemistry. You acknowledge this ("knowledge encoded in physical structure") but then pivot to an irrelevant theoretical CS argument rather than addressing whether we can actually replicate such evolutionary knowledge in artificial systems.

Your original claim was "If they are not derived from past experience or knowledge", which creates a false dichotomy. Animals are born with innate knowledge encoded through evolutionary optimization. This is not learned from individual experience, yet it is still knowledge; specifically, it is millions of years of selection pressure encoded in neural architecture, reflexes, instincts, and cognitive biases.

So, for example: a newborn animal has never experienced a predator, but knows to freeze or flee from certain stimuli. It has built-in heuristics for threat assessment, social behavior, spatial reasoning, and countless other domains that took generations of survival pressure to develop.

Current AI systems lack this evolutionary substrate. They are trained on human data over weeks or months, not evolved over millions of years. We do not even know how to encode this type of knowledge artificially or even fully understand what knowledge is encoded in biological systems. Turing completeness does not bridge this gap any more than it bridges the gap between a Turing machine and actual weather.

Correct me if I'm misinterpreting your argument.

alansammarone 14 hours ago | parent [-]

I...I am very interested in this subject. There's a lot to unpack in your comment, but I think it's really pretty simple.

> this does not support your conclusion that artificial systems are "computationally equivalent" to brains in any practical sense.

You're making a point about engineering or practicality, and in that sense, you are absolutely correct.

That's not the most interesting part of the question, however.

> This is like arguing that because weather systems and computers both follow physical laws, you should be able to perfectly simulate weather on your laptop.

Yes, that's exactly what I'd argue, and... hm... yes, I think that's clearly true. Whether it takes 10 minutes or 10^100 minutes, ~1 or 10^100 human lifetimes to do so, it's irrelevant. Units (including human lifetimes) are arbitrary, and I think fundamental truths probably won't depend on such arbitrary things as how long a particular collection of atoms in a particular corner of the universe (i.e. humans) happens to be stable for. Ratios are closer to being fundamental, but I digress.

To put it a different way: we think we know what the speed of light is. Traveling at v = 0.1c and at v = (1 - 10^(-100))c are equivalent in a fundamental sense; the difference is an engineering problem. Now, traveling at v = c... that's very different. That's interesting.

c22 a day ago | parent | prev [-]

Wouldn't this insight derive from many past experiences of feeling pain yourself and the knowledge that others feel it too?

somenameforme a day ago | parent | prev | next [-]

Turing computability is tangential to his claim, as LLMs are obviously not carrying out the breadth of all computable concepts. His claim can be trivially proven by considering the history of humanity. We went from a starting point of having literally no language whatsoever, and technology that would not have extended much beyond an understanding of 'poke him with the pointy side'. And from there we would go on to discover the secrets of the atom, put a man on the Moon, and more. To say nothing of inventing language itself.

An LLM trained on this starting state of humanity is never going to do anything except remix basically nothing. It's never going to discover the secrets of the atom, or how to put a man on the Moon. Now, whether any artificial device could achieve what humans did is where the question of computability comes into play, and that's a much more interesting one. But if we limit ourselves to LLMs, then this is very straightforward to answer.

vidarh a day ago | parent [-]

> Turing computability is tangential to his claim, as LLMs are obviously not carrying out the breadth of all computable concepts

They don't need to. To be Turing complete, a system including an LLM needs to be able to simulate a 2-state, 3-symbol Turing machine (or a 3-state, 2-symbol one). Any LLM with a loop can satisfy that.

If you think Turing computability is tangential to this claim, you don't understand the implications of Turing computability.

> His claim can be trivially proven by considering the history of humanity.

Then show me a single example of humans demonstrably exceeding the Turing computable.

We don't even know any way for that to be possible.

somenameforme a day ago | parent [-]

This is akin to claiming that a tic-tac-toe game is Turing complete since, after all, we could simply modify it so that it is no longer a tic-tac-toe game. It's not exactly a clever argument.

And again, there are endless things that seem to reasonably defy Turing computability, except when you assume your own conclusion. Going from nothing, not even language, to richly communicating and inventing things with no logical basis for such is difficult to even conceive of as a computable process, unless again you simply assume that it must be computable. For a more common example that rapidly enters the domain of philosophy, there is the nature of consciousness.

It's impossible to prove that such is Turing computable because you can't even prove consciousness exists. The only way I know it exists is that I'm most certainly conscious, and I assume you are too, but you can never prove that to me, any more than I could ever prove I'm conscious to you. And so now we enter the domain of trying to computationally imagine something which you can't even prove exists; it's all just a complete nonstarter.

-----

I'd also add here that I think the current consensus among those in AI is implicit agreement on this point. If we genuinely wanted AGI, it would make vastly more sense to start from as little as possible, because it would ostensibly reduce computational and other requirements by many orders of magnitude, and we could likely also create a more controllable and less biased model by starting from a bare minimum of first principles. And there's potentially trillions of dollars for anybody who could achieve this. Instead, we get everything dumped into token prediction algorithms, which are inherently limited in potential.

vidarh a day ago | parent [-]

> This is akin to claiming that a tic-tac-toe game is Turing complete since, after all, we could simply modify it so that it is no longer a tic-tac-toe game. It's not exactly a clever argument.

No, it is nowhere remotely like that. It is claiming that a machine capable of running a Turing machine is in fact capable of running any other Turing machine. In other words, it is pointing out the principle of Turing equivalence.

> And again, there are endless things that seem to reasonably defy Turing computability

Show us one. We have no evidence of any single one.

> It's impossible to prove that such is Turing computable because you can't even prove consciousness exists.

Unless you can show that humans exceed the Turing computable, "consciousness", however you define it, is either possible purely with a Turing complete system or cannot affect the outputs of such a system. In either case, this argument is irrelevant unless you can show evidence we exceed the Turing computable.

> I'd also add here that I think the current consensus among those in AI is implicit agreement on this point. If we genuinely wanted AGI, it would make vastly more sense to start from as little as possible, because it would ostensibly reduce computational and other requirements by many orders of magnitude, and we could likely also create a more controllable and less biased model by starting from a bare minimum of first principles. And there's potentially trillions of dollars for anybody who could achieve this. Instead, we get everything dumped into token prediction algorithms, which are inherently limited in potential.

This is fundamentally failing to engage with the argument. There is nothing in the argument that tells us anything about the complexity of a solution to AGI.

somenameforme a day ago | parent [-]

LLMs are not capable of simulating Turing machines - their output is inherently and inescapably probabilistic. You would need to fundamentally rewrite one to make this possible, at which point it is no longer an LLM.

And as I stated, you are assuming your own conclusion to debate the issue. You believe that nothing is incomputable, and are building that assumption into your argument. It's not on me to prove your assumption wrong; it's on you to prove that it's correct - proving a negative is impossible. E.g., I'm going to assume that there is an invisible, green, massless goblin on your shoulder named Kyzirgurankl. Prove me wrong. Can you give me even the slightest bit of evidence against it? Of course you cannot, yet absence of evidence is not evidence of absence, so the burden of my claim rests on me.

And so now, feel free to prove that consciousness is computable, or even that replicating humanity's successes from a comparable baseline is. Without that proof, you must understand that you're not making some falsifiable claim of fact, but simply appealing to your own personal ideology or philosophy, which is of course completely fine (and even a good thing), but also a completely subjective opinion on these matters.

johnisgood 19 hours ago | parent [-]

After having read your comment, I feel I should have left my comment under this thread. I will just refer to it instead: https://news.ycombinator.com/item?id=46003870. This was my reply to your parent. I agree with you.

Fargren a day ago | parent | prev [-]

You are making a big assumption here, which is that LLMs are the main "algorithm" that the human brain uses. The human brain could easily be a Turing machine that's "running" something that's not an LLM. If that's the case, the fact that humans can come up with novel concepts does not imply that LLMs can do the same.

vidarh a day ago | parent [-]

No, I am not assuming anything about the structure of the human brain.

The point of talking about Turing completeness is that any universal Turing machine can emulate any other (Turing equivalence). This is fundamental to the theory of computation.

And since we can easily show that both can be rigged up in ways that make the system Turing complete, for humans to be "special", we would need to be more than Turing complete.

There is no evidence to suggest we are, and no evidence to suggest that is even possible.

Fargren a day ago | parent [-]

An LLM is not a universal Turing machine, though. It's a specific family of algorithms.

You can't build an LLM that will factorize arbitrarily large numbers, even in infinite time. But a Turing machine can.

vidarh a day ago | parent [-]

To make a universal Turing machine out of an LLM requires only a loop and the ability to make a model that looks up operations in a 2x3 transition table based on the context and writes the resulting operations back to the context (the smallest known universal Turing machine has 2 states and 3 symbols, or the inverse).

So, yes, you can.

Once you have a (2,3) Turing machine, you can build from that a model that models any larger Turing machine - it's just a question of allowing it enough computation and enough layers.

It is not guaranteed that any specific architecture can do it efficiently, but that is entirely beside the point.
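For what it's worth, here is a minimal sketch of the "lookup plus outer loop" construction being described, with a plain dict standing in for the model's lookup and a growable tape standing in for the context. The transition table is an illustrative placeholder, not the actual rules of the known (2,3) universal machine:

    from collections import defaultdict

    # (state, symbol) -> (symbol to write, head move, next state).
    # Placeholder rules for illustration, not the real (2,3) universal machine.
    TABLE = {
        ("A", 0): (1, +1, "B"), ("A", 1): (2, -1, "A"), ("A", 2): (1, -1, "A"),
        ("B", 0): (2, +1, "A"), ("B", 1): (2, +1, "B"), ("B", 2): (0, -1, "A"),
    }

    def run(steps=20):
        tape = defaultdict(int)  # unbounded tape of symbols, blank = 0
        head, state = 0, "A"
        for _ in range(steps):   # the outer loop the argument requires
            write, move, state = TABLE[(state, tape[head])]  # the "model" lookup
            tape[head] = write   # write the operation back to the "context"
            head += move
        return dict(tape)

    print(run())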

Fargren a day ago | parent | next [-]

LLMs cannot loop (unless you have a counterexample?), and I'm not even sure they can do a lookup in a table with 100% reliability. They also have finite context, while a Turing machine can have infinite state.

johnisgood a day ago | parent | prev [-]

Are you saying that LLMs are Turing complete or did I misunderstand it?