agentultra an hour ago

Not surprised at all.

My point is that it takes more hand-waving and magic belief to anthropomorphize LLM systems than it does to treat them as what they are.

You gain nothing from understanding them as if they were no different from people and philosophizing about whether a Turing machine can simulate a human brain. That's fine for a science fiction novel that asks what it means to be a person, or questions the morality of how we treat people we see as different from ourselves. It's not useful for understanding how an LLM works or what it does.

In fact, I'd say it's harmful, given the emerging studies on cognitive decline when people rely on LLMs to replace skill use, and on the psychosis being observed in people who really do believe that chat bots are a superior form of intelligence.

As for brains, it might be that what we observe as “reasoning” and “intelligence” and “consciousness” is tied to the hardware, so to speak. Certainly what we’ve observed in the behaviour of bees and corvids has had a more dramatic effect on our understanding of these things than arguing about whether a Turing machine locked in a room could pass as human.

We certainly don’t run climate models on computers, call them “Earth,” and try to convince anyone that we’re about to create parallel dimensions.

I don’t read Church’s paper on Lambda Calculus and get the belief that we could simulate all life from it. Nor Turing’s machine.

I guess I’m just not easily awed by LLMs and neural networks. We know that they can approximate any continuous function to within some epsilon, given an unbounded network. But if you restate the theorem formally, it loses much of its power to convince anyone that we could simulate any function. Some useful ones, sure, and we know that we can optimize computation to perform particular tasks, but we also know what those limits are, and for most functions, I imagine, we simply do not have enough atoms in the universe to approximate them.
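For reference, the formal statement in question (a sketch of the classic one-hidden-layer version, roughly following Cybenko and Hornik) illustrates the point: the hidden width N is unbounded, the guarantee covers only continuous functions on a compact set, and nothing is said about how to find the weights.

```latex
% Universal approximation (one hidden layer, sketch):
% for any continuous f on a compact K \subset \mathbb{R}^n
% and any \epsilon > 0, there exist N, \alpha_i, w_i, b_i with
g(x) = \sum_{i=1}^{N} \alpha_i \, \sigma(w_i^{\top} x + b_i),
\qquad \sup_{x \in K} \lvert f(x) - g(x) \rvert < \epsilon .
```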

LLMs and NNs and all of these things are neat tools. But there’s no explanatory power gained by fooling ourselves into treating them like they are people, could be people, or behave like people. It’s a system composed of data and algorithms to perform a particular task. Understanding it this way makes it easier, in my experience, to understand the outputs they generate.

Kim_Bruning an hour ago | parent | next [-]

> philosophizing about whether a Turing machine can simulate a human brain

Existence proof:

  * DNA transcription (a Turing machine, as per Turing 1936)
  * leads to Alan Turing by means of morphogenesis (Turing 1952)
  * Alan Turing has a brain that writes the two papers
  * thus proving he is at least a Turing machine (by writing Turing 1936)
  * and capable of simulating chemical processes (by writing Turing 1952)

Turing 1936: https://www.cs.virginia.edu/~robins/Turing_Paper_1936.pdf

Turing 1952: https://www.dna.caltech.edu/courses/cs191/paperscs191/turing...
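Whatever one makes of the claim that DNA transcription is a Turing machine, the object itself is tiny. As a minimal sketch, here is a single-tape machine simulator; the transition table is a hypothetical example program that increments a unary number (a run of 1s) by appending one more 1:

```python
# Minimal single-tape Turing machine simulator (illustrative sketch).
def run(tape, transitions, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        # Each rule maps (state, read symbol) -> (next state, write, move)
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Hypothetical example program: unary increment.
INC = {
    ("start", "1"): ("start", "1", "R"),  # scan right over the 1s
    ("start", "_"): ("halt", "1", "R"),   # write one more 1, then halt
}

print(run("111", INC))  # -> 1111
```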

WarmWash an hour ago | parent | prev [-]

I don't see where I mentioned LLMs or what they have to do with a discussion about compute substrates.

My point is that it is incredibly unlikely the brain has any kind of monopoly on the algorithms it executes. Contrary to your point, a brain is in fact a computer.

staticman2 14 minutes ago | parent [-]

> Contrary to your point, a brain is in fact a computer.

Whether a brain is a computer is resolved entirely by your definition of “computer”. And being definitional in nature, the assertion is banal.