agentultra an hour ago

> The brain is either a magical antenna channeling supernatural signals

There’s the classic thought-terminating cliche of the computational interpretation of consciousness.

If it isn’t computation, you must believe in magic!

Brains are way more fascinating and interesting than transistors, memory caches, and storage media.

WarmWash 33 minutes ago | parent [-]

You would probably be surprised to learn that computational theory has little to no talk of "transistors, memory caches, and storage media".

You could run Crysis on an abacus and render it on a board of colored pegs if you had the patience for it.

It cannot be stressed enough that discovering computation (solving equations and making algorithms) is a different field than executing computation (building faster components and discovering new architectures).

agentultra 10 minutes ago | parent [-]

Not surprised at all.

My point is that it takes more hand-waving and magic belief to anthropomorphize LLM systems than it does to treat them as what they are.

You gain nothing from understanding them as if they were no different from people and philosophizing about whether a Turing machine can simulate a human brain. That’s fine for a science fiction novel asking us what it means to be a person, or questioning the morality of how we treat people we see as different from ourselves. It’s not useful for understanding how an LLM works or what it does.

In fact, I’d say it’s harmful, given the emerging studies on the cognitive decline that comes from relying on LLMs in place of one’s own skills, and on the psychosis being observed in people who really do believe that chat bots are a superior form of intelligence.

As for brains, it might be that the hardware is inextricably tied to what we observe as brains in nature.

We certainly don’t run climate models on computers, call them “Earth,” and try to convince anyone that we’re about to create parallel dimensions.

I don’t read Church’s paper on the lambda calculus and come away believing that we could simulate all life with it. Nor Turing’s machine.

I guess I’m just not easily awed by LLMs and neural networks. We know that they can approximate any continuous function to within some epsilon, given a large enough network. But if you restate the theorem formally it loses much of its power to convince anyone that this means we could simulate any function. Some useful ones, sure, and we know that we can optimize computation to perform particular tasks, but we also know what those limits are and, for most functions, I imagine we simply do not have enough atoms in the universe to approximate them.
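
For reference, a rough statement of the universal approximation theorem (one hidden layer, continuous target on a compact domain; the exact hypotheses vary by version) goes something like:

    for every continuous f : K -> R (K compact) and every ε > 0,
    there exist N, α_i, w_i, b_i such that
    sup_{x ∈ K} | f(x) − Σ_{i=1}^{N} α_i σ(w_i · x + b_i) | < ε

Note that it says nothing about how large N has to be or how you would find those weights, which is exactly where the not-enough-atoms-in-the-universe problem shows up.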

LLMs and NNs and all of these things are neat tools. But there’s no explanatory power gained by fooling ourselves into treating them like they are people, could be people, or behave like people. They are systems composed of data and algorithms built to perform a particular task. Understanding them this way makes it easier, in my experience, to understand the outputs they generate.