agentultra an hour ago

This is the worst possible take. It dismisses an entire branch of science that has been studying neurology for decades. Biological brains exist, we study them, and no they are not like computers at all.

There have been charlatans repeating this idea of a “computational interpretation” of biological processes since at least the 60s, and it needs to be known that it was bunk then and continues to be bunk.

Update: There's no need for Chinese Room thought experiments. The outcome isn't what defines sentience, personhood, intelligence, etc. An algorithm is an algorithm. A computer is a computer. These things matter.

coldtea an hour ago | parent | next [-]

>This is the worst possible take. It dismisses an entire branch of science that has been studying neurology for decades. Biological brains exist, we study them, and no they are not like computers at all.

They're unlike computers only in superficial ways that don't matter.

They're still computational apparatus, and their architecture isn't all that dissimilar (if way more advanced).

Same as how 0s and 1s aren't vibrating air molecules, yet they can still encode sound just fine.

>Update: There's no need for Chinese Room thought experiments. The outcome isn't what defines sentience, personhood, intelligence, etc. An algorithm is an algorithm. A computer is a computer. These things matter.

Not begging the question matters even more.

This is just handwaving and begging the question. 'An algorithm is an algorithm' means nothing. Who said that what the brain does can't be described by an algorithm?

WarmWash 29 minutes ago | parent | prev | next [-]

>Biological brains exist, we study them, and no they are not like computers at all.

You are confusing the way computation is done (neuroscience) with whether or not computation is being done (transforming inputs into outputs).

The brain is either a magical antenna channeling supernatural signals from higher planes, or it's doing computation.

I'm not aware of any neuroscientists in the former camp.

agentultra a minute ago | parent [-]

> The brain is either a magical antenna channeling supernatural signals

There’s the classic thought-terminating cliche of the computational interpretation of consciousness.

If it isn’t computation, you must believe in magic!

Brains are way more fascinating and interesting than transistors, memory caches, and storage media.

tux1968 40 minutes ago | parent | prev | next [-]

> An algorithm is an algorithm. A computer is a computer. These things matter.

Sure. But we're allowed to notice abstractions that are similar between these things. Unless you believe that logic and "thinking" are somehow magic, and thus beyond the realm of computation, there's no reason to think they're restricted to humanity.

It is human ego and hubris that keep demanding we're special and could never be fully emulated in silicon. It's the exact same reasoning that put the earth at the center of the universe, and humans as the primary focus of God's will.

That said, nobody thinks LLMs are the intellectual equals of humans today. They're more powerful in some ways and tremendously weaker in others. But pointing those differences out is not a logical argument about their ultimate abilities.

Octoth0rpe a minute ago | parent [-]

> Unless you believe that logic and "thinking" are somehow magic, and thus beyond the realm of computation

Worth noting that a significant majority of the US population (though not necessarily of developers) does in fact believe that, or at least belongs to a religious group in which that belief is commonly promulgated.

cshores an hour ago | parent | prev [-]

Worth separating “the algorithm” from “the trained model.” Humans write the architecture + training loop (the recipe), but most of the actual capability ends up in the learned weights after training on a ton of data.
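As a rough sketch of that split (a toy PyTorch-style model, made up purely for illustration, not how any real system is sized): the recipe humans actually write is a handful of lines, while the capability lives in the couple hundred thousand learned numbers that training fills in.

    import torch
    import torch.nn as nn

    # The "recipe" humans write: a few lines of architecture.
    # Toy model, invented for this example.
    class TinyMLP(nn.Module):
        def __init__(self, d_in=784, d_hidden=256, d_out=10):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(d_in, d_hidden),
                nn.ReLU(),
                nn.Linear(d_hidden, d_out),
            )

        def forward(self, x):
            return self.net(x)

    model = TinyMLP()
    # The "capability": learned numbers nobody writes by hand.
    print(sum(p.numel() for p in model.parameters()))  # ~203k parameters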

Inference is mostly matrix math + a few standard ops, and the behavior isn’t hand-coded rule-by-rule. The “algorithm” part is more like instincts in animals: it sets up the learning dynamics and some biases, but it doesn’t get you very far without what’s learned from experience/data.
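To make "matrix math + a few standard ops" concrete, here's roughly all that inference is, sketched in NumPy (the weight arrays here are random stand-ins for what a trained checkpoint would supply):

    import numpy as np

    def relu(x):
        return np.maximum(0.0, x)

    def softmax(x):
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    def forward(x, W1, b1, W2, b2):
        # Two matmuls plus a couple of elementwise ops; everything
        # interesting is in the learned W1/b1/W2/b2, not in this code.
        h = relu(x @ W1 + b1)
        return softmax(h @ W2 + b2)

    # Random stand-ins for learned weights, just to show the shapes.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(1, 784))
    W1, b1 = rng.normal(size=(784, 256)), np.zeros(256)
    W2, b2 = rng.normal(size=(256, 10)), np.zeros(10)
    print(forward(x, W1, b1, W2, b2).shape)  # (1, 10)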

Also, most “knowledge” comes from pretraining; RL-style fine-tuning mostly nudges behavior (helpfulness/safety/preferences) rather than creating the base capabilities.