superconduct123 2 days ago

I always get a weird feeling when AI researchers and CS people start talking about comparisons between human brains and AI/computers

Why is there a presumption that we (as people who have only studied CS) know enough about biology/neuroscience/evolution to make these comparisons/parallels/analogies?

I enjoy the discussions but I always get the thought in the back of my head "...remember you're listening to 2 CS majors talk about neuroscience"

empiko 2 days ago | parent | next [-]

We should completely strip all this talk from AI as a field (and get rid of that name as well). It just causes endless confusion, especially for a general audience. In the end, the whole shtick with LLMs is that we train matrices to predict next tokens. You can explain this entire concept without invoking AGI, Roko's basilisk, the nature of human consciousness, and all the other mumbo jumbo that tries so hard to make this field into something it is not.
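The "train matrices to predict next tokens" framing can be made concrete in a few lines. Here is a minimal sketch, a toy bigram model rather than anything resembling a production LLM: a single weight matrix is trained by gradient descent so that each row becomes a next-token distribution for the corresponding current token. The vocabulary, training pairs, and learning rate are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["a", "b", "c"]
V = len(vocab)
# Toy training data: (current token id, next token id) pairs, in which
# "a" is always followed by "b", and "b" by "c".
pairs = [(0, 1), (1, 2), (0, 1), (1, 2), (2, 0)]

W = rng.normal(scale=0.1, size=(V, V))  # the "matrix" being trained

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Plain gradient descent on the cross-entropy loss of each pair.
for _ in range(500):
    for cur, nxt in pairs:
        p = softmax(W[cur])    # predicted next-token distribution
        grad = p.copy()
        grad[nxt] -= 1.0       # d(loss)/d(logits) for softmax + cross-entropy
        W[cur] -= 0.5 * grad

# After training, the row for "a" puts most of its mass on "b".
print(vocab[int(np.argmax(softmax(W[0])))])
```

That is the entire conceptual core: a matrix, a loss, and gradient updates. Everything else in a real LLM (attention, deep stacks of layers, tokenizers) is machinery for making this prediction better at scale.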

giardini 10 hours ago | parent | next [-]

I eagerly await your publications. I will buy your book.

scotty79 a day ago | parent | prev [-]

But people love misguided narratives and analogies. How else should we kill time when we are too dumb to accelerate inevitable progress and just need to wait for it?

__loam a day ago | parent [-]

How else do we invite ludicrous amounts of malinvestment from wall street than to evoke fields of biology we know literally nothing about?

ainch a day ago | parent | prev | next [-]

There is a lot of overlap between AI and Neuroscience, especially among older researchers. For example Karpathy's PhD supervisor, Fei-Fei Li, researched vision in cat brains before working on computer vision, Demis Hassabis did his PhD in Computational Neuroscience, Geoff Hinton studied Psychology etc... There's even the Reinforcement Learning and Decision Making conference (RLDM - very cool!), which pairs Reinforcement Learning with neuro research and brings together people from both disciplines.

I suspect the average AI researcher knows much more about the brain than typical CS students, even if they may not have sufficient background to conduct research.

superconduct123 a day ago | parent [-]

Fair enough, I guess it's a bit different nowadays since the background is usually a PhD in compsci.

arawde 2 days ago | parent | prev | next [-]

From personal experience making the same comparisons during undergrad, I think it just comes down to the availability of conceptual models. If the brain does X, there's a good chance that a computer does something that looks like X, or that X could be recreated through steps Y & Z, etc.

Once I started to realize just how much of the brain is inscrutable, because it is a machine operating on chemicals instead of strict electrical processing, I became a lot more reluctant to draw those comparisons

genewitch 2 days ago | parent [-]

Lucky for all of us, we're alive during a "quantum" thing! Which has been an idea since at least the mid-1990s, as I first saw it in an issue of 2600 around that time...

chasd00 2 days ago | parent | prev | next [-]

> Why is there a presumption that we (as people who have only studied CS) know enough about biology/neuroscience/evolution to make these comparisons/parallels/analogies?

Well, it's straightforward. First, let's assume a spherical, perfectly frictionless brain...

tim333 a day ago | parent | prev | next [-]

AI researchers, CS people, and the rest of us are all human-brain users, and so have some familiarity with brains even if we haven't studied neuroscience.

You can make some comparisons between how they perform without really understanding how LLMs or brains work. To me, LLMs seem similar to the part of the human mind where you say stuff without thinking about it. But you never really get an LLM saying "I was thinking about that and figured this bit was wrong," because they don't really have that capability.

giardini 10 hours ago | parent | prev | next [-]

There are plenty of mathematicians, psychologists, philosophers, physicists et al. listening in. Perhaps one day, one or more of them will drop the piece (probably math) that achieves critical mass (AGI).

There are two periods in history that "feel" like this time to me: the years prior to Einstein's theory of relativity, and the uncovering of quantum mechanics.

In both cases, bits and pieces of math and science were floating in the air but no one could connect them. It took years of arduous effort by teams and individuals to pull it all together.

Today there are a lot more participants. The main difference seems to be that a lot of them are capitalists! 8-))

jjulius 2 days ago | parent | prev | next [-]

>Why is there a presumption that we (as people who have only studied CS) know enough about biology/neuroscience/evolution to make these comparisons?

Hubris.

rootusrootus 2 days ago | parent | next [-]

Exactly. Someone way back when decided to call them neural networks, and now a lot of people think that they are a good representation of the real thing. If we make them fast enough, powerful enough, we'll end up with a brain!

Or not.

karmakaze a day ago | parent | next [-]

There was an actual simulation of a brain that could respond appropriately to stimuli. It ran many orders of magnitude slower than real time, but it demonstrated the correspondence. Probably not using the DNNs we use now, but still a machine.

voidhorse 2 days ago | parent | prev [-]

I wish McCulloch and Pitts could see how much intellectual damage their wildly bold analogy would do. (Though seeing as they apparently had no qualms about issuing such an unjustified analogy given the absolute paucity of scientific information available at the time, I guess they'd be happy about it overall.)

__loam a day ago | parent [-]

Computational neurons were developed with the express intent of studying models of the brain based on the contemporary understanding of neuroscience. That understanding has evolved massively over the last seven decades, and meanwhile the concept of the perceptron has proven to be a useful mathematical construct in machine learning and statistical computing. I blame the modern business culture of software development more than I blame dead scientists for the misunderstanding being peddled to the public.
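For what it's worth, the construct in question is tiny. A minimal sketch of the classic perceptron (Rosenblatt's learning rule applied to a McCulloch-Pitts-style threshold unit), here learning logical AND; the dataset and epoch count are arbitrary choices for illustration:

```python
import numpy as np

# Truth table for logical AND: inputs and target outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w = np.zeros(2)  # weights, one per input
b = 0.0          # bias

for _ in range(10):  # a few passes over the data suffice here
    for xi, target in zip(X, y):
        out = 1 if xi @ w + b > 0 else 0  # hard-threshold activation
        w += (target - out) * xi          # perceptron update rule
        b += (target - out)

preds = [1 if xi @ w + b > 0 else 0 for xi in X]
print(preds)
```

A weighted sum, a threshold, and an error-driven weight update: useful mathematics, but a long way from a claim about what actual neurons do.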

voidhorse 20 hours ago | parent [-]

I also blame the modern business culture more, but we shouldn't act like McCulloch and Pitts were innocent. They could well have introduced neural nets without making the wild claims they did about actual neural equivalence. They are largely responsible for much of the brain = computer naivety, and, in my view, they put forward this claim with shockingly little justification. They reasoned analogically without actually understanding the things they were trying to analogize. They basically took something that had the status of hypothesis at best and used it in the same manner one might if one had understanding.

To be clear, I'm not at all criticizing their technical contribution. Neural nets obviously are an important technical approach to computation. However, we should criticize the attendant philosophical, neurological, and biological claims they attached to their study, which lacked sufficient justification.

ctoth 2 days ago | parent | prev [-]

The hubris here isn't CS people making comparisons, it's assuming biological substrate matters. Your brain is doing computation with neurotransmitters instead of transistors. So what? The "chemicals not electricity" distinction is pure carbon chauvinism, like insisting hydraulic computers can't be compared to electronic ones because water isn't electricity. Evolution didn't discover some mystical process that imbues meat with special properties; it just hill-climbed to a solution using whatever materials were available. Brains work despite being kludges of evolutionary baggage, not because biology unlocked some deeper truth about intelligence.

Meanwhile, these systems translate languages, write code, play Go at superhuman levels, and pass medical licensing exams... all tasks you'd have sworn required "real understanding" a decade ago. At some point, look at the goddamn scoreboard. If you think there's something brains can do that these architectures fundamentally can't, name it specifically instead of gesturing vaguely at "inscrutability." The list of "things only biological brains can do" keeps shrinking, and your objection keeps sounding like "but my substrate is special!!1111"

j-krieger a day ago | parent | next [-]

> Your brain is doing computation with neurotransmitters instead of transistors.

This is an incredible simplification of the process and also just a small part of it. There is increasing evidence that quantum effects might play a part in the inner workings of the brain.

> Brains work despite being kludges of evolutionary baggage, not because biology unlocked some deeper truth about intelligence.

Now that is hubris.

GoatInGrey 2 days ago | parent | prev | next [-]

This seems naively dismissive of arguments around substrates considering that playing "Go at superhuman levels" took 1MW of energy versus the 1-2 (or if you want to assume 100% of the brain was applied to the game, 20) watts consumed by the human brain.

__loam a day ago | parent [-]

How many examples did each system need to get good at the task too? It's currently a lot less for humans and we don't know why.

JumpCrisscross 2 days ago | parent | prev | next [-]

> Your brain is doing computation with neurotransmitters instead of transistors

If it is, sure. But this isn't a given. We don't actually understand how the brain computes, as evidenced by our inability to simulate it.

> Evolution didn't discover some mystical process that imbues meat with special properties

Sure. But the complexity remains beyond our comprehension. Against the (nearly) binary switching of a transistor, we have a multidimensional electrochemical system in the brain that isn't trivially reduced to code resembling anything we can currently execute on a transistor substrate.

> These systems translate languages, write code, play Go at superhuman levels, and pass medical licensing exams... all tasks you'd have sworn required "real understanding" a decade ago

Straw man. Who said this? If anything, the symbolic linguists have been overpromising on this front since the 1980s.

ben_w 12 hours ago | parent | next [-]

> Straw man. Who said this? If anything, the symbolic linguists have been overpromising on this front since the 1980s.

I'm sure I've seen people say this about language translation and playing go. Ditto chess, way back before Kasparov lost. I don't think I've seen anyone so specific as to say that about medical licensing exams, nor as vague as "write code", but on the latter point I do even now see people saying that software engineering is safe forever with various arguments given…

JumpCrisscross 10 hours ago | parent [-]

Fair enough. I’m not going to argue nobody said anything. What I’ll contest is that anyone of consequence said it with consequence. These beliefs didn’t slow down the field. They didn’t stop it from raising capital or attracting engineers.

ctoth 2 days ago | parent | prev [-]

Jonas & Kording showed that neuroscience methods couldn't reverse-engineer a simple 6502 processor [0]. If the tools can't crack a system we built and fully documented, our inability to simulate brains just means we're ignorant, not that substrate is magic. It also doesn't necessarily say great things for neuroscience!

And "who said this?"... come on. Searle, Dreyfus, thirty years of "syntax isn't semantics," all the hand-wringing about how machines can't really understand because they lack intentionality. Now systems pass those benchmarks and suddenly it's "well nobody serious ever thought that mattered." This is the third? fourth? tenth? round of goalpost-moving while pretending the previous positions never existed.

Pointing at "multidimensional electrochemical complexity" is just phlogiston with better vocabulary. Name something specific transformers can't do?

[0] https://journals.plos.org/ploscompbiol/article?id=10.1371/jo...

JumpCrisscross 2 days ago | parent | next [-]

> If the tools can't crack a system we built and fully documented, our inability to simulate brains just means we're ignorant, not that substrate is magic

Nobody said the substrate is magic. Just that it isn't understood. Plenty of CS folks have also been trying to simulate a brain. We haven't figured it out. The same logic that tells you the neuroscientific model is broken at some level should inform that the brains-as-computers model is similarly deficient.

> Pointing at "multidimensional electrochemical complexity" is just phlogiston with better vocabulary

Sorry, have you figured out how to simulate a brain?

Multidimensional because you have more than one signalling chemical. Electrochemical because you can't just watch what the electrons are doing.

> Name something specific transformers can't do?

That what can't do? A neuron? A neurotransmitter-receptor system? We literally can't simulate these systems beyond toy models. We don't even know what the essential parts are--can you safely lump together N neurotransmitter molecules? What's N? We're still discovering new ion channels?!

ben_w 12 hours ago | parent | prev | next [-]

> just phlogiston with better vocabulary

So, a decent approximation that only turned out to be wrong when we looked closely and found the mass flow was in the opposite direction, but otherwise the model basically worked?

That would be fantastic!

sambapa 2 days ago | parent | prev | next [-]

So everyone in neuroscience is ignorant but not you?

JumpCrisscross 2 days ago | parent [-]

There is a lot of hocus pocus in neuroscience. Next to psychology, anthropology and macroeconomics.

That doesn’t make the field useless nor OP’s point correct.

voidhorse 2 days ago | parent | prev [-]

I'm curious what you think understanding means.

I personally do not think operational proficiency and understanding are equivalent.

I can do many things in life pretty well without understanding them. The phenomenon of understanding seems distinct from the phenomenon of doing something/acting proficiently.

jjulius 2 days ago | parent | prev | next [-]

Case in point.

__loam a day ago | parent | prev [-]

There is no evidence that neurons have remotely the same computational mechanism as a transistor.

Memorizing billions of answers from the training set also isn't that impressive.

aughtdev 2 days ago | parent | prev | next [-]

Yeah, the last 3 years of "we now know how to build AGI" failing to deliver shows that there's something being missed about the nature of intelligence. The "we are all stochastic parrots" people have been awfully quiet recently.

rhetocj23 2 days ago | parent | prev | next [-]

I've also found this jarring, and it speaks to the hubris of folks who have emerged in the past few decades who don't seem to have much relation to the humanities and liberal arts.

a day ago | parent | prev [-]
[deleted]