ordu 8 days ago

> LLMs are not by themselves sufficient as a path to general machine intelligence; in some sense they are a distraction because of how far you can take them despite the approach being fundamentally incorrect.

I don't believe it is a fundamentally incorrect approach. I believe the human mind does something like that all the time; the difference is that our minds have some additional processes that can, for example, filter out hallucinations.

Kids in a certain age range are afraid of their own imagination. Their imagination can place a monster in any dark place where nothing can be seen. An adult mind can do the same easily, but the difference is that kids have difficulty distinguishing imagination from perception, while adults generally manage.

I believe the ability of the human mind to tell the difference between imagination/hallucination on the one hand and perception and memory on the other is not a fundamental property stemming from the architecture of the brain, but a learned skill. Moreover, people can be tricked into acquiring false memories[1]. If an LLM fell for the tricks of Elizabeth Loftus, we'd say the LLM hallucinated.

What LLMs need is to learn some tricks for detecting hallucinations. They probably won't get a 100% reliable detector, but to reach the level of humans they don't need 100% reliability.
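
One family of tricks along these lines is self-consistency checking: sample the same question several times and treat disagreement between the samples as a hallucination signal. A minimal sketch, where generate() is a hypothetical stand-in for one sampled completion and the 0.6 threshold is an arbitrary assumption:

    from collections import Counter

    def generate(prompt: str) -> str:
        """Hypothetical: returns one independently sampled LLM answer."""
        raise NotImplementedError

    def looks_hallucinated(prompt: str, n: int = 5, threshold: float = 0.6) -> bool:
        # Sample n answers at nonzero temperature; confabulated "facts"
        # tend to vary across samples, while genuinely recalled facts stay stable.
        answers = [generate(prompt) for _ in range(n)]
        _, count = Counter(answers).most_common(1)[0]
        return count / n < threshold  # low agreement -> likely hallucination

This gives a noisy detector, not a reliable one, which matches the point above: humans don't have a 100% reliable one either.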

TazeTSchnitzel 8 days ago | parent | next [-]

I have recently lived through something called a psychotic break, which was an unimaginably horrible thing, but it did let me see from the inside what insanity does to your thinking.

And what's fascinating, coming out the other side of this, is how similar LLMs are to someone in psychosis. Someone in psychosis can have all the ability LLMs have to recognise patterns and to sound like they know what they're talking about, but their brain is not working well enough to have proper self-insight, to be able to check that their thoughts actually make sense. (And "making sense" turns out to be a sliding scale: it is not as if you just wake up one day suddenly fully rational again; there's a spectrum of irrational thinking, and you have to gradually re-process your older thoughts into more and more coherent shapes as your brain starts to work correctly again.)

I believe this isn't actually a novel insight either; many have worried about this for years! Psychosis might be an interesting topic to read about if you want another angle from which to understand the AI models. I won't claim that it's exactly the same thing, but I will say that most people probably have a very undeveloped idea of what mental illness actually is or how it works, and that leaves them badly prepared for interacting with a machine that bears a strong resemblance to a mentally ill person who's learned to pretend to be normal.

rauljara 7 days ago | parent | next [-]

Thank you for sharing, and sorry you had to go through that. I had a good friend go through a psychotic break and I spent a long time trying to understand what was going on in his brain. The only solid conclusion I could come to was that I could not relate to what he was going through, but that didn’t change that he was obviously suffering and needed whatever support I could offer. Thanks for giving me a little bit of insight into his brain. Hope you were/are able to find support out there.

johnisgood 7 days ago | parent | prev [-]

Take even just a panic attack: many people have no clue what it feels like, which is unfortunate, because they lack empathy for those who do experience them. My psychiatrists definitely need to experience one to understand.

mac-mc 8 days ago | parent | prev | next [-]

Do you have many memories of that time, around ages 3 to 5, and do you remember what your cognitive processes were like?

When a child is afraid of the monster in the dark, they are not literally visually hallucinating a beast there; they are worried that there could be a beast in the dark, and they can't be sure there isn't one, because there is no sensory information confirming the monster's absence. They are not being hyper-precise, because they are 3, so they say "there is a monster under my bed!" Children have an instinct to be afraid of the dark.

Similarly with imaginary friends and play: it's an instinct to practice through smaller-stakes simulations. When children are emotionally attached to their imaginary friends, it's much like being emotionally attached to a security blanket. They know that the "friend" is not perceptible.

It's much like the projected anxieties of adults or teenagers who worry that everyone thinks they are super lame and act as though everyone does, because, given no information either way, they choose the "safer path".

That is pretty different from the hallucinations of LLMs, IMO.

bayindirh 7 days ago | parent | prev | next [-]

From my perspective, the fundamental problem arises from the assumption that all of the brain's functions are self-contained, when in fact there are feedback loops in the body which support the functions of the brain.

The simplest one is fight/flight/freeze. The brain starts the process by being afraid and hormones get released, but the next step is triggered by nerve feedback coming from the body. If you are on beta-blockers and can't get panicked, the initial trigger fizzles out and you return to your pre-panic state.

An LLM doesn't model a complete body; it just models language. Language is only a very small part of what the brain handles, so assuming that modelling language, or even the whole brain, is going to answer all the questions we have is a flawed approach.

The latest research shows the body is a much more complicated and interconnected system than what we learnt in school 30 years ago.

mft_ 7 days ago | parent [-]

Sure, your points about the body aren't wrong, but (as you say) LLMs are only modelling a small subset of a brain's functions at the moment: applied knowledge, language/communication, and recently the interpretation of visual data. There's no need or opportunity for an LLM (as they currently exist) to do anything further. Moreover, just because additional inputs exist in the human body (the gut-brain axis, for example) doesn't mean they are especially (or at all) relevant for knowledge/language work.

TheOtherHobbes 7 days ago | parent [-]

The point is that knowledge/language work can't work reliably unless it's grounded in something outside of itself. Without that grounding you don't get an oracle; you get a superficially convincing but fundamentally unreliable idiot savant who lacks a stable sense of self, other, or the real world.

The fundamental foundation of science and engineering is reliability.

If you start saying reliability doesn't matter, you're not doing science and engineering any more.

mft_ 7 days ago | parent [-]

I'm really struggling to understand what you're trying to communicate here; I'm even wondering if you're an LLM set up to troll, due to the weird language and confusing non-sequiturs.

> The point is that knowledge/language can't work reliably unless it's grounded in something outside of itself.

Just, what? Knowledge is facts, somehow held within a system allowing recall and usage of those facts. Knowledge doesn't have a 'self', and I'm totally not understanding how pure knowledge as a concept or medium needs "grounding"?

Being charitable, it sounds more like you're trying to describe "wisdom" - which might be considered as a combination of knowledge, lived experience, and good judgement? Yes, this is valuable in applying knowledge more usefully, but has nothing to do with the other bodily systems which interact with the brain, which is where you started?

> The fundamental foundation of science and engineering is reliability.

> If you start saying reliability doesn't matter, you're not doing science and engineering any more.

No one mentioned reliability - not you in your original post, nor me in my reply. We were discussing whether the various (unconscious) systems which link to the brain in the human body (like the gut-brain axis) might influence its knowledge/language/interpretation abilities.

shkkmo 8 days ago | parent | prev | next [-]

> If LLM fell to tricks of Elizabet Loftus, we'd say LLM hallucinated.

She has strongly oversold how and when false memories can be created. She testified in defense of Ghislaine Maxwell at her 2021 trial that financial incentives can create false memories, and only when directly questioned did she admit that there were no studies to back this up.

She has spent a career over-generalizing data about implanting minor false memories, making money by discrediting victims' traumatic memories and defending abusers.

You conflate "hallucination" with "imagination", but the former has much more in common with lying than it does with imagining.

taneq 7 days ago | parent [-]

> She testified in defense of Ghislaine Maxwell at her 2021 trial that financial incentives can create false memories and only later admitted that there were no studies to back this up when directly questioned.

Did she have financial incentives? Was this a live demonstration? :P

Mikhail_Edoshin 7 days ago | parent | prev | next [-]

You probably know the Law of Archimedes. Many people do. But do you know it in the same way Archimedes did? No. You were told the law, then taught how to apply it. But Archimedes discovered it without any of that.
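
(For reference, the law in question, Archimedes' principle, states that a body immersed in a fluid experiences an upward buoyant force equal to the weight of the fluid it displaces:

    F_b = \rho_{fluid} \cdot g \cdot V_{displaced}

where \rho_{fluid} is the fluid's density, g is gravitational acceleration, and V_{displaced} is the volume of fluid displaced.)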

Can we repeat the feat of Archimedes? Yes, we can, but first we'd have to forget what we were told and taught.

The way we actually discover things is very different from amassing lots of hearsay. Indeed, we do have an internal part that behaves the same way an LLM does. But to get to real understanding we actually shut down that part, forget what we "know", and start from a clean slate. That part does not help us think; it helps us avoid thinking. The reason it exists is that it is useful: thinking is hard and slow, but recalling is easy and fast. But it is not thinking; it is the opposite.

ordu 7 days ago | parent [-]

> But to get to the real understanding we actually shut down that part, forget what we "know", start from a clean slate.

Close, but not exactly. Starting from a clean slate is not very difficult; the trick is to reject some chosen parts of existing knowledge, or more specifically, the difficulty is choosing what to reject. Starting from a truly clean slate, you'd end up spending millennia regaining the knowledge you'd just rejected.

So the overall process of generating knowledge is to look under the streetlight until finding something new becomes impossible or too hard, and then to start experimenting with rejecting some bits of your knowledge in order to rethink them. I was taught to read the works of the Great Masters of the past critically, trying to reproduce their path while looking for forks where you could try to go the other way. It is a little bit like starting from a clean slate, but not exactly.

otabdeveloper4 7 days ago | parent | prev [-]

> I believe, that human mind does something like that all the time

Absolutely not. Human brains have online one-shot training. LLM weights are fixed, and fine-tuning them is a huge multi-year enterprise.

Fundamentally, they're two completely different architectures.

ordu 7 days ago | parent [-]

I really don't like how you reject the idea completely. People have online one-shot training, but have you tried to learn how to play the piano? To learn it you need a lot of repetition. Really a lot. You need a lot of repetition to learn how to walk, or how to do arithmetic, or how to read English. This is very similar to LLMs, isn't it? So they are not completely different architectures, are they? It is more like human brains have something on top of an "LLM" that allows them to do tricks that LLMs couldn't do.

otabdeveloper4 7 days ago | parent [-]

> This is very similar to LLMs, isn't it?

No, it isn't at all. The effort humans spend on rote learning is to optimize mechanical precision in performance, not to internalize the concepts.

The concepts of playing the piano you can learn in a couple of days. All the rest of the effort is about getting synchronization and timing right.