ordu 8 days ago
> LLMs are not by themselves sufficient as a path to general machine intelligence; in some sense they are a distraction because of how far you can take them despite the approach being fundamentally incorrect.

I don't believe it is a fundamentally incorrect approach. I believe the human mind does something like that all the time; the difference is that our minds have some additional processes that can, for example, filter hallucinations.

Kids at a certain age are afraid of their own imagination. Their imagination can place a monster into any dark place where nothing can be seen. An adult mind can do the same easily, but the difference is that kids have difficulty distinguishing imagination from perception, while adults generally manage. I believe the human mind's ability to tell imagination and hallucination apart from perception and memory is not a fundamental property of the brain's architecture but a learned skill. Moreover, people can be tricked into acquiring false memories[1]. If an LLM fell for the tricks of Elizabeth Loftus, we'd say the LLM hallucinated.

What LLMs need is to learn some tricks for detecting hallucinations. They probably won't get a 100% reliable detector, but to reach the level of humans they don't need 100% reliability.
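One concrete form such a trick could take is a self-consistency check: sample the model several times and flag answers it cannot reproduce (roughly the idea behind SelfCheckGPT-style methods). A minimal sketch; `generate` is a hypothetical stand-in for whatever model API is available, and the threshold is arbitrary:

    from collections import Counter

    def generate(prompt: str, temperature: float = 0.8) -> str:
        # Hypothetical stand-in for a real model call -- not a real API.
        raise NotImplementedError

    def likely_hallucination(prompt: str, n_samples: int = 5,
                             agreement_threshold: float = 0.6) -> bool:
        # Sample the model several times at nonzero temperature.
        answers = [generate(prompt) for _ in range(n_samples)]
        # If no single answer dominates, the model is likely confabulating:
        # genuinely known facts tend to be reproduced consistently.
        top_count = Counter(answers).most_common(1)[0][1]
        return top_count / n_samples < agreement_threshold

This is nowhere near a 100% reliable detector, which fits the point above: it doesn't have to be.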
TazeTSchnitzel 8 days ago
I have recently lived through something called a psychotic break, which was an unimaginably horrible thing, but it did let me see from the inside what insanity does to your thinking. And what's fascinating, coming out the other side of this, is how similar LLMs are to someone in psychosis. Someone in psychosis can have all the ability LLMs have to recognise patterns and sound like they know what they're talking about, but their brain is not working well enough to have proper self-insight, to check that their thoughts actually make sense. (And "making sense" turns out to be a sliding scale: you don't just wake up one day suddenly fully rational again. There's a spectrum of irrational thinking, and you have to gradually re-process your older thoughts into more and more coherent shapes as your brain starts working correctly again.)

I believe this isn't actually a novel insight either; many have worried about this for years! Psychosis might be an interesting topic to read about if you want another angle for understanding AI models. I won't claim it's exactly the same thing, but I will say that most people probably have a very undeveloped idea of what mental illness actually is or how it works, and that leaves them badly prepared for interacting with a machine that strongly resembles a mentally ill person who has learned to pretend to be normal.
mac-mc 8 days ago
Do you have many memories of that time, around ages 3 to 5, and do you remember what your cognitive processes were? When a child is afraid of the monster in the dark, they are not literally visually hallucinating a beast; they are worried that there could be one, and they can't rule it out because they lack the sensory information to confirm there is no monster. They are not being hyper-precise, because they are 3, so they say "there is a monster under my bed!" Children have instincts to be afraid of the dark.

Similarly with imaginary friends and play: it's an instinct to practice through lower-stakes simulations. When children are emotionally attached to their imaginary friends, it's much like being attached to a security blanket; they know the "friend" is not perceptible. It's much like the projected anxieties of adults or teenagers who worry that everyone thinks they are super lame and act accordingly: on the balance of no information, they choose the "safer path".

That is pretty different from the hallucinations of LLMs, IMO.
bayindirh 7 days ago
From my perspective, the fundamental problem arises from the assumption that all of the brain's functions are self-contained, when in fact there are feedback loops in the body that support the brain's functions. The simplest is fight/flight/freeze: the brain starts the process by being afraid, hormones get released, but the next step is triggered by nerve feedback coming from the body. If you are on beta-blockers and can't panic, the initial trigger fizzles and you return to your pre-panic state.

An LLM doesn't model a complete body; it just models language, which is only a small part of what the brain handles. So assuming that modelling language, or even the whole brain, is going to answer all the questions we have is a flawed approach. Recent research shows the body is a much more complicated and interconnected system than we learned in school 30 years ago.
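The loop described above can be made concrete with a deliberately crude toy model (the constants are invented, not physiology): arousal feeds back on itself through the body, and a beta-blocker corresponds to cutting the feedback gain.

    def panic_response(trigger: float, feedback_gain: float,
                       decay: float = 0.3, steps: int = 20) -> float:
        # Toy positive-feedback loop: brain -> hormones -> body -> brain.
        arousal = trigger
        for _ in range(steps):
            body_signal = feedback_gain * arousal  # nerve feedback from the body
            arousal = arousal * (1 - decay) + body_signal
        return arousal

    print(panic_response(1.0, feedback_gain=0.5))  # loop intact: escalates (~38)
    print(panic_response(1.0, feedback_gain=0.0))  # "beta-blocked": fizzles (~0.001)

The point of the toy is only that the escalation lives in the loop, not in the brain alone; remove the bodily feedback and the same trigger dies out.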
shkkmo 8 days ago
> If an LLM fell for the tricks of Elizabeth Loftus, we'd say the LLM hallucinated.

She has strongly oversold how and when false memories can be created. Testifying in defense of Ghislaine Maxwell at her 2021 trial, she claimed that financial incentives can create false memories, and only admitted under direct questioning that there were no studies to back this up. She has spent a career over-generalizing data about implanting minor false memories, making money by discrediting victims' traumatic memories and defending abusers.

You conflate "hallucination" with "imagination", but the former has much more in common with lying than with imagining.
Mikhail_Edoshin 7 days ago
You probably know the Law of Archimedes. Many people do. But do you know it in the same way Archimedes did? No. You were told the law, then taught how to apply it. Archimedes discovered it without any of that. Can we repeat the feat of Archimedes? Yes, but first we'd have to forget what we were told and taught. The way we actually discover things is very different from amassing lots of hearsay.

Indeed, we do have an internal part that behaves the same way an LLM does. But to reach real understanding we actually shut that part down, forget what we "know", and start from a clean slate. That part does not help us think; it helps us avoid thinking. It exists because it is useful: thinking is hard and slow, while recalling is easy and fast. But it is not thinking; it is the opposite.
otabdeveloper4 7 days ago
> I believe the human mind does something like that all the time

Absolutely not. Human brains do online, one-shot learning. LLM weights are fixed, and fine-tuning them is a huge, multi-year enterprise. Fundamentally, these are two completely different architectures.
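The architectural difference is easy to show in toy form. A minimal sketch, assuming nothing about real LLM internals: an online learner updates its weights from each single example it sees, while inference in a trained network is read-only.

    import numpy as np

    w = np.zeros(3)  # weights of a toy linear model

    def predict(x: np.ndarray) -> float:
        return float(w @ x > 0)

    def online_one_shot_update(x: np.ndarray, y: float, lr: float = 0.1) -> None:
        # Perceptron-style update: the model changes immediately after
        # a single example, loosely analogous to brains learning online.
        global w
        w += lr * (y - predict(x)) * x

    x, y = np.array([1.0, -0.5, 2.0]), 1.0
    online_one_shot_update(x, y)  # weights just changed; one example was enough

    # An LLM forward pass, by contrast, only reads its weights; changing
    # them means a separate training or fine-tuning run over the model.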