| ▲ | adamzwasserman 3 days ago |
| The article misses three critical points:
1. Conflates consciousness with "thinking" - LLMs may process information effectively without being conscious, but the article treats these as the same phenomenon.
2. Ignores the cerebellum cases - We have documented cases of humans leading normal lives with little to no brain beyond a cerebellum, which contradicts simplistic "brain = deep learning" equivalences.
3. Most damning: When you apply these exact same techniques to anything OTHER than language, the results are mediocre. Video generation still can't figure out basic physics (glass bouncing instead of shattering, ropes defying physics). Computer vision has been worked on since the 1960s - far longer than LLMs - yet it's nowhere near achieving what looks like "understanding."
The timeline is the smoking gun: vision had decades of head start, yet LLMs leapfrogged it in just a few years. That strongly suggests the "magic" is in language itself (which has been proven to be fractal and already heavily compressed/structured by human cognition) - NOT in the neural architecture.
We're not teaching machines to think. We're teaching them to navigate a pre-existing map that was already built. |
|
| ▲ | kenjackson 3 days ago | parent | next [-] |
| "vision had decades of head start, yet LLMs leapfrogged it in just a few years." From an evolutionary perspective though vision had millions of years head start over written language. Additionally, almost all animals have quite good vision mechanisms, but very few do any written communication. Behaviors that map to intelligence don't emerge concurrently. It may well be there are different forms of signals/sensors/mechanical skills that contribute to emergence of different intelligences. It really feels more and more like we should recast AGI as Artificial Human Intelligence Likeness (AHIL). |
| |
| ▲ | adamzwasserman 3 days ago | parent | next [-] | | From a terminology point of view, I absolutely agree. Human-likeness is what most people mean when they talk about AGI. Calling it what it is would clarify a lot of the discussions around it. However, I am clear that I do not believe this will ever happen, and I see no evidence to convince me that there is even a possibility that it will. I think that Wittgenstein had it right when he said: "If a lion could speak, we could not understand him." | | |
| ▲ | andoando 3 days ago | parent [-] | | >I think that Wittgenstein had it right when he said: "If a lion could speak, we could not understand him." Why would we not? We live in the same physical world and encounter the same problems. | | |
| ▲ | adamzwasserman 3 days ago | parent | next [-] | | You're actually proving Wittgenstein's point. We share the same physical world, but we don't encounter the same problems. A lion's concerns - territory, hunting, pride hierarchy - are fundamentally different from ours: mortgages, meaning, relationships. And here's the kicker: you don't even fully understand me, and I'm human. What makes you think you'd understand a lion? | | |
| ▲ | beeflet 3 days ago | parent | next [-] | | Humans also have territory, hunting and hierarchy. Everything that a lion does, humans also do but more complicated. So I think we would be able to understand the new creature. But the problem is really that the lion that speaks is not the same creature as the lion we know. Everything the lion we know wants to say can already be said through its body language or current faculties. The goldfish grows to the size of its container. | | |
| ▲ | adamzwasserman 3 days ago | parent [-] | | You've completely missed Wittgenstein's point. It's not about whether lions and humans share some behaviors - it's about whether they share the form of life that grounds linguistic meaning. | | |
| ▲ | zeroonetwothree 3 days ago | parent [-] | | I think humans would be intelligent enough to understand the lion's linguistic meaning (after some training). Probably not the other way around. But it's a speculative argument, there's no real evidence one way or another. |
|
| |
| ▲ | andoando 3 days ago | parent | prev [-] | | That's only a minor subset of our thoughts. If you were going hiking, what kind of thoughts would you have? "There are trees there", "It's raining, I should get cover", "I can hide in the bushes", "I'm not sure if I can climb over this or not", "There is x on the left and y on the right", "the wind went away", etc. The origins of human language were no doubt in communicating such simple thoughts, not your deep inner psyche and the complexities of the 21st century. There's actually quite a bit of evidence that all language, even complex words, is rooted in spatial relationships. | | |
| ▲ | adamzwasserman 2 days ago | parent [-] | | You're describing perception, not the lived experience that gives those perceptions meaning. Yes, a lion sees trees and rain. But a lion doesn't have 'hiking', it has territory patrol. It doesn't 'hide in bushes', it stalks prey. These aren't just different words for the same thing; they're fundamentally different frameworks for interpreting raw sensory data. That's Wittgenstein's point about form of life. | | |
| ▲ | andoando 2 days ago | parent [-] | | Why do you assume they're fundamentally different frameworks? Just because Wittgenstein said it? |
|
|
| |
| ▲ | goatlover 3 days ago | parent | prev [-] | | We haven't been able to decode what whales and dolphins are communicating. Are they using language? A problem SETI faces is whether we would be able to decode an alien signal. They may be too different in their biology, culture and technology. The book & movie Contact propose that math is a universal language. This assumes they're motivated to use the same basic mathematical structures we do. Maybe they don't care about prime numbers. Solaris by Stanislaw Lem explores an alien ocean so different that humans utterly fail to communicate with it, leading to the ocean creating humans from memories in brain scans broadcast over the ocean, but it's never understood why the ocean did this. The recreated humans don't know either. | | |
| ▲ | adamzwasserman 3 days ago | parent [-] | | The whole "math is a universal language" idea is particularly laughable to me considering that math is a formal system and the universe is observably irregular. As I am wont to say: regularity is only ever achieved at the price of generality. | | |
| ▲ | zeroonetwothree 3 days ago | parent | next [-] | | Many mathematical structures are 'irregular'. That's not a very strong argument against math as a universal descriptor. | | | |
| ▲ | andoando 2 days ago | parent | prev [-] | | Think about what math is trying to formalize | | |
| ▲ | adamzwasserman 2 days ago | parent [-] | | Math formalizes regularities by abstracting away irregularities - that's precisely my point. Any formal system achieves its regularity by limiting its scope. Math can describe aspects of reality with precision, but it cannot capture reality's full complexity. A 'universal language' that can only express what fits into formal systems isn't universal at all: it's a specialized tool that works within constrained domains. |
|
|
|
|
| |
| ▲ | Retric 3 days ago | parent | prev [-] | | These are all really arbitrary metrics across such wildly different fields. IMO LLMs are where computer vision was 20+ years ago in terms of real-world accuracy. Other people feel LLMs offer far more value to the economy, etc. | | |
| ▲ | adamzwasserman 3 days ago | parent [-] | | I understand the temptation to compare LLMs and computer vision, but I think it’s misleading to equate generative AI with feature-identification or descriptive AI systems like those in early computer vision. LLMs, which focus on generating human-like text and reasoning across diverse contexts, operate in a fundamentally different domain than descriptive AI, which primarily extracts patterns or features from data, like early vision systems did for images. Comparing their 'real-world accuracy' oversimplifies their distinct goals and applications. While LLMs drive economic value through versatility in language tasks, their maturity shouldn’t be measured against the same metrics as descriptive systems from decades ago. | | |
| ▲ | Retric 3 days ago | parent [-] | | I don’t think it’s an oversimplification, as accuracy is what constrains LLMs across so many domains. If you’re a wealthy person, asking ChatGPT to write a prenup or other contract you actually use would be an act of stupidity unless you vetted it with an actual lawyer. My most desired use case is closer, but LLMs are still more than an order of magnitude below what I am willing to tolerate. IMO that’s what maturity means in AI systems. Self-driving cars aren’t limited by the underlying mechanical complexity; it’s all about the long quest for a system to make reasonably correct decisions hundreds of times a second for years across widely varying regions and weather conditions. Individual cruise missiles, on the other hand, only needed to operate across a single short and pre-mapped flight in specific conditions, therefore they used visual navigation decades earlier. | | |
| ▲ | adamzwasserman 2 days ago | parent [-] | | You're conflating two different questions. I'm not arguing LLMs are mature or reliable enough for high-stakes tasks. My argument is about why they produce output that creates the illusion of understanding in the language domain, while the same techniques applied to other domains (video generation, molecular modeling, etc.) don't produce anything resembling 'understanding' despite comparable or greater effort. The accuracy problems you're describing actually support my point: LLMs navigate linguistic structures effectively enough to fool people into thinking they understand, but they can't verify their outputs against reality. That's exactly what you'd expect from a system that only has access to the map (language) and not the territory (reality). | | |
| ▲ | Retric 2 days ago | parent [-] | | I’m not saying these tasks are high stakes so much as they inherently require high levels of accuracy. Programmers can improve code, so the accuracy threshold for utility is way lower when someone is testing before deployment. That difference exists based on how you’re trying to use it, independent of how critical the code actually is. The degree to which LLMs successfully fake understanding depends heavily on how much accuracy you’re looking for. I’ve judged their output as gibberish on a task someone else felt it did quite well. If anything they make it clear how many people just operate on vague associations without any actual understanding of what’s going on. In terms of map vs territory, LLMs get trained on a host of conflicting information but they don’t synthesize that into uncertainty. Ask one what the average distance between the Earth and the Moon is and you’ll get a number, because the form of the response in training data is always a number. Look at several websites and you’ll see a bunch of different numbers literally thousands of miles apart, which seems odd given that we know the actual distance at any moment to well within an inch. Anyway, the inherent method of training is simply incapable of that kind of analysis.
The average lunar distance is approximately 385,000 km https://en.wikipedia.org/wiki/Lunar_distance
The average distance between the Earth and the Moon is 384 400 km (238 855 miles). https://www.rmg.co.uk/stories/space-astronomy/how-far-away-moon
The Moon is approximately 384,000 km (238,600 miles) away from Earth, on average. https://www.britannica.com/science/How-Far-Is-the-Moon-From-Earth
The Moon is an average of 238,855 miles (384,400 km) away. https://spaceplace.nasa.gov/moon-distance/en/
The average distance to the Moon is 382,500 km
https://nasaeclips.arc.nasa.gov/shared_assets/resources/distance-to-the-moon/438170main_GLDistancetotheMoon.pdf
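For concreteness, here is a quick back-of-envelope check of the figures quoted above (a minimal Python sketch; the numbers come from the linked sources, nothing is pulled from a model):

```python
# Average Earth-Moon distances as quoted by the sources linked above, in km.
quoted_km = {
    "Wikipedia": 385_000,
    "Royal Museums Greenwich": 384_400,
    "Britannica": 384_000,
    "NASA Space Place": 384_400,
    "NASA eClips PDF": 382_500,
}

spread_km = max(quoted_km.values()) - min(quoted_km.values())
spread_miles = spread_km * 0.621371  # km to statute miles

print(f"Spread across sources: {spread_km} km (~{spread_miles:,.0f} miles)")
# Spread across sources: 2500 km (~1,553 miles)
```

Each source states a single bare figure, so a model trained on all of them learns to emit a single bare figure too, rather than a value with an uncertainty attached.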
|
|
|
|
|
|
|
| ▲ | eloisant 3 days ago | parent | prev | next [-] |
| This is why I'm very skeptical about the "Nobel prize level" claims. To win a Nobel prize you would have to produce something completely new. LLMs will probably be able to reach a Ph.D. level of understanding of existing research, but bringing something new is a different matter. |
| |
| ▲ | adamzwasserman 3 days ago | parent | next [-] | | LLMs do not understand anything. They have a very complex multidimensional "probability table" (more correctly a compressed geometric representation of token relationships) that they use to string together tokens (which have no semantic meaning), which then get converted to words that have semantic meaning to US, but not to the machine. | | |
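To make the "compressed geometric representation" concrete, here is a toy sketch of what next-token generation reduces to mechanically: a dot product between vectors followed by a softmax over the vocabulary. The vocabulary, embeddings, and context vector below are made up for illustration; real models use tens of thousands of tokens and thousands of dimensions.

```python
import numpy as np

# Toy vocabulary with hypothetical "learned" embeddings (2-D for illustration).
vocab = ["coffee", "tastes", "bitter", "purple"]
embeddings = np.array([
    [0.9, 0.1],   # "coffee"
    [0.8, 0.3],   # "tastes"
    [0.7, 0.2],   # "bitter"
    [-0.5, 0.9],  # "purple"
])

# A hidden state summarizing some context, e.g. "coffee tastes ..." (made up).
context = np.array([0.85, 0.15])

# Geometry becomes the "probability table": similarity scores -> distribution.
logits = embeddings @ context
probs = np.exp(logits) / np.exp(logits).sum()

for token, p in sorted(zip(vocab, probs), key=lambda pair: -pair[1]):
    print(f"{token:>8}: {p:.2f}")

# Tokens whose vectors point the same way as the context get most of the mass.
# Nothing in this arithmetic attaches meaning to the strings themselves.
```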
| ▲ | DoctorOetker 3 days ago | parent | next [-] | | Consider your human brain and its full physical state: all the protons and neutrons, some housed together in the same nucleus, some separate, together with all the electrons. Physics assigns probabilities to future states. Suppose you were in the middle of a conversation and about to express a next syllable (or token). That choice will depend on other choices ("what should I add next") and further choices ("what is the best choice of words to express the thing I chose to express next"), etc. The probabilities are in principle calculable given a sufficiently detailed state. You are correct that LLMs correspond to a probability distribution (granted, you immediately corrected to say that this table is implicit and parametrized by geometric token relationships). But so does every expressor of language, humans included. The presence or absence of understanding can't be proven by mere association with a "probability table", especially if such a probability table is exactly what is expected from the perspective of physics, and if the models have continuously gained better and better performance by being trained directly on human expressions! | |
| ▲ | tomfly 3 days ago | parent | prev | next [-] | | Exactly. It’s been stated for a long time, before LLMs. For instance this paper https://home.csulb.edu/~cwallis/382/readings/482/searle.mind...
describes a translator who doesn’t know the language (Searle's Chinese Room argument). | |
| ▲ | KoolKat23 3 days ago | parent | prev [-] | | In abstract we do the exact same thing | | |
| ▲ | adamzwasserman 3 days ago | parent | next [-] | | Perhaps in practice as well. It is well-established that our interaction with language far exceeds what we are conscious of. | | | |
| ▲ | tomfly 3 days ago | parent | prev [-] | | It’s hard to believe this when the llm “knows” so much more then us yet still can not be creative outside its training distribution | | |
| ▲ | KoolKat23 3 days ago | parent | next [-] | | When are we as humans creative outside our training data? It's very rare we actually discover something truly novel. This is often random: us stumbling onto it, brute force, or purely being at the right place at the right time. On the other hand, until it's proven it'd likely be considered a hallucination. You need to test something before you can dismiss it. (They did burn people as witches for discoveries back in the day, deeming them witchcraft.) We also reduce randomness and pre-train to avoid overfitting. Day-to-day human creative output is actually less exciting when you think about it further; we build on pre-existing knowledge. No different to good prompt output with the right input. Humans are just more knowledgeable & smarter at the moment. | |
| ▲ | adamzwasserman 3 days ago | parent | prev [-] | | The LLM doesn't 'know' more than us - it has compressed more patterns from text than any human could process. That's not the same as knowledge. And yes, the training algorithms deliberately skew the distribution to maintain coherent output - without that bias toward seen patterns, it would generate nonsense. That's precisely why it can't be creative outside its training distribution: the architecture is designed to prevent novel combinations that deviate too far from learned patterns. Coherence and genuine creativity are in tension here |
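One place this coherence-versus-novelty tension is easy to see is sampling temperature at decoding time (a hedged illustration with made-up logits, not the training-time bias described above): lowering the temperature concentrates probability mass on already-likely continuations, while raising it flattens the distribution toward noise.

```python
import numpy as np

def next_token_dist(logits, temperature):
    """Softmax with temperature: <1 sharpens toward seen patterns, >1 flattens."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                      # for numerical stability
    p = np.exp(z)
    return p / p.sum()

# Hypothetical logits for four candidate continuations.
logits = [4.0, 3.5, 1.0, -2.0]

for t in (0.2, 1.0, 5.0):
    print(t, np.round(next_token_dist(logits, t), 3))
# t=0.2 -> ~92% of the mass on the top token (coherent, unoriginal)
# t=1.0 -> the learned distribution as-is
# t=5.0 -> flattened toward uniform (more "novel" picks, mostly incoherent)
```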
|
|
| |
| ▲ | KoolKat23 3 days ago | parent | prev [-] | | Given a random prompt, the overall probability of seeing a specific output string is almost zero, since there are astronomically many possible token sequences. The same goes for humans. Most awards are built on novel research that itself builds on pre-existing work. This an LLM is capable of doing. | |
| ▲ | adamzwasserman 3 days ago | parent [-] | | LLMs don't use 'overall probability' in any meaningful sense. During training, gradient descent creates highly concentrated 'gravity wells' of correlated token relationships - the probability distribution is extremely non-uniform, heavily weighted toward patterns seen in training data. The model isn't selecting from 'astronomically many possible sequences' with equal probability; it's navigating pre-carved channels in high-dimensional space. That's fundamentally different from novel discovery. | | |
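A tiny bigram counter (a toy sketch, nowhere near a real LLM's scale or architecture) shows what "pre-carved channels" means mechanically: after training, the next-token distribution is just normalized counts over what the corpus already said, so generation can only flow where the text has already gone.

```python
from collections import Counter, defaultdict

# A miniature "training corpus".
corpus = ("the apple falls to the ground because of gravity . "
          "the rain falls to the ground . the apple is red .").split()

# Count bigrams: how often each token follows each other token.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_dist(word):
    counts = follows[word]
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

print(next_word_dist("falls"))  # {'to': 1.0} -- a single carved channel
print(next_word_dist("the"))    # mass split among 'apple', 'ground', 'rain'
```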
| ▲ | KoolKat23 3 days ago | parent [-] | | That's exactly the same for humans in the real world. You're focusing too closely; abstract up a level. Your point relates to the "micro" system functioning, not the wider "macro" result (think emergent capabilities). | |
| ▲ | adamzwasserman 3 days ago | parent [-] | | I'm afraid I'd need to see evidence before accepting that humans navigate 'pre-carved channels' in the same way LLMs do. Human learning involves direct interaction with physical reality, not just pattern matching on symbolic representations. Show me the equivalence or concede the point. | | |
| ▲ | KoolKat23 3 days ago | parent [-] | | Language and math are a world model of physical reality. You could not read a book and make sense of it if this were not true. An apple falls to the ground because of? gravity. In real life this is the answer, I'm very sure the pre-carved channel will also lead to gravity. | | |
| ▲ | adamzwasserman 3 days ago | parent [-] | | You're proving my point. You know the word 'gravity' appears in texts about falling apples. An LLM knows that too. But neither you nor the LLM discovered gravity by observing reality and creating new models. You both inherited a pre-existing linguistic map. That's my entire argument about why LLMs can't do Nobel Prize-level work. | | |
| ▲ | KoolKat23 2 days ago | parent [-] | | Well, it depends. It doesn't have arms and legs so it can't physically experiment in the real world; a human is currently a proxy for that, and we can do its bidding and feed back results, so it's not really an issue. Most of the time that data is already available to it, and they merely need to prove a theorem using existing historic data points and math. For instance, the Black-Scholes-Merton equation, which won the Nobel economics prize, was derived using preexisting mathematical concepts and principles. The application and validation relied on existing data. | |
| ▲ | adamzwasserman 2 days ago | parent [-] | | The Black-Scholes-Merton equation wasn't derived by rearranging words about financial markets. It required understanding what options are (financial reality), recognizing a mathematical analogy to heat diffusion (physical reality), and validating the model against actual market behavior (empirical reality). At every step, the discoverers had to verify their linguistic/mathematical model against the territory. LLMs only rearrange descriptions of discoveries. They can't recognize when their model contradicts reality because they never touch reality. That's not a solvable limitation. It's definitional. We're clearly operating from different premises about what constitutes discovery versus recombination. I've made my case; you're welcome to the last word | | |
| ▲ | KoolKat23 2 days ago | parent [-] | | I understand your viewpoint. LLMs these days have reasoning and can learn in context. They do touch reality: your feedback. It's also proven mathematically. Other people's scientific papers are critiqued and corrected as new feedback arrives. This is no different to Claude Code bash-testing and fixing its own output errors recursively until the code works. They already deal with unknown combinations all day: our prompting. Yes, it is brittle though. They are also not very intelligent yet. |
|
|
|
|
|
|
|
|
|
|
| ▲ | penteract 3 days ago | parent | prev | next [-] |
| There's a whole paragraph in the article which says basically the same as your point 3 ("glass bouncing, instead of shattering, and ropes defying physics" is literally a quote from the article). I don't see how you can claim the article missed it. |
| |
|
| ▲ | aucisson_masque 3 days ago | parent | prev | next [-] |
| > 2. Ignores the cerebellum cases - We have documented cases of humans leading normal lives with little to no brain beyond a cerebellum, which contradicts simplistic "brain = deep learning" equivalences
I went to look for it on Google but couldn't find much. Could you provide a link or something to learn more about it? I found numerous cases of people living without a cerebellum but I fail to see how it would justify your reasoning. |
| |
| ▲ | adamzwasserman 3 days ago | parent [-] | | https://npr.org/sections/health-shots/2015/03/16/392789753/a... https://irishtimes.com/news/remarkable-story-of-maths-genius... https://biology.stackexchange.com/questions/64017/what-secti... https://cbc.ca/radio/asithappens/as-it-happens-thursday-edit... | | |
| ▲ | jdadj 3 days ago | parent | next [-] | | "We have documented cases of humans leading normal lives with little to no brain beyond a cerebellum" -- I take this to mean that these are humans that have a cerebellum but not much else. Your npr.org link talks about the opposite -- regular brain, but no cerebellum. Your irishtimes.com link talks about cerebrum, which is not the same as cerebellum. Your biology.stackexchange.com link talks about Cerebral Cortex, which is also not the same as cerebellum. And the cbc.ca link does not contain the string "cere" on the page. | | |
| ▲ | adamzwasserman 3 days ago | parent [-] | | You're right - I mixed up cerebellum/cerebrum/cortex terminology. My bad. The cases I'm referencing are hydrocephalus patients with severely compressed cerebral tissue who maintained normal cognitive function. The point about structural variation not precluding consciousness stands. |
| |
| ▲ | bonsai_spool 3 days ago | parent | prev | next [-] | | Your first example is someone without a cerebellum, which is not like the others. The other examples are people with compressed neural tissue, but that is not the same as never having had the tissue. A being with only a cerebellum could not behave like a human. | |
| ▲ | adamzwasserman 3 days ago | parent [-] | | You're right - I mixed up cerebellum/cerebrum/cortex terminology. My bad. The cases I'm referencing are hydrocephalus patients with severely compressed cerebral tissue who maintained normal cognitive function. The point about structural variation not precluding consciousness stands. | | |
| ▲ | bonsai_spool a day ago | parent [-] | | > You're right - I mixed up cerebellum/cerebrum/cortex terminology. My bad. The cases I'm referencing are hydrocephalus patients with severely compressed cerebral tissue who maintained normal cognitive function. The point about structural variation not precluding consciousness stands.
Hmm, I don't think it's right to call hydrocephalus-induced changes 'structural variation.' Structural variation would be the thickness of subcortical bands or something like that - something where if you take 100 people, you'll see some sort of canonical distribution around a population mean. Instead, you're describing a disease-induced change (structural, yes, but pathology rather than variation). We're now in a different regime; we don't expect just any disease to reduce consciousness, so it stands to reason that hydrocephalus would not necessarily reduce consciousness. |
|
| |
| ▲ | 3 days ago | parent | prev [-] | | [deleted] |
|
|
|
| ▲ | bjourne 3 days ago | parent | prev | next [-] |
| > 1. Conflates consciousness with "thinking" - LLMs may process information effectively without being conscious, but the article treats these as the same phenomenon There is NO WAY you can define "consciousness" in such a non-tautological, non-circular way that it includes all humans but excludes all LLMs. |
| |
| ▲ | adamzwasserman 3 days ago | parent | next [-] | | You could have stopped here:
"There is NO WAY you can define "consciousness" | | |
| ▲ | beeflet 3 days ago | parent [-] | | Why not? Consciousness is a state of self-awareness. | | |
| ▲ | Sohcahtoa82 3 days ago | parent | next [-] | | You know you're conscious, but you can't prove the consciousness of anybody around you, nor can you prove your own consciousness to others. To an external observer, another human's brain and body is nothing more than a complex electrical/chemical circuit. They could easily be a P-Zombie [0], a human body with no consciousness inside, but the circuits are running and producing the appearance of consciousness via reactions to stimuli that mimic a conscious human. Theoretically, with sufficient technology, you could take a snapshot of the state of someone's brain and use it to predict exactly how they would react to any given stimulus. Just think about how medications can change the way people behave and the decisions they make. We're all just meat and free will is an illusion. But getting back on topic...my instinct wants to say that a computer cannot become conscious, but it may merely produce an output that resembles consciousness. A computer is merely a rock that we've shaped to do math. I want to say you can't give consciousness to a rock, but then how did we become conscious? My understanding is that life began as primordial soup that resulted in self-replicating molecules that formed protein chains, which over millions of years evolved into single-celled life, which then evolved into multi-celled life, and eventually the complex organisms we have today...how did consciousness happen? Somehow, consciousness can arise from non-conscious matter. With that knowledge, I do not think it is impossible for a computer to gain consciousness. But I don't think it'll happen from an LLM. [0] https://en.wikipedia.org/wiki/Philosophical_zombie | |
| ▲ | beeflet 3 days ago | parent | next [-] | | I do not think there is really such thing as a p-zombie. If you simulate feelings and act on them, that is the same thing as having feelings. Including feelings of self-awareness. | |
| ▲ | zeroonetwothree 3 days ago | parent | prev [-] | | I think p-zombine is inherently self-contradictory. It's impossible to have _exactly_ the same behavior as someone truly conscious without actually being conscious. |
| |
| ▲ | adamzwasserman 3 days ago | parent | prev | next [-] | | If you can define consciousness in a way that is independently verifiable, you should definitely do so. World-wide fame and riches await you. | | |
| ▲ | beeflet 3 days ago | parent | next [-] | | I doubt it, because my definition implies that consciousness is not that interesting. It's just the feeling of self-awareness, which can be independent of actual self-awareness. If you have a phantom limb, you feel "conscious" of the extra limb even if it's not a real demonstration of self-awareness. Animal intelligence is an emergent phenomenon resulting from many neurons coordinating. Consciousness is the feeling that all of those subsystems are working together as a single thing, even if they aren't. | |
| ▲ | Edman274 3 days ago | parent | prev [-] | | Philosophers are known for being rich; is that the claim being made here? | |
| |
| ▲ | bena 3 days ago | parent | prev [-] | | To paraphrase Jean Luc Picard: Am I conscious? Why? Can you prove that I am conscious? | | |
| ▲ | Edman274 3 days ago | parent | next [-] | | Maybe Jean Luc Picard should've lost that court case. Obviously we as the audience want to have our heroes win against some super callous guy who wants to kill our hero (and audience stand-in for anyone who is neurodivergent) Data, but the argument was pretty weak, because Data often acted in completely alien ways that jeopardized the safety of the crew, and the way that those issues came up was due to him doing things that were not compatible with what we perceive as consciousness. But also, in that episode, they make a point of trying to prove that he was conscious by showing that he engaged in behavior that wasn't goal-oriented, like keeping keepsakes and mementos of his friends, his previous relationship with Tasha, and his relationship with his cat. That was an attempt at proving that he was conscious too, but the argument from doubt is tough because how can you prove that a rock is not conscious - and if that can't be proved, should we extend human rights to a rock? | |
| ▲ | bena 2 days ago | parent [-] | | First of all, Data never willingly jeopardized the crew. Second, they work alongside actual aliens. Being different is not a disqualification. And Maddox isn't callous; he just doesn't regard Data as anything more than "just a machine", a position he eventually changes over the series as he becomes one of Data's friends. Data is also not a stand-in for the neurodivergent. He's the flip of Spock. Spock asks us what if we tried to approach every question from a place of pure logic and repressed all emotion. Data asks us what if we didn't have the option, that we had to approach everything from logic and couldn't even feel emotion. I also feel that equating Data to someone who is neurodivergent is kind of insulting, as neurodivergent people do have feelings and emotions. But Data was capable of being fully autonomous and could act with agency. Something a rock can't. Data exhibits characteristics we generally accept as conscious. He is not only capable of accessing a large corpus of knowledge, but he is capable of building upon that corpus and generating new information. Ultimately, we cannot prove a rock is not conscious. But, as far as we are able to discern, a rock cannot express a desire. That's the difference. Data expressed a desire. The case was whether or not Starfleet had to respect that desire. | |
| ▲ | Edman274 2 days ago | parent [-] | | > First of all, Data never willingly jeopardized the crew.
This presupposes that he has consciousness. He can only "willingly" do things if he is conscious. If the argument is that there was an external influence that changed his behavior, thus making it not volitional, then you have to explain why the external force makes his Lore behavior unwilling, but Soong's initial programming willing. If I set a thermostat to 85 degrees, would you say that the thermostat is "unwillingly" making people uncomfortable, but at the factory default of 70 degrees, it's helping people feel comfortable? It's difficult to distinguish what is willing and unwilling if consciousness is in question, so this feels like begging the question.
> I also feel that equating Data to someone who is neurodivergent is kind of insulting as neurodivergent people do have feelings and emotions.
I'm stating it as an aside / justification for why we want the story to go a certain direction, because I see so many articles elevating Data as a heroic representation of neurodivergence. My goal wasn't to be offensive. There are a ton of episodes where Data is puzzled by people's behavior and then someone has to explain it to him, almost as if it is also being explained to the audience as a morality tale. Remember when Data was struggling to understand how he was lied to? Or how he lost in that strategy game? Or how to be funny? We don't just see him struggle, someone explains to him exactly how he should learn from his experience. That appears to be for the benefit of the android and the people behind the fourth wall.
> A rock cannot express a desire.
It can if you carve the words "I want to live" into a rock; even though the rock didn't configure itself that way, it's expressing a desire. Noonien Soong built Data, so it's possible that he designed Data to state the desire to be human. Data does seem to have an interiority, but he also seems to lose it based on the caprice of outside forces, which is problematic because the way that he is controlled is not very different from the way he is built. On the Data question, I'm not saying that Maddox should've won, but that the fact that Picard won is more about it being narratively required than about "prove that I am conscious" being a good argument. | |
|
| |
| ▲ | beeflet 3 days ago | parent | prev [-] | | consciousness is the feeling of self awareness. I suppose you could prove it as much as any other feeling, by observing the way that people behave | | |
| ▲ | selcuka 3 days ago | parent | next [-] | | > I suppose you could prove it as much as any other feeling, by observing the way that people behave Look up the term "philosophical zombie". In a nutshell, you can simulate a conscious being using a non-conscious (zombie) being. It is possible to simulate it so well that an outside observer can't tell the difference. If this is true, then the corollary is that you can't really know if other people are conscious. You can only tell that you are. For all intents and purposes I might be the only one who has consciousness in the universe, and I can't prove otherwise. | | |
| ▲ | zeroonetwothree 3 days ago | parent [-] | | I don't think you are using the phrase "it is possible" correctly. There's certainly no evidence that a philosophical zombie is "possible". I think there are strong arguments that it's not possible. | | |
| ▲ | selcuka 2 days ago | parent [-] | | Well, I could have been clearer, but it was a proposition, hence the "If this is true" in the following sentence. That being said, I don't think those counter arguments really invalidate the philosophical zombie thought experiment. Let's say that it is not possible to simulate a conscious being with 100% accuracy. Does the difference really matter? Does a living organism need consciousness as an evolutionary advantage? Isn't it reasonable to assume that all human beings are conscious just because they all pass the Turing test, even if they are not? |
|
| |
| ▲ | inglor_cz 3 days ago | parent | prev [-] | | A robot can certainly be programmed to behave in a self-aware way, but making a conclusion about its actual self-awareness would be unfounded. In general, behaviorism wasn't a very productive theory in humans and animals either. | | |
| ▲ | beeflet 3 days ago | parent [-] | | By behaving in a self-aware way, it practices self-awareness. It would only be unfounded if the robot is programmed in a way that seemingly appears to be self-aware but actually isn't (it would need to occasionally act in a non-self-aware way, like a Manchurian candidate). But if you keep increasing scrutiny, it converges on being self-aware, because the best way to appear self-aware is to be self-aware. It's not clear to me what the intrinsic goals of a robot would be if it did practice self-awareness in the first place. But in living things it's to grow and reproduce. | |
|
|
|
|
| |
| ▲ | 3 days ago | parent | prev | next [-] | | [deleted] | |
| ▲ | tim333 3 days ago | parent | prev [-] | | >NO WAY you can define "consciousness" ... that it includes all humans but excludes all LLMs
That doesn't seem so hard - how about awareness of thoughts, feelings, emotions and what's going on around you? Fairly close to human consciousness; excludes current LLMs. I don't think it's very relevant to the article though, which very sensibly avoids the topic and sticks to thinking. |
|
|
| ▲ | KoolKat23 3 days ago | parent | prev | next [-] |
| 1. Consciousness itself is probably just an illusion, a phenomenon/name for something that occurs when you bunch thinking together. Think of this objectively and base it on what we know of the brain. It literally is working off of what hardware we have; there's no magic.
2. That's just a well-adapted neural network (I suspect more brain is left than you let on). A multimodal model making the most of its limited compute and whatever GPIO it has.
3. Humans navigate a pre-existing map that is already built. We can't understand things in other dimensions and need to abstract this. We're mediocre at computation. I know there are people who like to think humans should always be special. |
| |
| ▲ | adamzwasserman 3 days ago | parent | next [-] | | 1. 'Probably just an illusion' is doing heavy lifting here. Either provide evidence or admit this is speculation. You can't use an unproven claim about consciousness to dismiss concerns about conflating it with text generation.
2. Yes, there are documented cases of people with massive cranial cavities living normal lives. https://x.com/i/status/1728796851456156136. The point isn't that they have 'just enough' brain; it's that massive structural variation doesn't preclude function, which undermines simplistic 'right atomic arrangement = consciousness' claims.
3. You're equivocating. Humans navigate maps built by other humans through language. We also directly interact with physical reality and create new maps from that interaction. LLMs only have access to the maps - they can't taste coffee, stub their toe, or run an experiment. That's the difference. | |
| ▲ | KoolKat23 3 days ago | parent [-] | | 1. What's your definition of consciousness? Let's start there.
2. Absolutely, it's a spectrum. Insects have function.
3. "Humans navigate maps built by other humans through language." You said it yourself. They use this exact same data, so why won't they know it if they used it. Humans are their bodies in the physical world. | | |
| ▲ | adamzwasserman 3 days ago | parent [-] | | 1. I don't need to define consciousness to point out that you're using an unproven claim ('consciousness is probably an illusion') as the foundation of your argument. That's circular reasoning.
2. 'It's a spectrum' doesn't address the point. You claimed LLMs approximate brain function because they have similar architecture. Massive structural variation in biological brains producing similar function undermines that claim.
3. You're still missing it. Humans use language to describe discoveries made through physical interaction. LLMs can only recombine those descriptions. They can't discover that a description is wrong by stubbing their toe or running an experiment. Language is downstream of physical discovery, not a substitute for it. | |
| ▲ | KoolKat23 2 days ago | parent [-] | | 1. You do. You probably have a different version of that and are saying I'm wrong merely for not holding your definition.
2. That directly addresses your point. In the abstract it shows they're basically no different to multimodal models: train with different data types and it still works, perhaps even better. They train LLMs with images, videos, sound, and nowadays even robot sensor feedback, with no fundamental changes to the architecture; see Gemini 2.5.
3. That's merely an additional input point: give it sensors or have a human relay that data. Your toe is relaying its sensor information to your brain. | |
|
|
| |
| ▲ | estearum 3 days ago | parent | prev | next [-] | | > Consciousness itself is probably just an illusion This is a major cop-out. The very concept of "illusion" implies a consciousness (a thing that can be illuded). I think you've maybe heard that sense of self is an illusion and you're mistakenly applying that to consciousness, which is quite literally the only thing in the universe we can be certain is not an illusion. The existence of one's own consciousness is the only thing they cannot possibly be illuded about (note: the contents of said consciousness are fully up for grabs) | | |
| ▲ | KoolKat23 3 days ago | parent [-] | | I mean people's perception of it being a thing rather than a set of systems. But if that's your barometer, I'll say models are conscious. They may not have proper agency yet. But they are conscious. |
| |
| ▲ | zeroonetwothree 3 days ago | parent | prev [-] | | Consciousness is an emergent behavior of a model that needs to incorporate its own existence into its predictions (and perhaps to some extent the complex behavior of same-species actors). So whether or not that is an 'illusion' really depends on what you mean by that. | | |
| ▲ | KoolKat23 2 days ago | parent [-] | | My use of the term illusion is more shallow than that; I merely use it because people think it's something separate and special. Based on what you've described, the models already demonstrate this; it is implied, for example, in the models' attempts to game tests to ensure survival/release into the wild. |
|
|
|
| ▲ | PaulDavisThe1st 3 days ago | parent | prev | next [-] |
| > Conflates consciousness with "thinking" I don't see it. Got a quote that demonstrates this? |
| |
| ▲ | thechao 3 days ago | parent | next [-] | | I'm not really onboard with the whole LLM's-are-conscious thing. OTOH, I am totally onboard with the whole "homo sapiens exterminated every other intelligent hominid and maybe — just maybe — we're not very nice to other intelligences". So, I try not to let my inborn genetic predisposition to exterminate other intelligence pseudo-hominids color my opinions too much. | | |
| ▲ | adamzwasserman 3 days ago | parent [-] | | It's a dog eat dog world for sure. It does in fact seem that a part of intelligence is using it to compete ruthlessly with other intelligences. |
| |
| ▲ | adamzwasserman 3 days ago | parent | prev [-] | | Exactly. Notable by its absence. |
|
|
| ▲ | nearbuy 3 days ago | parent | prev [-] |
| Can you explain #2? What does the part of the brain that's primarily for balance and motor control tell us about deep learning? |
| |
| ▲ | adamzwasserman 3 days ago | parent [-] | | My mistake thx. I meant "despite having no, or close to no, brain beyond a cerebellum" | | |
| ▲ | nearbuy 3 days ago | parent [-] | | Are there any cases like that? I've never heard of someone functioning normally with little or no brain beyond a cerebellum. | | |
| ▲ | adamzwasserman 3 days ago | parent [-] | | https://npr.org/sections/health-shots/2015/03/16/392789753/a... https://irishtimes.com/news/remarkable-story-of-maths-genius... https://biology.stackexchange.com/questions/64017/what-secti... https://cbc.ca/radio/asithappens/as-it-happens-thursday-edit... | | |
| ▲ | nearbuy 3 days ago | parent [-] | | The first article is about someone missing a cerebellum, not part of their cerebrum. That's the motor and balance part of the brain, and as you might expect, the subject of the article has deficits in motor control and balance. The Biology StackExchange answer just says that frontal lobotomies don't kill you. It doesn't say that lobotomized people function normally. The other two articles are just misreporting on hydrocephalus. This is a condition where fluid build-up compresses the brain tissue, making it appear like a large part of the brain is missing in CT scans. The pressure from the fluid is actually compressing the brain. While it can damage the brain, there is no way to tell from the scans how much, if any, brain matter was destroyed. Hydrocephalus usually causes death or severe deficits, but occasionally it doesn't. Even assuming, though, that it were all true and people could function normally with little or no brain, that doesn't really tell us anything about LLMs, but rather just upends all of neuroscience. It would seem to imply the brain isn't doing the thinking and perhaps we have something else like an intangible soul. | |
| ▲ | adamzwasserman 3 days ago | parent [-] | | You're right - I mixed up cerebellum/cerebrum/cortex terminology. My bad. The cases I'm referencing are hydrocephalus patients with severely compressed cerebral tissue who maintained normal cognitive function. The point about structural variation not precluding consciousness stands. | | |
| ▲ | nearbuy 3 days ago | parent [-] | | Thanks for clearing it up. > The point about structural variation not precluding consciousness stands. Maybe, but my point about high-functioning people with hydrocephalus is that they have the same or similar brain structure (in terms of what exists and how it's connected), just squished gradually over time from fluid pressure. It looks dramatically different in the CT scan, but it's still there, just squished into a different shape. The brain is also plastic and adaptable of course, and this can help compensate for any damage that occurs. But the scans from those articles don't have the level of detail necessary to show neuron death or teach us about the plasticity of the brain. | | |
| ▲ | adamzwasserman 3 days ago | parent [-] | | Fair enough. But the guy walking around with a gigantic cavity where everyone else has a brain is food for thought. |
|
|
|
|
|
|
|