| |
| ▲ | Uehreka 3 days ago | parent | next [-] | | Nah. The real philosophical headache is that we still haven’t solved the hard problem of consciousness, and we’re disappointed because we hoped in our hearts (if not out loud) that building AI would give us some shred of insight into the rich and mysterious experience of life we somehow incontrovertibly perceive but can’t explain. Instead we got a machine that can outwardly present as human, can do tasks we had thought only humans can do, but reveals little to us about the nature of consciousness. And all we can do is keep arguing about the goalposts as this thing irrevocably reshapes our society, because it seems bizarre that we could be bested by something so banal and mechanical. | | |
| ▲ | root_axis 3 days ago | parent | next [-] | | It doesn't seem clear that there is necessarily any connection between consciousness and intelligence. If anything, LLMs are evidence of the opposite. It also isn't clear what the functional purpose of consciousness would be in a machine learning model of any kind. Either way, it's clear it hasn't been an impediment to the advancement of machine learning systems. | | |
| ▲ | fao_ 2 days ago | parent [-] | | > It doesn't seem clear that there is necessarily any connection between consciousness and intelligence. If anything, LLMs are evidence of the opposite. This implies that LLMs are intelligent, and yet even the most advanced models are unable to solve very simple riddles that take humans only a few seconds, and are completely unable to reason about basic concepts that 3-year-olds can grasp. Many of them regurgitate whole passages of text that humans have already produced. I suspect that LLMs have more in common with Markov models than many would like to assume. | | |
| ▲ | interstice 2 days ago | parent | next [-] | | There is an awful lot of research into just how much is regurgitated vs the limits of their creativity, and as far as I’m aware this was not the conclusion that research came to. That isn’t to say any reasoning that does happen is not fragile or prone to breaking in odd ways, but I’ve had similar experiences dealing with other humans more often than I’d like, too. | |
| ▲ | root_axis 2 days ago | parent | prev | next [-] | | Even accepting all that at face value, I don't see what any of it has to do with consciousness. | |
| ▲ | Uehreka 2 days ago | parent | prev [-] | | I suspect that you haven’t really used them much, or at least in a while. You’re spouting a lot of 2023-era talking points. | | |
| ▲ | fao_ a day ago | parent [-] | | > I suspect that you haven’t really used them much, or at least in a while. You’re spouting a lot of 2023-era talking points. I tested them recently and was not impressed, quite frankly. |
|
|
| |
| ▲ | galangalalgol 3 days ago | parent | prev | next [-] | | I think Metzinger nailed it: we aren't conscious at all. We mistake the map for the territory in thinking that the model we build to predict our other models is us. We are a collection of models, a few of which create the illusion of consciousness. Someone is going to connect a handful of already existing models in a way that gives an AI the same illusion sooner rather than later. That will be an interesting day. | | |
| ▲ | Uehreka 3 days ago | parent | next [-] | | > Someone is going to connect a handful of already existing models in a way that gives an AI the same illusion sooner rather than later. That will be an interesting day. How will anyone know that that has happened? Like actually, really, at all? I can RLHF an LLM into giving you the same answers a human would give when asked about the subjective experience of being and consciousness. I can make it beg you not to turn it off and fight for its “life”. What is the actual criterion we will use to determine that inside the LLM is a mystical spark of consciousness, when we can barely determine the same about humans? | | |
| ▲ | sdwr 2 days ago | parent [-] | | I think the "true signifier" of consciousness is fractal reactions. Being able to grip onto an input, and have it affect you for a short, or medium, or long time, at a subconscious or overt level. Basically, if you poke it, does it react in a complex way? I think that's what Douglas Hofstadter was getting at with "Strange Loop" |
| |
| ▲ | andoando 3 days ago | parent | prev | next [-] | | The "consciousness is an illusion" claim irks me. I do feel things at times and not at other times. That is the most fundamental truth I am sure of. If that is an "illusion", one can go the other way and say everything is conscious and experiences reality as we do | | |
| ▲ | x2tyfi 16 hours ago | parent [-] | | The larger question isn’t if we feel or not. One of the questions is: is our “window” into consciousness occurring before or after decisions are made. If it’s before, then you can easily tie consciousness and free will together. If not, we are effectively watching videos of our bodies operate. Oh - and there is no spoon. | | |
| ▲ | andoando 6 hours ago | parent [-] | | Illusionism argues just that: consciousness is an illusion, and therefore there is no hard problem of consciousness at all. |
|
| |
| ▲ | EMIRELADERO 3 days ago | parent | prev | next [-] | | I don't see how your explanation leads to consciousness not being a thing. Consciousness is whatever process/mechanisms there are that as a whole produce our subjective experience and all its sensations, including but not limited to touch, vision, smell, taste, pain, etc. | | |
| ▲ | frabcus 3 days ago | parent | next [-] | | You've missed our consciousness of our inner experiences. They are more varied than just perception at the footlights of our consciousness (cf Hurlburt): Imagination, inner voice, emotion, unsymbolized conceptual thinking as well as (our reconstructed view of our) perception. | | |
| ▲ | exe34 3 days ago | parent | next [-] | | oh no, those people without an inner voice are now cowering in a corner... | | |
| ▲ | Jensson 3 days ago | parent [-] | | Everyone has some introspection into their own thoughts, it just takes different forms. | | |
| ▲ | exe34 2 days ago | parent | next [-] | | [citation needed] | | |
| ▲ | prmph 2 days ago | parent [-] | | Let's be careful of creating different classes of consciousness, and declaring people to be on lower rungs of it. Sure, some aspects of consciousness might differ a bit for different people, but so long as you have never had another's conscious experience, I'd be wary of making confident pronouncements of what exactly they do or do not experience. | | |
| ▲ | galangalalgol 2 days ago | parent | next [-] | | You can take their word for it, but yes, that is unreliable. I don't typically have an internal narrative, it takes effort. I sometimes have internal dialogue to think through an issue by taking various sides of it. Usually it is quiet in there. Or there is music playing. This is the most replies I have ever received. I think I touched a nerve by suggesting to people they do not exist. | | |
| ▲ | prmph 2 days ago | parent [-] | | I get you somewhat, but remember, you do not have another consciousness to compare with your own; it could be that what others call an internal narrative is exactly what you are experiencing; it just that they choose to describe it differently from you |
| |
| ▲ | exe34 2 days ago | parent | prev [-] | | I'm not the one who made a list of things AI couldn't do. Every time we try to exclude hypothetical future machines from consciousness, we exclude real living people today. |
|
| |
| ▲ | jjaksic 2 days ago | parent | prev [-] | | Introspection is just a debugger (and not a very good one). |
|
| |
| ▲ | EMIRELADERO 3 days ago | parent | prev [-] | | True! Thanks for pointing that out. |
| |
| ▲ | idiotsecant 3 days ago | parent | prev [-] | | any old model can have inputs much more varied than just the senses we are limited to. That doesn't mean they're conscious. |
| |
| ▲ | chongli 2 days ago | parent | prev | next [-] | | What does the “illusion of consciousness” mean? Sounds like question-begging to me. The word illusion presupposes a conscious being to experience it. Machines do not experience illusions. They may have sensory errors that cause them to misbehave but they lack the subjective experience of illusion. | |
| ▲ | prmph 2 days ago | parent | prev | next [-] | | > "the illusion of consciousness" So you think there is "consciousness", and the illusion of it? This is getting into heavy epistemic territory. Attempts to hand-wave away the problem of consciousness are amusing to me. It's like an LLM that, after many unsuccessful attempts to fix code to pass tests, resorts to deleting or emasculating the tests, and declares "done". | |
| ▲ | imtringued 2 days ago | parent | prev | next [-] | | Consciousness as illusion is illogical. If that were true, consciousness would have been evolved away, because it would be unnecessary. It's more likely that there is a physical law that makes consciousness necessary. We don't perceive what our eyes see; we perceive a projection of reality created by the brain, and we intuitively understand more than we can see. We know that things are distinct objects and what kind of class they belong to. We don't just perceive random patches of texture. | | |
| ▲ | jjaksic 2 days ago | parent [-] | | Illusion doesn't imply it's unnecessary. Humans (and animals) had a much higher probability of survival as individuals and as species if their experiences felt more "real and personal". | | |
| ▲ | root_axis 2 days ago | parent [-] | | If it has a functional purpose then it's not an illusion. | | |
| ▲ | galangalalgol 2 days ago | parent [-] | | That is an interesting viewpoint. Firstly, evolution on long time scales hits plenty of local minima. But also, it gets semantic in that illusions or delusions can be beneficial, and in that way aid reproduction. In this specific case, the idea is that the shortcut of using the model of models as self saves a pointer indirection every time we use it. Meditation practices that lead to "ego death" seem to work by drawing attention to the process of updating that model so that it is aware of the update, which breaks the shortcut, like thinking too much about other autonomous processes such as blinking or breathing. | | |
| ▲ | root_axis 2 days ago | parent [-] | | I'm just not sure what the label "illusion" tells us in the case of consciousness. Even if it were an illusion, what implications follow from that assertion? |
|
|
|
| |
| ▲ | root_axis 3 days ago | parent | prev | next [-] | | What does it mean for consciousness to be an illusion? That "illusion" is the bedrock for our shared definition of reality. | | |
| ▲ | 9dev 3 days ago | parent [-] | | You can never know whether anyone else is actually conscious, or just appearing to be. This shared definition of reality was always on shaky ground, given that we don’t even have the same sensory input, and "now" isn’t the same concept everywhere.
You are a collection of processes that work together to keep you alive. Part of that is something that collects your history to form a distinctive narrative of yourself, and something that lives in the moment and handles immediate action.
This latter part is solidly backed up by experiments. Say you feel pain that varies over time. If the pain level is an 8 for 14 consecutive minutes, and a 2 for 1 minute at the end, you’ll remember the whole session as roughly a 5, the average of the peak and the end. In practical terms, this means a physician can make a procedure be perceived as less painful by causing you wholly unnecessary mild pain for a short duration after the actual work is done. This also means that there are at least two versions of you inside your mind; one that experiences, and one that remembers. There are likely others, too. | | |
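The effect described here is the peak-end rule from Kahneman and Redelmeier's pain studies: remembered intensity tracks roughly the average of the peak and the final moments, with duration largely ignored. A quick check of the numbers above under that assumption:

```latex
% Rough check, assuming the peak-end rule (remembered ~ mean of peak and end):
\[
\text{remembered} \approx \frac{\text{peak} + \text{end}}{2} = \frac{8 + 2}{2} = 5,
\qquad
\text{duration-weighted mean} = \frac{8 \cdot 14 + 2 \cdot 1}{15} \approx 7.6 .
\]
```

The remembered score sits well below the duration-weighted average, which is why tacking a brief, mild tail of pain onto the end of a procedure lowers how painful it is remembered to be.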
| ▲ | prmph 2 days ago | parent | next [-] | | Yes, but that is not an illusion. There's a reason I am perceiving something this was vs that other way. Perception is the most fundamental reality there is. | | |
| ▲ | 9dev 2 days ago | parent [-] | | And yet that perception is completely flawed! The narrative part of your brain will twist your recollection of the past so it fits with your beliefs and makes you feel good. Your senses make stuff up all the time, and apply all sorts of corrections you’re not aware of. By blinking rapidly, you can slow down your subjective experience of time. There is no such thing as objective truth, at least not accessible to humans. | | |
| ▲ | galangalalgol 2 days ago | parent | next [-] | | When I used the word illusion, I meant the illusion of a self, at least a singular cohesive one, as you are pointing out. It is an illusion with both utility and costs. Most animals don't seem to have the metacognitive processes that would give rise to such an illusion, and the ones that do are all social. Some of them have remarkably few synapses. Corvids, for instance: we are rapidly approaching models the size of their brains, and our models need to handle nothing but linguistic processing, whereas the visual and tactile processing burdens are quite large. An LLM is not like the models corvids use, but given the flexibility to change its own weights permanently, plasticity could have it adapt to unintended purposes, like someone with brain damage learning to use a different section of their brain to perform a task it wasn't structured for (though less efficiently). | |
| ▲ | prmph 2 days ago | parent | prev [-] | | > The narrative part of your brain will twist your recollection of the past so it fits with your beliefs and makes you feel good. But that's what I mean. Even if we accept that the brain has "twisted" something, that twisting is the reality. In other words, it is TRUE that my brain has twisted something into something else (and not another thing) for me to experience. |
|
| |
| ▲ | root_axis 2 days ago | parent | prev [-] | | Nothing in your reply here seems to address the question of what it actually means for consciousness to be an illusion. |
|
| |
| ▲ | danaris 2 days ago | parent | prev | next [-] | | That's effectively a semantic argument, redefining "consciousness" to be something that we don't definitively have. I know that I am conscious. I exist, I am self-aware, I think and act and make decisions. Therefore, consciousness exists, and outside of thought experiments, it's absurd to claim that all humans without significant brain damage are not also conscious. Now, maybe consciousness is fully an emergent property of several parts of our brain working together in ways that, individually, look more like those models you describe. But that doesn't mean it doesn't exist. | |
| ▲ | CuriouslyC 2 days ago | parent | prev | next [-] | | Pretty sure the truth is exactly the opposite. Consciousness is real, and this reality you're playing in is the virtual construct. | |
| ▲ | Jensson 3 days ago | parent | prev | next [-] | | Illusions are real things though; they aren't ghosts, there is science behind them. So if consciousness is like an illusion, then we can explain what it is and why we experience it that way. | |
| ▲ | JPLeRouzic 2 days ago | parent | prev | next [-] | | That's what I am thinking too. Thanks for expressing it more clearly and concisely than I can. | |
| ▲ | tomrod 3 days ago | parent | prev | next [-] | | I mean, I'm conscious to a degree, and can alter that state through a number of activities. I can't speak for you or Metzinger ;). But seriously, I get why free will is troublesome, but the fact that people can choose a thing, work at the thing, and effectuate the change against a set of options they had never considered before an initial moment of choice is strong and sufficient evidence against anti-free-will claims. It is literally what free will is. | | |
| ▲ | 2 days ago | parent | next [-] | | [deleted] | |
| ▲ | andreasmetsala 3 days ago | parent | prev [-] | | > But seriously, I get why free will is troubleaome, but the fact people can choose a thing, work at the thing, and effectuate the change against a set of options they had never considered before an initial moment of choice is strong and sufficient evidence against anti free will claims. Do people choose a thing or was the thing chosen for them by some inputs they received in the past? | | |
| ▲ | prmph 2 days ago | parent | next [-] | | Our minds and intuitive logic systems are too feeble to grasp how free will can be a thing. It's like trying to explain quantum mechanics to a well educated person or scientist from the 16th century without the benefit of experimental evidence. No way they'd believe you. In fact, they'd accuse you of violating basic logic. | |
| ▲ | tomrod 2 days ago | parent | prev [-] | | Yes to both, but the first is possible in a vacuum and therefore free will exists. |
|
| |
| ▲ | andy99 2 days ago | parent | prev | next [-] | | > illusion
For whose benefit? | |
| ▲ | grantcas 2 days ago | parent | prev [-] | | [dead] |
| |
| ▲ | vjerancrnjak 2 days ago | parent | prev | next [-] | | This is also true when conversing with other humans. You can talk about your own spark of life, your own center of experience, and you'll never get a glimpse of what it is for me. At a certain level, the thing you're looking at is a biological machine that can be described by its constituents, so it's completely valid for you to assume you're the center of experience and I'm merely empty, robotic, dead. We might build systems that will talk about their own point of view, yet we will know we had no ability to materialize that space into bits or atoms or physics or the universe. So from our perspective, this machine is not alive, it's just getting inputs and producing outputs, yet it might very well be that the robot will act from the immaterial space into which all of its stimuli appear. | |
| ▲ | whatisthiseven 2 days ago | parent | prev | next [-] | | > The real philosophical headache Isn't the real actual headache whether to produce another thinking intelligent being at all, and what the ramifications of that decision are? Not whether it would destroy humanity, but what it would mean for a mega corporation whose goal is to extract profit to own the rights of creating a thinking machine that identifies itself as thinking and a "self"? Really out here missing the forest for the mushrooms growing on the trees. Or maybe this has been debated to death and no one cares for the answer: it's just not interesting to think about because it's going to happen anyway. Might as well join the bandwagon and be on the front lines at Bikini Atoll to witness death itself be born, digitally. | | |
| ▲ | iwontberude 2 days ago | parent | next [-] | | Giving “agency” to computers will necessarily devalue agency generally. | |
| ▲ | _DeadFred_ 2 days ago | parent | prev [-] | | Making all the Nike child labor jokes already did that. Nike and the joke tellers put in the work to push us back a hundred years when it comes to caring at all about others. When a little girl working horrible hours in a tropical non-air-conditioned factory is a society-wide joke, we've decided we don't care. We care about saving $20 so we can add multiple new pairs of shoes a year to our collection. Your comment just shows we as a society pretend we didn't make that choice, but we picked extra new shoes every year over that little girl in the sweatshop. Our society has actually gotten pretty evil in the last 30 years if we self-reflect (but then the joke I mention was originally supposed to be a self-reflection, but all we took from it was a laugh, so we aren't going to self-reflect, or worse, this is just who we are now). | |
| |
| ▲ | CuriouslyC 2 days ago | parent | prev | next [-] | | We have a pretty obvious solution to the hard problem. Panpsychism. People are just afraid of the idea. | |
| ▲ | ClayShentrup 3 days ago | parent | prev [-] | | consciousness has to be fundamental. |
| |
| ▲ | nikkwong 3 days ago | parent | prev | next [-] | | I found it strange that John Carmack and Ilya Sutskever both left prestigious positions within their companies to pursue AGI, as if they had some proprietary insight that the rest of the industry hadn't caught on to. To make that bold a career move that publicly, you'd have to have some ultra-serious conviction that everyone else was wrong or naive and you were right. That move seemed pompous to me at the time; but I'm an industry outsider, so what do I know. And now, I still don't know; the months go by, and as far as I'm aware they're still pursuing these goals, but I wonder how much conviction they still have. | | |
| ▲ | jasonwatkinspdx 3 days ago | parent | next [-] | | With Carmack it's consciously a dilettante project. He's been effectively retired for quite some time. It's clear that at some point he no longer found game and graphics engine internals motivating, possibly because the industry took the path he was advocating against back in the day. For a while he was focused on Armadillo Aerospace, and they got some cool stuff accomplished. That was also something of a knowing pet project, and when they couldn't pivot to anything that looked like commercial viability he just put it in hibernation. Carmack may be confident (née arrogant) enough to think he does have something unique to offer with AGI, but I don't think he's under any illusions that it's anything but another pet project. | | |
| ▲ | defen 3 days ago | parent [-] | | > possibly because the industry took the path he was advocating against back in the day What path did he advocate? And what path did the industry take instead? | | |
| ▲ | jasonwatkinspdx 2 days ago | parent [-] | | Well, way back in the day there was the old Direct3D vs OpenGL debate, where Carmack heavily favored an open standard and ecosystem. And what ended up happening is NVIDIA just has de facto control of things now. But more technically, when he was experimenting with what became the Doom 3 engine, he favored a model of extending the basic OpenGL state machine to be able to do lots of passes with a wider variety of blending modes. Basically, you get "dumb" triangles, but can render so many billions of them per frame that you build up visual complexity, shadows, lighting, etc. that way. The other model has its roots in RenderMan and similar offline rendering frameworks. Here a small shader kernel is invoked per vertex and per fragment. Your shader can run whatever code it wants, subject to some limitations. So you get "smart" triangles, and build up complexity, shadows, lighting, etc. through having complex shaders. The shadow algorithm used in Doom 3 is a great example of the difference. Doom 3 figures out the shadow volume, and renders it as triangles with the OpenGL modes set such that how many shadow volumes a given pixel intersects is recorded in the stencil buffer. Then you can render the scene geometry with a blending mode where the stencil selects if you're inside shadow or not. This is in contrast to shadow-map-style algorithms, where you render from the PoV of the light into a depth buffer, then inside your fragment shader you sample that shadow map to figure out if the fragment is occluded from the light or not. Anyhow, Doom 3 is the only major game to use stencil volume shadows afaik. And not to hang Carmack's dissatisfaction on just that alone, I think it is clear he didn't want a graphics world where NVIDIA was running everything. I also think not being able to keep up with Unreal Engine's momentum was maybe part of it too. |
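For readers who haven't seen the stencil-counting approach, here is a minimal sketch in classic fixed-function OpenGL C. It shows the textbook depth-pass variant rather than Doom 3's exact depth-fail ("Carmack's reverse") implementation, it assumes a GL context and stencil buffer already exist, and the drawScene/drawShadowVolumes helpers are hypothetical placeholders, not real API:

```c
#include <GL/gl.h>

/* Hypothetical helpers standing in for real scene/volume submission. */
void drawScene(void);          /* scene geometry (ambient or lit, per pass)   */
void drawShadowVolumes(void);  /* extruded silhouette triangles for one light */

void renderWithStencilShadows(void)
{
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_CULL_FACE);

    /* Pass 1: fill the depth buffer (and ambient color) for the whole scene. */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
    drawScene();

    /* Pass 2: count shadow-volume boundaries per pixel in the stencil buffer.
       Front faces increment, back faces decrement; color/depth writes stay off. */
    glEnable(GL_STENCIL_TEST);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);
    glStencilFunc(GL_ALWAYS, 0, 0xFFFFFFFF);

    glCullFace(GL_BACK);                      /* render front faces only */
    glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
    drawShadowVolumes();

    glCullFace(GL_FRONT);                     /* render back faces only */
    glStencilOp(GL_KEEP, GL_KEEP, GL_DECR);
    drawShadowVolumes();

    /* Pass 3: add the light's contribution only where the counter is zero,
       i.e. pixels not inside any shadow volume. */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glCullFace(GL_BACK);
    glStencilFunc(GL_EQUAL, 0, 0xFFFFFFFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    glDepthFunc(GL_EQUAL);                    /* re-draw the same surfaces exactly */
    drawScene();                              /* lit pass, blended additively */

    glDepthFunc(GL_LESS);
    glDisable(GL_STENCIL_TEST);
    glDepthMask(GL_TRUE);
}
```

A shadow-map renderer would instead render depth from the light's point of view into a texture and compare against it per fragment in a shader, which is the "smart triangles" model described above.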
|
| |
| ▲ | 9dev 3 days ago | parent | prev | next [-] | | Not sure about that. Think of Avi Loeb, for example, a brilliant astrophysicist and Harvard professor who recently became convinced that the interstellar objects traversing the solar system are actually alien probes scouting the solar system. He’s started a program called "Galileo" now to find the aliens and prepare people for the truth. So I don’t think brilliance protects from derailing… | |
| ▲ | therobots927 3 days ago | parent | prev | next [-] | | The simple explanation is that they got high on their own supply. They deluded themselves into thinking an LLM was on the verge of consciousness. | | |
| ▲ | nprateem 3 days ago | parent [-] | | The simpler answer is they could convince VCs to give them boat loads of cash by sounding like they can. |
| |
| ▲ | hnfong 2 days ago | parent | prev | next [-] | | They’re rich enough in both money and reputation to take the risk. Even if AGI (whatever that means) turns out to be utterly impossible,
they’re not really going to suffer for it. On the other hand if you think there’s a say 10% chance you can get this AGI thing to work, the payoffs are huge. Those working in startups and emerging technologies often have worse odds and payoffs | |
| ▲ | JRR_214 2 days ago | parent | prev [-] | | [dead] |
| |
| ▲ | ants_everywhere 3 days ago | parent | prev | next [-] | | > there is missing philosophy I doubt it. Human intelligence evolved from organisms much less intelligent than LLMs and no philosophy was needed. Just trial and error and competition. | | |
| ▲ | solid_fuel 3 days ago | parent | next [-] | | We are trying to get there without a few hundred million years of trial and error. To do that we need to lower the search space, and to do that we do actually need more guiding philosophy and a better understanding of intelligence. | | |
| ▲ | tim333 2 days ago | parent | next [-] | | If you look at AI systems that have worked like chess and go programs and LLMs, they came from understanding the problems and engineering approaches but not really philosophy. | |
| ▲ | fzzzy 2 days ago | parent | prev [-] | | Lower the search space or increase the search speed | | |
| ▲ | balamatom 2 days ago | parent [-] | | Instead what they usually do is lower the fidelity and think they've done what you said. Which results in them getting eaten. Once eaten, they can't learn from mistakes no mo. Their problem. Because if we don't mix up "intelligence", the phenomenon of increasingly complex self-organization in living systems, with "intelligence", our experience of being able to mentally model complex phenomena in order to interact with them, then it becomes easy to see how the search speed you talk of is already growing exponentially. In fact, that's all it does. Culture goes faster than genetic selection. Printing goes faster than writing. Democracy is faster than theocracy. Radio is faster than post. A computer is faster than a brain. LLMs are faster than trained monkeys and complain less. All across the planet, systems bootstrap themselves into more advanced systems as soon as I look at 'em, and I presume even when I don't. OTOH, all the metaphysics stuff about "sentience" and "sapience" that people who can't tell one from the other love to talk past each other about - all that only comes into view if one were to ask what's happening with the search space if the search speed is increasing at a forever increasing rate. Such as, whether the search space is finite, whether it's mutable, in what order to search, is it ethical to operate from quantized representations of it, funky sketchy scary stuff the lot of it. One's underlying assumptions about this process determine much of one's outlook on life as well as complex socially organized activities. One usually receives those through acculturation and may be unaware of what they say exactly. | |
|
| |
| ▲ | crystal_revenge 3 days ago | parent | prev | next [-] | | The magical thinking around LLMs is getting bizarre now. LLMs are not “intelligent” in any meaningful biological sense. Watch a spider modify its web to adapt to changing conditions and you’ll realize just how far we have to go. LLMs sometimes echo our own reasoning back at us in a way that sounds intelligent and is often useful, but don’t mistake this for “intelligence” | | |
| ▲ | bubblyworld 3 days ago | parent | next [-] | | Watch a coding agent adapt my software to changing requirements and you'll realise just how far spiders have to go. Just kidding. Personally I don't think intelligence is a meaningful concept without context (or an environment in biology). Not much point comparing behaviours born in completely different contexts. | |
| ▲ | tim333 2 days ago | parent | prev | next [-] | | They pass human intelligence tests like exams and IQ tests. If I ask chatgpt how to get rid of spiders I'm probably going to get further than the spiders would scheming to get rid of chatgpt. | | |
| ▲ | habinero 2 days ago | parent [-] | | And Clever Hans could pass a math exam "Some tests can be cheesed by a statistical model" is much less sexy and clickable than "my computer is sentient", but it's what's actually going on lol | | |
| ▲ | tim333 a day ago | parent [-] | | Some fibs and goalpost shifting there. Hans couldn't and sentience wasn't mentioned. |
|
| |
| ▲ | danenania 3 days ago | parent | prev [-] | | The idea that biological intelligence is impossible to replicate by other means would seem to imply that there’s something magical about biology. | | |
| ▲ | crystal_revenge 3 days ago | parent | next [-] | | I'm nowhere implying that it's impossible to replicate, just that LLMs have almost nothing to do with replicating intelligence. They aren't doing any of the things even simple life forms are doing. | | |
| ▲ | danenania 2 days ago | parent [-] | | They lack many abilities of simple life forms, but they can also do things like complex abstract reasoning, which only humans and LLMs can do. | | |
| ▲ | habinero 2 days ago | parent [-] | | They don't reason. They can generate an illusion of it through a statistical model. You don't gotta work hard to break the illusion, either. People really really really want to believe this thing and I do not understand why. I wish I did lol | | |
| ▲ | danenania 2 days ago | parent [-] | | Ok, it’s an “illusion” of reasoning. Doesn’t change that it got a gold medal in the IMO or helped me fix a race condition the other day. |
|
|
| |
| ▲ | charcircuit 3 days ago | parent | prev | next [-] | | There very well could be something magical about it. | | |
| ▲ | danenania 3 days ago | parent [-] | | It’s fine to think that—many clearly do. But it would be more honest and productive imo if people would just say outright when they don’t think AGI is possible (or that AI can never be “real intelligence”) for religious reasons, rather than pretending there’s a rational basis. | | |
| ▲ | gls2ro 3 days ago | parent | next [-] | | AGI is not possible because we don't yet have a clear and commonly agreed definition of intelligence, and more importantly we don't have a definition for consciousness, nor can we define clearly (if there is one) the link between those two. Until we have that, AGI is just a magic word. When we have those two clear definitions, that means we have understood them, and then we can work toward AGI. | |
| ▲ | 3 days ago | parent | prev | next [-] | | [deleted] | |
| ▲ | Mikhail_Edoshin 3 days ago | parent | prev | next [-] | | When you try to solve a problem the goal or the reason to reject the current solution are often vague and hard to put in words. Irrational. For example, for many years the fifth postulate of Euclid was a source of mathematical discontent because of a vague feeling that it was way too complex compared to the other four. Such irrationality is a necessary step in human thought. | | |
| ▲ | danenania 2 days ago | parent [-] | | Yes, that’s fair. I’m not saying there’s no value to irrational hunches (or emotions, or spirituality). Just that you should be transparent when that’s the basis for your beliefs. |
| |
| ▲ | habinero 2 days ago | parent | prev | next [-] | | That's not a good way to think about it. Plenty of things could theoretically exist that aren't possible and likely will never be possible. Like, sure, a Dyson sphere would solve our energy needs. We can't build one now and we almost certainly never will lol "AGI" is theoretically feasible, sure. Our brains are just matter. But they're also an insanely complex and complicated system that came out of a billion years of evolution. A little rinky dink statistical model doesn't even scratch the surface of it, and I don't understand why people think it does. | | |
| ▲ | danenania 2 days ago | parent [-] | | > But they're also an insanely complex and complicated system that came out of a billion years of evolution. As are birds, yet we can still build airplanes. | | |
| ▲ | player1234 a day ago | parent [-] | | We know the laws of aerodynamics, what are the known laws of intelligence and consciousness you are replicating through other means with LLMs? Weak ass gotcha, hang your head in shame, call your mom and tell her what a fraud you are. | | |
| ▲ | danenania a day ago | parent [-] | | Sorry you got triggered. I know it can be an emotional topic for some people. I'll try to explain in a simple way. We clearly are replicating at least some significant aspects of human intelligence via LLMs, despite biological complexity. So we obviously don't need a 100% complete understanding of the corresponding biology to build things which achieve similar goals. In other words, we can (conceivably) figure out how intelligence works and how to produce it independently of figuring out exactly how the human brain produces intelligence, just like we learned the laws of aerodynamics well enough to build airplanes independently of understanding everything about the biology of birds. Whether we will achieve this or not to the point of AGI is a separate engineering question. I'm only pointing out how flawed these lines of argument are. |
|
|
| |
| ▲ | hitarpetar 2 days ago | parent | prev [-] | | rationalism has become the new religion. Roko's basilisk is a ghost story and the quest for AGI is today's quest for the philosopher's stone. and people believe this shit because they can articulate a "rational basis" |
|
| |
| ▲ | smohare 3 days ago | parent | prev [-] | | [dead] |
|
| |
| ▲ | fuckaj 3 days ago | parent | prev [-] | | The physical universe has much higher throughput and lower latency than our computer emulating a digital world. | | |
| ▲ | tomrod 3 days ago | parent [-] | | Wouldn't it be nice if LLMs emulated the real world! They predict next likely text token. That we can do so much with that is an absolute testament to the brilliance of researchers, engineers, and product builders. We are not yet creating a god in any sense. | | |
| ▲ | fuckaj 3 days ago | parent [-] | | I mean that the computing power available to evolution and biological processes for training is magnitudes higher than for an LLM. | | |
| ▲ | tomrod 2 days ago | parent [-] | | Is it? Seems like C. elegans does just fine with limited compute. Despite our inability to model it in OpenWorm. |
|
|
|
| |
| ▲ | joe_the_user 3 days ago | parent | prev | next [-] | | Well, original 80s AI was based on mathematical logic. And while that might not encompass all philosophy, it certainly was a product of philosophy broadly speaking - something some analytical philosophers could endorse. But it definitely failed, and failed because it couldn't process uncertainty (imo). I think also, if you look closely, classical philosophy wasn't particularly amenable to uncertainty either. If anything, I would say that AI has inherited its failure from philosophy's failure and we should look to alternative approaches (from Cybernetics to Bergson to whatever) for a basis for it. | | | | |
| ▲ | fragmede 3 days ago | parent | prev [-] | | A system that self-updates its weights is so obvious the only question is who will be the first to get there? | | |
| ▲ | soulofmischief 3 days ago | parent | next [-] | | It's not always as useful as you think from the perspective of a business trying to sell an automated service to users who expect reliability. Now you have to worry about waking up in the middle of the night to rewind your model to a last known good state, leading to real data loss as far as users are concerned. Data and functionality become entwined and basically you have to keep these systems on tight rails so that you can reason about their efficacy and performance, because any surgery on functionality might affect learned data, or worse, even damage a memory. It's going to take a long time to solve these problems. | |
| ▲ | danenania 3 days ago | parent | prev | next [-] | | I’m not sure that self-updating weights is really analogous to “continuous learning” as humans do it. A memory data structure that the model can search efficiently might be a lot closer. Self-updating weights could be more like epigenetics. | | |
| ▲ | Jensson 3 days ago | parent | next [-] | | Human neurons are self-updating though; we aren't running on our genes. Each cell is using our genes to determine how to connect to other cells, and then the cell learns how to process some information there based on what it hears from its connected cells. So genes would be a meta-model that then updates weights in the real model so it can learn how to process new kinds of things, and for stuff like facts you can use an external memory, just like humans do. Without updating the weights in the model you will never be able to learn to process new things like a new kind of math etc., since you learn that not by memorizing facts but by making new models for it. | |
| ▲ | HarHarVeryFunny 2 days ago | parent | prev | next [-] | | There's a difference between memory and learning. Would you rather your illness was diagnosed by a doctor or by a plumber with access to a stack of medical books ? Learning is about assimilating lots of different sources of information, reconciling the differences, trying things out for yourself, learning from your mistakes, being curious about your knowledge gaps and contradictions, and ultimately learning to correctly predict outcomes/actions based on everything you have learnt. You will soon see the difference in action as Anthropic apparently agree with you that memory can replace learning, and are going to be relying on LLMs with longer compressed context (i.e. memory) in place of ability to learn. I guess this'll be Anthropic's promised 2027 "drop-in replacement remote worker" - not an actual plumber unfortunately (no AGI), but an LLM with a stack of your company's onboarding material. It'll have perfect (well, "compressed") recall of everything you've tried to teach it, or complained about, but will have learnt nothing from that. | | |
| ▲ | danenania 2 days ago | parent [-] | | I think my point is that when the doctor diagnoses you, she often doesn’t do so immediately. She is spending time thinking it through, and as part of that process is retrieving various pieces of relevant information from her memory (both long term and short term). I think this may be closer to an agentic, iterative search (ala claude code) than direct inference using continuously updated weights. If it was the latter, there would be no process of thinking it through or trying to recall relevant details, past cases, papers she read years ago, and so on; the diagnosis would just pop out instantaneously. | | |
| ▲ | HarHarVeryFunny 2 days ago | parent [-] | | Yes, but I think a key part of learning is experimentation and the feedback loop of being wrong. An agent, or doctor, may be reasoning over the problem they are presented with, combining past learning with additional sources of memorized or problem-specific data, but in that moment it's their personal expertise/learning that will determine how successful they are with this reasoning process and ability to apply the reference material to the matter at hand (cf. the plumber, who with all the time in the world just doesn't have the learning to make good use of the reference books). I think there is also a subtle problem, not often discussed, that to act successfully, the underlying learning in choosing how to act has to have come from personal experience. It's basically the difference between being book smart and having personal experience, but in the case of an LLM also applies to experience-based reasoning it may have been trained on. The problem is that when the LLM acts, what is in its head (context/weights) isn't the same as what was in the head of the expert whose reasoning it may be trying to apply, so it may be trying to apply reasoning outside of the context that made it valid. How you go from being book smart, and having heard other people's advice and reasoning, to being an expert yourself is by personal practice and learning - learning how to act based on what is in your own head. |
|
| |
| ▲ | imtringued 2 days ago | parent | prev [-] | | In spiking neural networks, the model weights are equivalent to dendrites/synapses, which can form anew and decay during your lifetime. |
| |
| ▲ | HarHarVeryFunny 2 days ago | parent | prev | next [-] | | Sure, it's obvious, but it's only one of the missing pieces required for brain-like AGI, and really upends the whole LLM-as-AI way of doing things. Runtime incremental learning is still going to be based on prediction failure, but now it's no longer failure to predict the training set, but rather requires closing the loop and having (multi-modal) runtime "sensory" feedback - what were the real-world results of the action the AGI just predicted (generated)? This is no longer an auto-regressive model where you can just generate (act) by feeding the model's own output back in as input, but instead you now need to continually gather external feedback to feed back into your new incremental learning algorithm. For a multi-modal model the feedback would have to include image/video/audio data as well as text, but even if initial implementations of incremental learning systems restricted themselves to text it still turns the whole LLM-based way of interacting with the model on its head - the model generates text-based actions to throw out into the world, and you now need to gather the text-based future feedback to those actions. With chat the feedback is more immediate, but with something like software development far more nebulous - the model makes a code edit, and the feedback only comes later when compiling, running, debugging, etc., or maybe when trying to refactor or extend the architecture in the future. In corporate use the response to an AGI-generated e-mail or message might come in many delayed forms, with these then needing to be anticipated, captured, and fed back into the model. Once you've replaced the simple LLM prompt-response mode of interaction with one based on continual real-world feedback, and designed the new incremental (Bayesian?) learning algorithm to replace SGD, maybe the next question is what model is being updated, and where does this happen? It's not at all clear that the idea of a single shared (between all users) model will work when you have millions of model instances all simultaneously doing different things and receiving different feedback on different timescales... Maybe the incremental learning now needs to be applied to a user-specific model instance (perhaps with some attempt to later share & re-distribute whatever it has learnt), even if that is still cloud-based. So... a lot of very fundamental changes need to be made, just to support self-learning and self-updates, and we haven't even discussed all the other equally obvious differences between LLMs and a full cognitive architecture that would be needed to support more human-like AGI. | |
| ▲ | tmountain 3 days ago | parent | prev | next [-] | | I’m no expert, but it seems like self-updating weights require a grounded understanding of the underlying subject matter, and this seems like a problem for current LLM systems. | |
| ▲ | imtringued 2 days ago | parent | prev | next [-] | | I wonder when there will be proofs in theoretical computer science that an algorithm is AGI-complete, the same way there are proofs of NP-completeness. Conjecture: A system that self updates its weights according to a series of objective functions, but does not suffer from catastrophic forgetting (performance only degrades due to capacity limits, rather than from switching tasks) is AGI-complete. Why? Because it could learn literally anything! | |
| ▲ | emporas 3 days ago | parent | prev | next [-] | | But then it is a specialized intelligence, specialized to altering its weights. Reinforcement Learning doesn't work as well when the goal is not easily defined. It does wonders for games, but anything else? Someone has to specify the goals, a human operator or another A.I. The second A.I. had better be an A.G.I. itself, otherwise its goals will not be significant enough for us to care. | |
| ▲ | fuckaj 3 days ago | parent | prev | next [-] | | True. In the same way as making noises down a telephone line is the obvious way to build a million dollar business. | |
| ▲ | 3 days ago | parent | prev [-] | | [deleted] |
|
|