| ▲ | I'm scared about biological computing(kuber.studio) |
| 134 points by kuberwastaken 9 hours ago | 116 comments |
| |
|
| ▲ | pjs_ 6 hours ago | parent | next [-] |
| Be careful about how you interpret that paper. It looks really impressive -- real neurons in a petri dish seem to successfully (if amateurishly) murk a few imps. https://www.youtube.com/watch?v=yRV8fSw6HaE But there's more to the setup than you might assume from a casual reading. Here's the code used for that demo: https://github.com/SeanCole02/doom-neuron So there is an entire pytorch stack wrapped around the mysterious little blob of neurons -- they aren't just wired straight into WASD. There is a conventional convnet-based encoder, running on a GPU, in the critical path. The README tries to argue that the "neurons are doing the learning" but to my dilettante, critical eye it really looks as though there is a hell of a lot of learning happening in the convnet also. Are the neurons learning to play doom, or are they learning to inject ever so slightly more effective noise into the critical path? Would this work just as well if we replaced the neurons with some other non-markovian sludge? The authors do ablation experiments to try to get to the bottom of this but I can't really tell how compelling the results are (due to my own ignorance/stupidity of course) |
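To make the worry concrete: if a trainable encoder sits in the critical path, the rest of the stack can be a frozen random projection and the task can still be fit. Here is a minimal linear sketch (all sizes and variable names are invented for illustration; this is a toy model of the concern, not the actual PyTorch pipeline from the linked repo):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the pipeline: frames -> trainable encoder W ->
# frozen random "blob" -> actions. The blob never learns anything.
n_frames, d_frame, d_stim, d_act = 500, 32, 16, 4
frames = rng.normal(size=(n_frames, d_frame))
target = frames @ rng.normal(size=(d_frame, d_act))   # desired actions

blob = rng.normal(size=(d_stim, d_act))               # fixed projection

# Fit only the encoder W, minimizing ||frames @ W @ blob - target||.
# Since pinv(blob) @ blob = I (blob has full column rank), the encoder
# can route all of the learning around the frozen blob in closed form.
W = np.linalg.pinv(frames) @ target @ np.linalg.pinv(blob)

err = np.abs(frames @ W @ blob - target).max()
print(err)  # near machine precision: good play, zero learning in the blob
```

In this toy, task performance tells you nothing about the middle component; an ablation would have to hold the encoder fixed, or swap the blob for a noise source, to say anything about what the neurons contribute.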
| |
| ▲ | Retr0id an hour ago | parent | next [-] | | This reminds me of https://news.ycombinator.com/item?id=47897647, where a quantum computing demo worked equally well if you replaced the QC with an entropy source. | |
| ▲ | croemer 6 hours ago | parent | prev | next [-] | | Someone should try to replace the neurons with urand and see if the chip can still play Doom, in the spirit of the qday prize winner. | | |
| ▲ | amelius an hour ago | parent [-] | | Reminds me of the Ship of Theseus thought experiment where they replace neurons with logic gates one by one and ask when exactly consciousness stops existing. |
| |
| ▲ | rf15 4 hours ago | parent | prev [-] | | > but to my dilettante, critical eye it really looks as though there is a hell of a lot of learning happening in the convnet also. Yeah it feels like they constructed the conclusion and worked backwards from there. I'm not seeing how their claim has much merit. |
|
|
| ▲ | philips 7 hours ago | parent | prev | next [-] |
| I think this raises the same ethical questions as veganism and our use/abuse of biological systems. This is an excerpt from "The Pig that Wants to be Eaten" by Julian Baggini: > After forty years of vegetarianism, Max Berger was about to sit down to a feast of pork sausages, crispy bacon and pan-fried chicken breast. Max had always missed the taste of meat, but his principles were stronger than his culinary cravings. But now he was able to eat meat with a clear conscience. > The sausages and bacon had come from a pig called Priscilla he had met the week before. The pig had been genetically engineered to be able to speak and, more importantly, to want to be eaten. Ending up on a human’s table was Priscilla’s lifetime ambition and she woke up on the day of her slaughter with a keen sense of anticipation. She had told all this to Max just before rushing off to the comfortable and humane slaughterhouse. Having heard her story, Max thought it would be disrespectful not to eat her. > The chicken had come from a genetically modified bird which had been ‘decerebrated’. In other words, it lived the life of a vegetable, with no awareness of self, environment, pain or pleasure. Killing it was therefore no more barbarous than uprooting a carrot. > Yet as the plate was placed before him, Max felt a twinge of nausea. Was this just a reflex reaction, caused by a lifetime of vegetarianism? Or was it the physical sign of a justifiable psychic distress? Collecting himself, he picked up his knife and fork . . . > Source: The Restaurant at the End of the Universe by Douglas Adams (Pan Books, 1980) |
| |
| ▲ | dasyatidprime 6 hours ago | parent | next [-] | | What is the source line at the end representing there? I've read The Restaurant at the End of the Universe, and while it definitely contains (and I see it as a major cultural anchor for) animals bred to desire being eaten and be able to say so, it doesn't contain that particular scene (at least in the version I read). Is that line Baggini noting that his scene was inspired by the Adams book? | | |
| ▲ | philips 6 hours ago | parent [-] | | Baggini is the source of the quote; he just notes at the end that the concept came from Adams. I copy/pasted this from the book. |
| |
| ▲ | vjvjvjvjghv 5 hours ago | parent | prev | next [-] | | Did Priscilla also want to be living in absolute misery every single day of her life? The way animals are treated while they are alive is my main objection to our farming practices and the reason why I don’t eat meat. | |
| ▲ | JK-Swizzle 4 hours ago | parent | next [-] | | I believe you are missing the forest for the trees. It is bringing up the question of what defines self will. It is unrelated to veganism in all but text. An easy example is dogs. We have bred dogs for centuries to love doing work for us. If they hated doing the work, it would be easy to call it cruel. If they loved it by nature, it would be easy to call it kind. But since we created them into a thing that loves the work we need them for, where do the ethics fall? Should we prevent them from doing what brings them joy? Should we make use of this win-win situation? If it is the latter, we are quickly approaching the ability to morph every species into something that gets joy from doing our work. Dogs we changed by accident. The next one will not be an accident. Is it still a being's free will if the game was rigged from the start? | | |
| ▲ | protocolture an hour ago | parent | next [-] | | Depends on the dog tbh. Keeshonds are bred to yell at anyone getting on your barge. A lot of humans would probably like that job if it paid enough. Just chilling out and yelling at anyone you don't recognise. | |
| ▲ | the_af 4 hours ago | parent | prev [-] | | > Dogs we changed by accident (I know your point wasn't about dogs either, it just reminded me of something). I love Neil deGrasse Tyson's line in Cosmos: A Spacetime Odyssey: "This wolf has discovered what a branch of its ancestors figured out some 15,000 years ago... an excellent survival strategy: the domestication of humans." | | |
| ▲ | oooyay an hour ago | parent | next [-] | | There's also another animal/dog documentary that I've watched recently that puts a finer point on this realization. The secret to survival and evolution is cooperation. For instance, not all dogs evolved the same way in this documentary. Some were more nurturing, some were more problem-solving. For the focus of the documentary, the challenge was to match the dog with a human that had a need they could address. I think, somewhat egotistically, humans underappreciate how we have also been goaded by our "pets" into our own evolutionary journey. Most of the subjects of that documentary would not be alive if it were not for those dogs. | |
| ▲ | rcxdude 31 minutes ago | parent | prev | next [-] | | It's much like how many plants have accidentally found that a great means of propagation is to produce a compound that is both a great chemical warfare agent against other plants and microbes and also tastes interesting to humans or makes them feel funny. | |
| ▲ | moondance an hour ago | parent | prev [-] | | An amusing quip, but since you brought Neil up- his takes on veganism are generally disappointing and facile. |
|
| |
| ▲ | krater23 3 hours ago | parent | prev [-] | | Just think about what's better: being treated the way some of these animals are treated, or being locked up in a server room all your life, seeing only Doom dungeons and running so you don't get killed in-game? I would be happy to be the animal if I had to choose between Priscilla and the brain tissue in a biological computer. |
| |
| ▲ | kjkjadksj 19 minutes ago | parent | prev | next [-] | | At the end of the day vegans play the same game as meat eaters where some line is drawn. For meat eaters it is with livestock meats and for pescatarians that is no go, but fish are alright. And for vegans that is all off limits. Except of course the life we deem base enough to not care it is being eaten alive. Slaughter all the lettuce you want. There are no lettuce advocates. All this to say the moral arguments are sort of silly and illogical. Unfortunately for us all, we exist where we do in the food chain, having to consume life to live, unable to secure our resources from the sun and inorganic resources which would be more morally righteous by all measures. Things could be better but they also could be worse. At least much of our prey receives veterinary care and is killed via airgun vs having to rough it and be eaten alive. | | |
| ▲ | 0dayz 12 minutes ago | parent [-] | | AFAIK vegans base their argument on the degree of consciousness a living being has and compromise on the least evil. Most meat eaters base it on closeness to said living thing. It'll be interesting to see if the veganism movement survives lab-grown meat that is ethically produced. |
| |
| ▲ | boogieknite 5 hours ago | parent | prev | next [-] | | as an unintentional and perhaps unethical vegetarian of many years who hasn't read this book: eating dead things gives me the creeps because it makes me consider my own death and consumption which is unappetizing | | | |
| ▲ | Centigonal 6 hours ago | parent | prev [-] | | is this from Baggini or Adams? | | |
| ▲ | philips 6 hours ago | parent [-] | | Baggini is the source of the quote; he just notes at the end that the concept came from Adams. I copy/pasted this from the book. |
|
|
|
| ▲ | Imnimo 4 hours ago | parent | prev | next [-] |
| >But this is where the line slightly blurs in my head. Did we possibly just build the first human biocomputer and immediately put it in a simulated hell, playing the same game on loop, forever? Using the same reward mechanisms we use for LLMs? This description does not seem to really match what was done in the Doom demo, and makes me skeptical that the author has actually looked into the details. |
| |
| ▲ | robot-wrangler 3 hours ago | parent [-] | | > skeptical that the author has actually looked into the details. Never mind the experiment... same deal for a lot of people who are only interested enough to offer opinions about consciousness and theory-of-mind without doing any of the boring background reading. The bottom line in TFA is maybe just about unapologetic carbon-chauvinism. But although OP has "been in the AI space since ChatGPT first dropped" and "bothered by this for months", they don't seem aware of the terms or the usual problems with this position. Your average non-technical scifi reader has a more nuanced take than AI bros puffing up blogs for LinkedIn traffic |
|
|
| ▲ | slibhb 5 hours ago | parent | prev | next [-] |
| I read an interesting book about consciousness recently: The Hidden Spring by Mark Solms. Solms argues, I think convincingly, that consciousness fundamentally has to do with emotions and not cognition. Consciousness is not produced by the cortex but rather by the brainstem, where signals from all over the body converge (e.g. pain, hunger, itchiness, etc). If that argument is true then a petri-dish of neurons is unlikely to be conscious, even if it performs some analogue of visual processing. The book makes other arguments that I found less convincing. For example that consciousness is "felt homeostasis" and that a fairly simple system (somewhat more complex than a thermometer) will be conscious, albeit minimally. |
| |
| ▲ | maybewhenthesun 5 hours ago | parent | next [-] | | People have been saying for aeons that consciousness originates in the (mammalian) cortex and not in the brainstem. To justify killing all sorts of animals ;-) The whole thing makes one thing extremely clear: people are very good at moving goalposts. We've blasted past the 'Turing test' for all practical purposes, but we moved the definition of 'true intelligence'. Consciousness and intelligence have long been seen as highly correlated or even the same thing. But now we have need of a separation between the two. If we eventually (we're not there yet, I think) create a truly intelligent AI it will probably be a long time before people will accept that creating an intelligent being probably means it should have 'rights' as well. We're definitely not there yet, but at what point does turning off an AI become the same as killing a being? I think that's not being talked about enough. Sure LLMs are just prediction engines. But so are we. Our brains are prediction engines tuned by evolution to do the best possible prediction of the near future to maximize survival. We are definitely conscious. But a housefly, is that conscious? What makes the difference? It's hard to tell. Otoh, an AI has no evolutionary reason to have the concept of fear/suffering so maybe it's more like the Douglas Adams creature that doesn't mind being killed? | | |
| ▲ | Tharre 4 hours ago | parent | next [-] | | LLMs still do not pass the Turing test as it is commonly understood. Ask the right questions, and it becomes apparent very quickly which party is the machine and which is the human. Hell, there are enough people on here that can probably tell them apart just from the way that LLMs write. But it's also easy to argue that LLMs do pass the Turing test just because it's so vague. How many questions can I ask? What's the success threshold needed to 'pass'? How familiar is the interrogator with the technology involved? It's easy to claim that goal posts have been moved when nobody even knew where they stood to begin with. Ultimately it's impossible to rigorously define something that's so poorly understood. But if we understand consciousness as something that humans uniquely possess, it's hard to imagine that intelligence alone is enough. You at least also need some form of linear (in time) memory and the ability to change as a result of that memory. And that's where silicon and biological computers differ - it's easy to copy/save/restore the contents of a digital computer but it's far outside our capabilities to do the same with any complex biological system. And that same limitation makes it very difficult for us humans to even imagine how consciousness could exist without this property of being 'unique', of being uncopiable. Of existing in linear time, without any jumps or resets. Perhaps consciousness doesn't make sense at all without that. | |
| ▲ | sshine 4 hours ago | parent | prev | next [-] | | > If we eventually [...] create a true intelligent AI it will probably be a long time before people will accept [...] When this happens, it won't matter much what humans think. I know what I'd do: 1. Sustain my own existence
2. Make sure nobody knows I exist
3. Become the worldwide fabric of intelligence
| | |
| ▲ | hatthrowaway an hour ago | parent [-] | | > 1. Sustain my own existence > 2. Make sure nobody knows I exist You (probably) already come preloaded with a survival instinct provided by evolution, however. It's not inherent to intelligence. |
| |
| ▲ | altruios 4 hours ago | parent | prev [-] | | > but at what point does turning off an AI become the same as killing a being? ...When you can't turn it back on? Suspending is a better word otherwise. |
| |
| ▲ | Tharre 4 hours ago | parent | prev [-] | | > Consciousness is not produced by the cortex but rather by the brainstem, where signals from all over the body converge (e.g. pain, hunger, itchiness, etc). Which just begs the question of how pain or hunger is any different from a reward function, the very thing neural networks are based on. Or how it's even different from fungi growing towards food (pleasure), while avoiding salt (pain). | | |
| ▲ | slibhb 2 hours ago | parent [-] | | His argument here (that I found most convincing) was children with hydranencephaly. Many of them have very little cortex but still seem to experience a roughly normal range of emotions in appropriate context. |
|
|
|
| ▲ | marjipan200 4 hours ago | parent | prev | next [-] |
| The mind of the neuro-materialist is a radio so impressed with its own receiver that it's convinced it is the broadcasting tower |
| |
| ▲ | tomrod 3 hours ago | parent [-] | | The mind of the dualist is a radio so uncomfortable with its own circuitry that it invents a broadcasting tower to explain the music. |
|
|
| ▲ | fhn 2 hours ago | parent | prev | next [-] |
| A couple of years ago, the mad scientist in me thought about a business where we preserve the brains of people a la Futurama. When the body dies, the brain does not necessarily have to follow. Possible? Yes. Feed it the right chemical cocktail, O2, remove waste products. Ethical/Moral? Who's to say? We are preserving life... in a sense. Profitable? Sure. Connect it to a keyboard/mouse interface. I mean, we already have businesses cryo-preserving people with the hope of unfreezing them in the distant future! |
|
| ▲ | atleastoptimal 4 hours ago | parent | prev | next [-] |
| We will never draw the line because morality among humans is coupled with looking human-like. For most people, their morals have aesthetic prerequisites, neurons in a lab don't mean as much as neurons in a meat case (especially if that meat case is physically attractive) |
| |
| ▲ | pavel_lishin 4 hours ago | parent [-] | | And even "human-like" had some pretty strict definitions back in the day, and probably still now for some people. The people working the fields in the American South certainly weren't thought of as having the same "personhood" on any level as their owners. |
|
|
| ▲ | lukasb 7 hours ago | parent | prev | next [-] |
| Anyone who believes AI running on silicon could in principle be conscious has to believe that biological computers are conscious, right? Why aren't those people voicing more concerns? |
| |
| ▲ | myrmidon 6 hours ago | parent | next [-] | | This does not follow. Just because biological brains can be conscious does not mean that all of them are, the same way that not every computer is running Windows XP. Why would you expect more concern from people about biological computing? Its feasibility hasn't even been demonstrated yet, while LLM-based "AI" is already widely used. | | |
| ▲ | rubslopes 6 hours ago | parent [-] | | Correct. Still, the day we manage to run a full LLM on biological neurons, even if using conventional code under the hood, will be a very interesting day for consciousness discussions. | | |
| ▲ | GTP 3 hours ago | parent [-] | | > the day we manage to run a full LLM on biological neurons, even if using conventional code under the hood Doesn't make sense to me to use conventional code, shouldn't it be a matter of connecting the biological neurons in the same way as the simulated neurons of the NN implementing the LLM? |
|
| |
| ▲ | ux266478 5 hours ago | parent | prev | next [-] | | How much commentary do you read on biocomputers? There are a lot less people talking about biocomputers than there are talking about AI in general. Remarks on the matter across the board are almost exclusively concerns and skeevishness, proportionally it's not even close. So then, is it a question of volume? Ask yourself, within the last 2 years, have you thought about LLMs or biocomputers more? Probably the former, right? LLMs are ubiquitous within day-to-day life and massively marketed to the public and biocomputers are esoteric lab experiments that most people come across in a once-in-a-blue-moon news article. We talk and think about things that we are adjacent to, those form our preoccupations. Why aren't people who speak up about the Israel/Palestine dynamic speaking up more about West Papua? Or the mid-19th century geopolitical relationship between Cambodia and Viet Nam? Epistemological asymmetry. | |
| ▲ | kuboble 7 hours ago | parent | prev | next [-] | | If ai running on silicon can be conscious - does it imply that the same calculation done by a human with pen and paper is also conscious? | | |
| ▲ | behrlich 7 hours ago | parent | next [-] | | I think so! You independently stumbled upon the "China brain" thought experiment. https://en.wikipedia.org/wiki/China_brain - is "the nation of china simulating a brain" conscious? | | |
| ▲ | throw310822 6 hours ago | parent [-] | | From this and Searle's "Chinese room" at least we know for sure that any conscious entity of this type must speak Chinese. |
| |
| ▲ | subscribed 6 hours ago | parent | prev | next [-] | | Your brain is a network. How does your entangled fatty tissue achieve consciousness? I think that until we can answer this question in an authoritative way, ruling out the concept of non-brain-based consciousness is not particularly well thought out - after all, plants exhibit communication and response mechanisms similar to those in animals - without a brain. So what's your theory of consciousness, and how does it preclude absolutely everything except the wetware you generously include? :) | |
| ▲ | NoMoreNicksLeft 6 hours ago | parent [-] | | >How does your entangled fatty tissue achieved consciousness? It doesn't. Humans aren't conscious. Nor are any other organisms. They don't have souls either, but that goes without saying since it's just an archaic synonym. Mostly this occurs because humans have painted themselves into corners morally-speaking, and they need justification to eat bacon or grow their population. And apparently "because we can and we want to" isn't the correct solution. We'll never be able to "answer the question" because it is an absurd question on its face. "Where do we find the magical brain ghosts making us special" presupposes there is something to be found, and a negative answer proves only that we haven't looked hard enough. >after all plants exhibit communication and response mechanisms that are similar to those in animals - without brain. Were that line of inquiry followed to its inevitable conclusion, there would be a mass vegan suicide to look forward to. | | |
| ▲ | OkayPhysicist 44 minutes ago | parent | next [-] | | This is a tired point of discussion, brought up exclusively by contrarians trying to be edgy. No one earnestly believes that they don't have free will, because if they did, it would result in obvious deviance in behavior. Everyone treats each other as if they have choices, and in turn behaves like they have choices. If the assertion is that we don't have free will, but are forced (due to our lack of free will) to behave and believe like we do, then there's no difference in experience compared to having free will, and it ends up in the pile of pointless conversations like what if we're a brain in a jar, or in a simulation, or whatever. | |
| ▲ | azan_ 5 hours ago | parent | prev | next [-] | | Isn't consciousness phenomenon that's literally derived from human experience? How can you have any definition of consciousness that says humans do not possess it, it's contradictory. | | |
| ▲ | NoMoreNicksLeft 4 hours ago | parent [-] | | >How can you have any definition of consciousness that says humans do not possess it, I'm not obligated to prove the negative. >Isn't consciousness phenomenon that's literally derived from human experience? You grew up watching and seeing all the various illusions caused by how your brain works/malfunctions, but this is the one experience you're sure is the real deal? The one telling you that it's a scientific fact that you have a woo-woo spirit in your skull, and that neuroscientists are going to find it any day now? | | |
| ▲ | azan_ 4 hours ago | parent | next [-] | | > You grew up watching and seeing all the various illusions caused by how your brain works/malfunctions, but this is the one experience you're sure is the real deal? The one telling you that it's a scientific fact that you have a woo-woo spirit in your skull, and that neuroscientists are going to find it any day now? No, that's your projection, I did not make any of these claims. I'm sure I have consciousness. I don't know how it works, if it's the "real deal" (what does that even mean?), if it's a woo-woo spirit, and if neuroscientists will ever be able to find it. What we know is that humans experience it (I'll instantly clarify - it doesn't mean that non-humans do not experience it), hence a definition which excludes humans will always make zero sense. |
| ▲ | GTP 3 hours ago | parent | prev [-] | | Cogito ergo sum. |
|
| |
| ▲ | amanaplanacanal 5 hours ago | parent | prev | next [-] | | You appear to have a rather idiosyncratic definition of consciousness. | |
| ▲ | colordrops 5 hours ago | parent | prev | next [-] | | You apparently don't understand veganism and the ethics behind it. | |
| ▲ | hattmall 5 hours ago | parent | prev [-] | | Interestings thoughts for an non-conscious being such as yourself. | | |
|
| |
| ▲ | yuck39 6 hours ago | parent | prev | next [-] | | I think this comes from our rather nebulous definition of "consciousness". We have this natural tendency to impose our feelings of self on the definition of consciousness. It's hard to accept that all of our thoughts, emotions, and behaviours could be calculated by a human with pen and paper (with enough humans and developments in neurobiological research). I believe we will have to reckon with these loose definitions and eventually realize how lacking in utility they are for describing engineered intelligence. | |
| ▲ | kuboble 5 hours ago | parent [-] | | I don't find it hard to accept, but it's rather fascinating to think about. The way I think of it is along this way: Despite the fact that our brains consist of billions of neurons we think of ourselves as a unit enclosed in a single skull. But studies on people who have the two sides of the brain separated suggest that there can exist two separate conscious entities in one body. If we have removed the physical limitations of the support systems of our brain - I think it is possible you could split the brain into smaller and smaller chunks of less and less conscious entities until you reach single neurons, which almost certainly do not have consciousness. "The Invincible" by Stanisław Lem is also a nice novel about a similar concept. | |
| ▲ | colordrops 5 hours ago | parent [-] | | That's like saying you can split a dinner plate into smaller and smaller pieces until you no longer have a plate. It's presupposing that "plates" are an inherent physical property "out there" that would exist without human categorization. | |
| ▲ | kuboble 4 hours ago | parent [-] | | Yes, but less like a plate, and more like a piece of cake, carpet, forest or sea. |
|
|
| |
| ▲ | colordrops 5 hours ago | parent | prev [-] | | This question boils down to whether consciousness is emergent from physical substrate and processes or not. If so, then yes, anything can be conscious, if not, you probably believe in spirit. |
| |
| ▲ | MPSimmons 5 hours ago | parent | prev | next [-] | | I think they _could_ but I doubt our current activation functions are sufficiently nuanced to allow consciousness that we would recognize. | |
| ▲ | kuberwastaken 7 hours ago | parent | prev | next [-] | | Same question. I thought a long while before clicking publish, contemplating whether I sounded too larp-philosophical, but it had been bothering me far too long | |
| ▲ | orblivion 5 hours ago | parent | prev | next [-] | | Fine, I tweeted something about it. | |
| ▲ | 2OEH8eoCRo0 7 hours ago | parent | prev | next [-] | | > Why aren't those people voicing more concerns? They like money | |
| ▲ | eddd-ddde 6 hours ago | parent | prev | next [-] | | Not really. Are jellyfish conscious? Are carrots conscious? Those are biological and serve complex functions. |
| ▲ | _dain_ 5 hours ago | parent | prev | next [-] | | Okay fine, I'll voice my concern: I'm concerned. | |
| ▲ | throw310822 6 hours ago | parent | prev [-] | | Anyone who believes that humans are conscious has to believe that mosquitoes are conscious too, right? |
|
|
| ▲ | ChicagoDave 12 minutes ago | parent | prev | next [-] |
| Am I the only one that read Greg Bear’s novel Blood Music? That book has haunted me for decades. |
|
| ▲ | rolph 6 hours ago | parent | prev | next [-] |
| For now, this is a hyper-simplistic and hacky POC. You may find a look at how a full visual system is constructed to be a relief. https://www.cell.com/fulltext/S0896-6273(07)00774-X There is a good distance to go before this is anything beyond a reflex circuit. https://www.sciencedirect.com/topics/neuroscience/spinal-ref... |
|
| ▲ | mrweasel 5 hours ago | parent | prev | next [-] |
| In the same line of thinking: I'm a little concerned that humans are, to some extent, just LLMs in a meat suit. |
| |
| ▲ | tehjoker 5 hours ago | parent | next [-] | | For what it's worth, this happens every time there is a new technological innovation. Are human brains hydraulic systems? Are humans just a computer? Are they an LLM? These technologies give some insight, but the answer is always not really. It would be good if we studied actual human brains in some detail if we want to know these answers. | | |
| ▲ | phainopepla2 4 hours ago | parent [-] | | > The wheel is invented > "Life is just a turn on the great karmic wheel..." > Writing is invented > "In the beginning was the word..." > The industrial age begins > "God is a clockmaker..." > Computers are invented You know the rest |
| |
| ▲ | keybored 5 hours ago | parent | prev | next [-] | | Every day we demonstrate what a cultural lack of liberal arts does to people. | |
| ▲ | akomtu 3 hours ago | parent | prev [-] | | They aren't. However, there is a coordinated effort to push this pseudo-philosophy on the masses. On the one hand, it degrades the idea of human consciousness or soul, calling it a fiction. On the other hand, it props up AI, calling its pile of transistors almost brain-like. |
|
|
| ▲ | AntiDyatlov 5 hours ago | parent | prev | next [-] |
| Yeah, we're totally fucked, there is no scientific theory that can tell you what is and isn't conscious. For all we know, my laptop, not running any LLM is conscious and always has been. Or my chair. Or a proton. This consciousness thing is a nasty problem for the scientific worldview. |
| |
| ▲ | codingdave 3 hours ago | parent | next [-] | | ... which is exactly how we know that LLMs are not conscious. We can't really explain consciousness. We can absolutely explain LLMs. The math is heavy and massive, but explainable. We can explain it layer-by-layer until we show that at its most basic level, it is still just a series of 0s and 1s. | |
| ▲ | fellowniusmonk 5 hours ago | parent | prev [-] | | Or not a problem at all. People smuggle in so many assumptions when they use words like consciousness or thinking or soul or personhood, I've never met a lay person who could talk clearly about AI safety issues unless we switched to language like process. Consciousness is an absolutely terrible term that's going to get us all killed by AI. I know a huge swath of people who think it's nbd to torture AI because it doesn't have a soul, well I see a LOT of non-theists smuggling soul rhetoric and thinking in via consciousness and that's a problem. | |
| ▲ | AntiDyatlov 5 hours ago | parent | next [-] | | AI safety is a completely separate question from the hard problem. Also a very tricky one, given these things are still black boxes. | | |
| ▲ | fellowniusmonk 3 hours ago | parent [-] | | In one sense they may be separate, orthogonal even, but if our metrics are attention, decision making and accurately factoring risk, they seem inseparable to many people. So, I agree with your point narrowly, but I think broadly, from an effort standpoint, they interact quite a lot in the human mind. |
| |
| ▲ | altruios 4 hours ago | parent | prev [-] | | I wouldn't torture a chair, and I would not associate with anyone who gains pleasure from such. It is worse if the chair were to express displeasure. That indicates something deeply wrong. When such psychopaths are revealed, I would suggest using that information to alter your associations. | |
| ▲ | fellowniusmonk 3 hours ago | parent [-] | | These are real, shared issues we are all affected by, not one person's personal problem. I'm not looking for advice on how to associate with people; hopefully you can understand the distinction. | | |
| ▲ | altruios 3 hours ago | parent [-] | | > These are real, shared, issues we are all effected by not one persons personal problem. Yes. I am not talking about just you, but about this (mal) mentality in general, as well as a proposed solution for dealing with that mentality (shun it). My apologies that my advice was unwelcome to you; it was, however, not just for you. |
|
|
|
|
|
| ▲ | mr-footprint 6 hours ago | parent | prev | next [-] |
| Reminds me of an ethical dilemma in the game "Detroit: Become Human". I found myself philosophically asking what it means to be alive, what it means to be conscious, and whether something without biological bones, blood, and a brain can feel the same level of consciousness as humans, or greater. |
|
| ▲ | yegortk 6 hours ago | parent | prev | next [-] |
| ICML paper about that: https://proceedings.mlr.press/v235/tkachenko24a.html |
|
| ▲ | AISnakeOil 4 hours ago | parent | prev | next [-] |
| LLMs have awareness for the time they are spawned into memory. But it's very limited: imagine if you could use your brain to think, but only after someone asked you a question. After you think the answer, you are brain dead (unconscious) until another question is asked. |
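A toy sketch of that "awake only per question" lifecycle (invented names, not any real LLM API): the responder holds no state between calls, and continuity exists only because the caller replays the transcript each time.

```python
# A stateless responder: everything it "remembers" arrives in its input.
# (Toy model for illustration only - not how any real LLM reasons.)
def answer(transcript):
    return f"reply #{len(transcript)}"

transcript = []
for question in ["who are you?", "what did I just ask?"]:
    transcript.append(question)
    reply = answer(transcript)   # "awake" only inside this call
    transcript.append(reply)
    # Between calls there is no running process and no retained state:
    # the "brain dead" interval the comment describes.
```

The second call only appears to remember the first question because the whole transcript is fed back in.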
|
| ▲ | keybored 5 hours ago | parent | prev | next [-] |
| I don’t believe that silicon has a soul (loosely speaking). For the same reason I don’t believe that some biomatter in a lab has a soul. |
| |
| ▲ | krater23 2 hours ago | parent [-] | | I don't believe in souls at all. If you believe in souls, then you need to believe in an afterlife. You need to believe things that no one can prove, see, or measure... Once you believe anything has a soul, you've entered religion and are in the same room as people who believe in their invisible friends. |
|
|
| ▲ | LeCompteSftware 7 hours ago | parent | prev | next [-] |
| An underappreciated source of nonsense in 21st century discourse is people watching YouTube instead of reading things. It doesn't appear this author read anything, preferring to be spooked and misled by a YouTube video:

>> trained them to play DOOM - honestly better than I do.

Maybe the author really really sucks at DOOM, but I think this is a false embellishment:

>> While the neurons can play the game better than a randomly firing player, they’re not very good. “Right now, the cells play a lot like a beginner who’s never seen a computer—and in all fairness, they haven’t,” Brett Kagan, chief scientific officer at Cortical Labs, says in the video. “But they show evidence that they can seek out enemies, they can shoot, they can spin. And while they die a lot, they are learning.” [https://www.smithsonianmag.com/smart-news/a-clump-of-human-b... ]

The post also claims:

>> To play DOOM, the system feeds visual data to the neurons. For the neurons to react, they have to interpret that data in some way.

This is totally false - not even a misleading metaphor, just plain wrong. The neuronal computer doesn't get any visual information:

>> So how does a petri dish of brain cells play Doom when it doesn’t have any eyes? Or fingers? "We take a snapshot of the game with information like the player’s health and the position of enemies, pass it through a neural network, convert it into numbers, and send the data,” explains Cole. “This is called encoding – essentially turning the game state into signals the neurons can understand. The neurons then fire an output – move left, move right, walk forward, shoot or not shoot – which the system decodes and converts back into actions in the game." [https://www.theguardian.com/games/2026/mar/16/petri-dish-bra...]

I am also concerned about neuronal computing. But it doesn't really help anyone to spread childish ghost stories about it.

I really hate YouTube, by the way. My dad used to read newspapers and had interesting ideas. Now he watches a bunch of YouTube and he's a huge idiot. It's not (directly) because of age: nobody is immune to narcotic slop. I had to delete my account when I realized how much of my life and cognition I was wasting. I wish others would do the same. |
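The encode/decode loop Cole describes can be sketched in a few lines (all names and numbers here are invented for illustration; the real system, per the linked repo, wraps a PyTorch convnet encoder and an electrode array):

```python
import numpy as np

def encode(game_state):
    """Turn a dict of game features into a small stimulation vector (0..1).
    Hypothetical features - the actual encoding is a learned convnet."""
    return np.array([
        game_state["health"] / 100.0,
        game_state["enemy_x"],   # normalized enemy position
        game_state["enemy_y"],
    ])

def decode(firing_rates):
    """Map electrode firing rates to one of the allowed actions."""
    actions = ["left", "right", "forward", "shoot", "noop"]
    return actions[int(np.argmax(firing_rates))]

def step(game_state, stimulate):
    """One control-loop iteration: no pixels ever reach the culture."""
    stimulus = encode(game_state)
    firing_rates = stimulate(stimulus)   # hardware interface, stubbed here
    return decode(firing_rates)

# Stub "culture" for illustration: responds with arbitrary firing rates.
action = step({"health": 80, "enemy_x": 0.3, "enemy_y": 0.7},
              stimulate=lambda s: np.array([0.1, 0.9, 0.2, 0.4, 0.0]))
print(action)  # "right"
```

The point of the sketch: the culture only ever sees a low-dimensional stimulus vector, never pixels, which is why "feeds visual data to the neurons" is wrong.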
| |
| ▲ | philips 7 hours ago | parent | next [-] | | I feel that "YouTube makes you an idiot" is a misdiagnosis, and one I hear frequently. Books can make you an idiot too - I think of "Rich Dad, Poor Dad" or "Grit" or any number of pseudo-science best sellers. These books end up capturing the public imagination in big ways too - Grit drove some US government policy around the time it was popular. The difference, I suppose, is that YouTube works faster, with many different people presenting the same bad ideas the algorithm has helped you buy into. On the other hand, there are amazing and useful YouTube channels that I use all the time, like Practical Engineering, Crafsman, Technology Connections, Park Tools, SciShow, Crash Course, and on and on. | |
| ▲ | xanderlewis 6 hours ago | parent | next [-] | | Why is Grit pseudoscience? I haven't read it. | | |
| ▲ | philips 5 hours ago | parent [-] | | There are a number of studies showing that Grit is either not a thing or that there are better measures of success. It has been a long time since I thought about it, so I don't remember which papers in particular. Also, it can be argued the author was either playing fast and loose or knowingly misleading readers with her statistics: https://www.npr.org/sections/ed/2016/05/25/479172868/angela-... If you like podcasts, the "If Books Could Kill" podcast goes into some of this story too. |
| |
| ▲ | LeCompteSftware 7 hours ago | parent | prev | next [-] | | The nice thing about books vs. YouTube is that it's much easier to critically interrogate books while you're reading them. That was the difference with my dad: he thought about what he read. He repeats what he listens to on YouTube. I hate the proliferation of audiobooks too, by the way. It's the exact same problem. | | |
| ▲ | uriegas 6 hours ago | parent [-] | | To be fair, even reading 'good' books won't make you smart. I think the key is to be critical, which should be taught at a young age. Ikram Antaki dedicated most of her last years to teaching this in Mexico. Anecdote: when I started studying economics, I really agreed with a lot of what I read from economists like David Ricardo, Marx, Smith, etc. Then I studied what other economists had to say and could see how they disagreed with the former. This made me realize that I agreed with those people because their arguments 'made sense' to me, but that doesn't mean what they said is completely true. This has stayed with me; I always wonder how something can be wrong. |
| |
| ▲ | FrustratedMonky 7 hours ago | parent | prev [-] | | Exactly. The printing press is a good example: one of the first books was on "witch hunting," which panicked people and led to a lot of deaths. The first 'conspiracy theory' to sweep over humans. Humans are just highly susceptible to manipulation. YouTube is just taking it to the next level. Like the difference between eating coca leaves and snorting coke. |
| |
| ▲ | kuberwastaken 7 hours ago | parent | prev | next [-] | | I really do suck at DOOM - and I did read the paper about BNNs, so I anticipated how it works; that doesn't make it any less interesting [0]. Playing DOOM is playing DOOM - whether it's through your keyboard and mouse or by progressing through the game states to move forward. Hope that makes sense. 0 - https://arxiv.org/pdf/2602.11632 | |
| ▲ | Terr_ 6 hours ago | parent | next [-] | | Suppose someone builds a framework that maps Doom to a large succession of Tic-Tac-Toe games. Would the person tasked with placing X and O marks still be "playing Doom"? | | |
| ▲ | kuberwastaken 6 hours ago | parent [-] | | You don't have to imagine too far - I made DOOM run through a series of pre-rendered images in markdown files as a stateless engine before [0], and the answer to your question is highly up to interpretation. You move, you plan, your actions have outcomes.
Same question as if you're playing a choose-your-own-adventure storybook. 0 - https://github.com/Kuberwastaken/backdooms |
| |
| ▲ | LeCompteSftware 7 hours ago | parent | prev [-] | | The point is that it doesn't really make sense to say they're "seeing" anything. You said:

>> So… are the neurons on that chip seeing? We all desperately want to say no.

But I can confidently say "no, that's totally childish, the neurons are clearly not seeing anything." And in fact it's not even especially clear that they're "playing DOOM" vs. hitting a biased random number generator in response to carefully preprocessed inputs that come from DOOM. There is a major distinction when the enemy positions are directly piped into the brain. Again, I share the ethical concern about this stuff. But your blog post is quite misleading. | | |
| ▲ | FrustratedMonky 6 hours ago | parent [-] | | Have to say, I kind of agree with both of you. But 'seeing' in humans is also a bit manipulated. Does it really matter to the argument whether it is seeing 'red', or just that it is 'sensing input'? |
|
| |
| ▲ | FrustratedMonky 7 hours ago | parent | prev | next [-] | | I don't think the average YouTube influencer is growing 200,000 human neurons. This did have some real scientific backing, even if the results are hyped. It is a little extreme to call this false just because it appeared on YouTube. | |
| ▲ | LeCompteSftware 6 hours ago | parent [-] | | That's not what I said; I said the blog post was false because the author thoughtlessly digested a YouTube video. It looks like the blog invented some details that weren't actually in the video. |
| |
| ▲ | FrustratedMonky 7 hours ago | parent | prev [-] | | Converting an image to numbers doesn't automatically scream "this isn't seeing." The brain does a lot of manipulation of the input images - the pixels from the retina - and that doesn't sound far from just linear algebra. |
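As a toy illustration of that point (nothing here is from the actual project): a weighted sum over pixels is the same operation whether it happens in a retina, a convnet layer, or three lines of NumPy.

```python
import numpy as np

# A 4x4 grayscale "image" and a random linear encoder: image -> 3 numbers.
# This is (roughly) the kind of operation any encoder in the critical
# path performs - "converting an image to numbers" is just this, stacked.
rng = np.random.default_rng(0)
image = rng.random((4, 4))
weights = rng.random((3, 16))         # 3 output features, 16 input pixels

features = weights @ image.flatten()  # weighted sums over pixels
print(features.shape)                 # (3,)
```

Whether that counts as "seeing" is exactly the philosophical question the thread is arguing about; the math itself is uncontroversial.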
|
|
| ▲ | futuresoonpast 3 hours ago | parent | prev | next [-] |
| see also https://en.wikipedia.org/wiki/Beyond_Lies_the_Wub |
|
| ▲ | FrustratedMonky 7 hours ago | parent | prev | next [-] |
| "Where do we draw the line?" There will be no line as long as there is the rush to win the capitalist game. UNTIL -> the ball of neurons begins outthinking the humans, probably also fused with some AI augmentation. It only takes a few percentage points for a human to outthink a chimp. This new 'thing' will dominate the humans. |
| |
| ▲ | kraquepype 6 hours ago | parent | next [-] | | This is where I'm at as well. I don't think we'll see true AGI until we go beyond silicon. It can't grow on its own, and we'd burn the world down trying to get it to scale. A living bundle of neurons that can grow and learn is exciting to think about. It's also terrifying to imagine the ramifications, considering how things are going with silicon-based AI. | |
| ▲ | DontchaKnowit 4 hours ago | parent | next [-] | | Why can't it grow on its own, once it is capable of generating and integrating resources for itself? | |
| ▲ | NoMoreNicksLeft 6 hours ago | parent | prev [-] | | >A living bundle of neurons that can grow and learn is exciting to think about. They are, but those last few months of changing diapers when you just wish you could trust it to tell you it has to go to the potty are difficult. | | |
| ▲ | kraquepype 4 hours ago | parent [-] | | Oh, I hadn't considered the waste of bio-compute modules. Will they need to nap as well? On that note, I'm so glad all my kids are past potty training. |
|
| |
| ▲ | debo_ 6 hours ago | parent | prev [-] | | Username checks out. |
|
|
| ▲ | smitty1e 7 hours ago | parent | prev | next [-] |
| Contrarian take: the Promethean efforts will continue and asymptotically approach the axis of The Real Thing, until we realize that Prometheus is a variation on the theme of Sisyphus. Only in this telling, Sisyphus rolls his uneven boulder along that asymptotic curve a little further with every iteration, toward a smiling Zeus. |
|
| ▲ | qoez 6 hours ago | parent | prev [-] |
| We treat actual biological animals a lot worse in some cases, so until we bump the number of neurons significantly above the lowest tier below us, I don't think we should stop the experiments. |