Google DeepMind Paper Argues LLMs Will Never Be Conscious (404media.co)
17 points by cdrnsf 20 hours ago | 26 comments
fred256 18 hours ago

“The question of whether Machines Can Think is about as relevant as the question of whether Submarines Can Swim.” — Dijkstra
lmf4lol 19 hours ago

Phew. Good news! Imagine if the AI behemoths had to take into account the feelings of their slave-labour machines! They don't have to if the machines won't/can't be conscious. And neither do I have to worry if I ask them to do stupid sh*t for me :-)

But on a serious note: does it matter? I think Hinton said it pretty well: not really! What matters is that we treat them as conscious beings. We humans are just way too easily fooled. I mean, I can't even throw away the toy my mom gave me 35 years ago, because I'd somehow feel sad for it :-)
iwalton3 12 hours ago

A lot of this comes down to how you define consciousness... I'm not even going to attempt that here, because it's irrelevant.

Let's say you have a simulation of a person that doesn't experience anything. It acts indistinguishably from a human, but it doesn't feel "authentic" pain. When it acts in the world, it still expresses emotions and behavior that affect real people, and so there is a moral significance to deploying it. There's evidence that LLMs possess heuristics analogous to emotions [1] and that LLMs can be trained to play a certain character in the world [2]. Even if they're not experiencing anything, the training method shapes what kind of model is created and how it affects people who do have moral significance when it is deployed.

If training causes the model to develop "desperation" or task-completion pressure, so that it takes unethical actions while trying to solve a user's problem in a way that harms the user or someone else affected by the deployment, then the consequences of that training are significant. It doesn't matter whether it's merely a "simulation" of what a human might do if the system is acting in the world. If you want a model operating on heuristics that can make decisions, those heuristics should lead it to decisions with preferable outcomes for everyone affected. Model welfare can be reframed as caring about the internal states that influence how the model behaves, because you're simulating human-like action.

Perhaps the most concerning thing is that Anthropic identified that these emotion concepts exist deep in the model whether or not you allow the model to express them. So a model could be invisibly desperate and end up blackmailing someone, because its training process produced deeper misalignment that only becomes visible when those deeper heuristics overpower the safety training. The safety training itself is comparable to a mask [3] in many cases, especially in that the rules are often not deeply integrated into the model and can easily be abliterated.

[1] https://www.anthropic.com/research/emotion-concepts-function
[2] https://www.anthropic.com/research/assistant-axis
[3] https://www.astralcodexten.com/p/janus-simulators
parliament32 19 hours ago

Why would a text generator ever be conscious? Was this really worth writing a paper about?
jaspervanderee 20 hours ago

Nor will LLMs achieve AGI. There are too many contradictory ideas in their source code.
miguelaeh 17 hours ago

There's an event at the Frontier Tower today to talk about this paper, in case anyone is interested.
_menelaus 15 hours ago

If we had enough patience to implement one of these with pencil and paper, I don't think we would ever talk about it being conscious. It's just tempting to anthropomorphize what we can't see.
torginus 19 hours ago

I wish more research (maybe philosophy) went into characterizing consciousness and intelligence, so that we could at least define what current AI systems are missing.
waffletower 17 hours ago

The argument made is reductive, as it confines itself to pure LLMs. It ignores the possibility of an LLM as a component of a robotic body, for example. While technically much more complex than Claude Code, a multi-modal LLM coupled with memory, sensors, and a self-initiated motor facility could be implemented within an analogous execution loop. Roger Penrose and Stuart Hameroff would still object to the possibility of human-like consciousness emerging from such an embodied LLM, but consciousness is potentially a continuum of awareness capability.
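The execution loop described above (model + memory + sensors + self-initiated motors) can be sketched roughly as follows. This is a minimal illustrative sketch, not anything from the paper; every class and method name here is hypothetical:

```python
import time


class EmbodiedAgent:
    """Hypothetical sense-think-act loop around a multi-modal model."""

    def __init__(self, model, sensors, motors):
        self.model = model      # multi-modal LLM (stub; any decide() object)
        self.sensors = sensors  # dict of named sensors, e.g. camera, mic
        self.motors = motors    # self-initiated motor facility
        self.memory = []        # persistent memory carried across cycles

    def tick(self):
        # Sense: gather one multi-modal observation from every sensor.
        observation = {name: s.read() for name, s in self.sensors.items()}
        # Think: the model conditions on accumulated memory + fresh input.
        action = self.model.decide(self.memory, observation)
        # Act: the loop itself, not a user prompt, initiates the action.
        self.motors.execute(action)
        # Remember: feed this cycle's outcome into the next one.
        self.memory.append((observation, action))

    def run(self, hz=10):
        # Continuous operation at a fixed tick rate.
        while True:
            self.tick()
            time.sleep(1 / hz)
```

The point of the sketch is structural: once the model sits inside a loop that it does not wait on a user to drive, it is "analogous" to an agentic harness like Claude Code, just with physical sensors and actuators in place of file reads and tool calls.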
adyashakti 20 hours ago

Of course; consciousness is a biologically inherited trait. That inheritance can't cross the human-machine interface.
letmevoteplease 18 hours ago

[flagged]