nis0s | 6 days ago
Why did developers spread the idea of AI consciousness for LLMs in the first place? The usefulness and capability of an LLM is orthogonal to its capacity to develop consciousness. I think people would use LLMs with more detachment if they didn't believe there was something like a person inside them, but they would still become reliant on them regardless, just as people became reliant on calculators for math.
slipperydippery | 6 days ago
Altman needed to convince companies that these things were on the verge of becoming a machine god, and that they risked being left permanently behind if they didn't dive in head-first now. That's what all the "safety" talk was for, and why he sold it out as soon as it was convenient: it was never serious, not for him; it was a sales tactic to play up how powerful his product might be, so he could get richer. He's a flim-flam artist. That's his history, and it's the role he's playing now.

And a lot of people who should have known better bought it. Others less well-positioned to know better also bought it. Hell, they bought it so hard that the "vibe" around AI hype on this site has only shifted definitively against it in the last few weeks.
fzzzy | 6 days ago
The Eliza effect is incredibly powerful, regardless of whether developers have spread the idea of AI consciousness. I don't believe people would use LLMs with more detachment if developers had communicated different ideas. The Eliza effect is not new.
solid_fuel | 6 days ago
It's more fun to argue about whether AI is going to destroy civilization in the future than to worry about the societal harm "AI" projects are already doing.
vizzier | 6 days ago
The easy answer is the same reason Teslas have "Full Self-Driving" and "Autopilot": it was easy to trick ourselves and others with powerful marketing, because it felt so good to have something that reliably passed the Turing test.
elliotto | 6 days ago
As part of my role I watch a lot of people use LLMs, and it's fascinating to see their different mental models of what the LLM can do. I suspect it's far easier to explore functionality with a chirpy assistant than with an emotionless bot. I also suspect history will remember this as a huge and dangerous mistake, and that we will transition to an era of stoic question-answering bots that push back harder.
blackqueeriroh | 6 days ago
Because humans like to believe they are the most intelligent thing on the planet, and would be very uninterested in something that seemed smarter than them if it didn't act like them.
lm28469 | 6 days ago
> Why did developers

Most of the people pushing this idea aren't developers. It's mostly being pumped by deluded execs like Altman, Zuck, and others with horses in the race. They're closer to being robots than their LLMs are to being human, but they're so deep in their alternative realities that they don't realise how disconnected they are from what humans are, do, and want.

If you made this a sci-fi movie, people wouldn't buy it because the scenario seems too absurd to be real, but that's what we get: some shitty slow-burn Black Mirror type of thing.
acdha | 6 days ago
> Why did developers spread the idea of AI consciousness for LLMs in the first place? The usefulness and capability of an LLM is orthogonal to its capacity to develop consciousness.

One thing I'd note is that it's not just developers: there are huge sums of money riding on the idea that LLMs will produce a sci-fi-movie AI. It's not just OpenAI making misleading claims but much of the industry, which includes people like Elon Musk, who have huge social media followings and also desperately want their share prices to go up.

Humans are prone to seeing communication with words as a sign of consciousness anyway (think about how many people here talk about reasoning models as if they reason), and it's incredibly easy to do that when there's a lot of money riding on it. There's also some deeply weird, quasi-cult-like thought that came out of the transhumanist/rationalist community, which reads like Christian eschatology with "God" replaced by "AGI", while on mushrooms.

Toss all of that into the information-space blender, and it's really tedious to see a useful tool oversold because it's not magic.
rsynnott | 6 days ago
I mean, see the outcry when OpenAI briefly nuked GPT-4o in ChatGPT; people acted as if OpenAI had killed their friend. This is of course all deeply concerning, but it does seem likely that the personified LLM is a more compelling product, and more likely to encourage dependence/addiction.