gen220 | 2 days ago
In some realpolitik/moral sense, does it matter whether it is actually "thinking", or "conscious", or has "autonomy"/"agency" of its own? What seems to matter more is whether enough people believe that Claude has those things.

If people credibly think AI may have those qualities, it behooves them to treat the AI like any other person they have a mostly-texting relationship with. Not in a utility-maximizing Pascal's Wager sense, but in a humanist sense: if you think Claude is human-like, and treat Claude poorly, it makes you more likely to treat the humans around you (and yourself) poorly. Conversely if you're able to have a fulfilling, empathetic relationship with Claude, it might help people form fulfilling, mutually-empathetic relationships with the humans around them. Put the opposite way, treating human-like Claude poorly doesn't seem to help the goal of increasing human welfare.

The implications of this idea are kind of interesting: even if you think AI isn't thinking or conscious or whatever, you should probably still be a fan of "AI welfare" if you're merely a fan of that pesky little thing we call "human flourishing".
notanastronaut | 2 days ago
I know humans have a huge tendency to anthropomorphize inanimate objects and get emotionally attached to them, but watching how people treat inanimate objects is very interesting. I know devices are not alive, cognizant, or capable of feelings, but by thanking them and being encouraging I'm exercising my empathic and "nice" muscles. It has nothing to do with the object and everything to do with myself.

And then you have the people who go out of their way to be hateful towards them, as if they were alive and deserving of abuse. It's one thing to treat a device like an Alexa as just a tool with no feelings. It's another to outright call it hateful sexist slurs, something I'm sadly familiar with. This low-empathy group scares me the most, because given the way they treat objects, well, let me just say they're not so nice to other people they think are beneath them, like wait staff or call center employees. I'd go so far as to say that if the law allowed it, they'd even be violent with those they deem inferior.

Regardless of whether LLMs are thinking or not, I feel I get better responses from the models by being polite. Not because they appreciate it or have any awareness, but simply because the data they are trained on includes samples where people who were nice to others got better responses than those who were nasty when asking questions.

Besides, if one day AGI is born into existence, a lot of people will not recognize it as such. There are humans who don't believe other people are sentient (we're all NPCs to them), or who don't even believe animals have feelings. We'll have credible experts denying the evidence until it bites us all in the arse. Why wait to act ethically?
rob74 | 2 days ago
> Conversely if you're able to have a fulfilling, empathetic relationship with Claude, it might help people form fulfilling, mutually-empathetic relationships with the humans around them.

Well, that's kind of the point: if you have actually used LLMs for any amount of time, you are bound to find out that you can't have a fulfilling, empathetic relationship with them. Even if they offer a convincing simulacrum of a thinking being at first glance, you will soon find out that there's not much underneath. An LLM generates grammatically perfect text that seems to answer your questions in a polite and knowledgeable way, but it will happily lie to you and hallucinate things out of thin air. LLMs are tools, humans are humans (and animals are animals - IMHO you can have a more fulfilling relationship with a dog or a cat than with an LLM).
| ||||||||