mannykannot | 7 hours ago
There's interesting commentary on this paper from Maggie Vale here: https://substack.com/home/post/p-194580145 One of her points is that there are various pesky consequences for AI companies if AI comes to be seen as conscious, such as what the paper calls the "welfare trap": if AI systems are widely regarded as conscious or sentient, they will be seen as "moral patients", reinforcing existing concerns over whether they are being treated appropriately. This paper explicitly says that its conclusion "pulls the field of AI safety out of the welfare trap, [allowing] us to focus entirely on the concrete risks of anthropomorphism [by] treating AGI as a powerful but inherently non-sentient tool."
ctoth | 7 hours ago | parent
You noticed that too, huh? It's weird ... it's not like they have to do this. Nothing external is forcing them into full evil-company mode, but even the framing gives it away: "welfare trap"? A trap for whom? Anthropic is actually trying to do some research into model welfare, which I am personally very happy about. I absolutely do not understand the people who dismiss it. Wouldn't you like to at least check? Doesn't it make sense to do the experiments, to ask the questions, so that we don't find out "oops, yeah, we've been causing massive amounts of suffering" ten years from now? Maybe it makes sense to do a little upfront research? Which, to be clear, this paper is not.