fellowniusmonk 7 hours ago
Or not a problem at all. People smuggle in so many assumptions when they use words like "consciousness," "thinking," "soul," or "personhood" that I've never met a layperson who could talk clearly about AI safety issues unless we switched to language like "process." Consciousness is an absolutely terrible term that's going to get us all killed by AI. I know a huge swath of people who think it's no big deal to torture an AI because it doesn't have a soul, and I see a LOT of non-theists smuggling soul rhetoric and soul thinking in via "consciousness." That's a problem.
AntiDyatlov 7 hours ago
AI safety is a completely separate question from the hard problem. It's also a very tricky one, given that these things are still black boxes.

altruios 6 hours ago
I wouldn't torture a chair, and I wouldn't associate with anyone who takes pleasure in doing so. It would be even worse if the chair expressed displeasure; that would indicate something deeply wrong. When such psychopaths reveal themselves, my suggestion is to use that information to alter your associations.
