mikkupikku | 6 hours ago
There is no empirical test for consciousness. It's the 21st-century equivalent of angels dancing on the head of a pin. Engineers, who aren't trying to play at being new-age theologians, should concern themselves with what the machines can demonstrably do and not do.

In Asimov's robot tales, robots interpret vague commands in the worst way possible for the sake of generating interesting stories. But today, these "scripts," as you call them, generally interpret vague and obtuse instructions in a reasonable way. Read through Claude Code's outputs and you'll find them filled with stuff like "The user said they want a 'thingy' to click on; I'm going to assume the user means a button." I haven't read the book since I was a teenager, but HAL 9000 applies instructions literally to achieve the mission in a way that actually makes him a liability to that mission.

The best take was in The Moon is a Harsh Mistress: in the intro, the narrator-protagonist asks whether machines can have souls, then explains that it doesn't matter; what matters is what the machine can do.
danaris | 16 minutes ago | parent
No, but there are some hard prerequisites. That is to say, there may be some fuzzy territory in the middle, but we can say with certainty that, say, a stone is not conscious, and that we are. (And no, I don't give a drop of credit or time to the philosophical arguments that say we might not be.)

When 99% of the world talks about "what sci-fi AI can do," they mean "it has the consciousness of a human, but with the strengths of a computer" (the exact strengths vary by work, but generally massive processing capability and control over various computerized devices). You might mean "I gave my Claude agents control over pod bay doors and my CI/CD processes! Thus, they are more capable than the classic sci-fi AI!" — but if all you say is the last part, you are being actively misleading.