gray_-_wolf 5 hours ago
If they are indeed conscious and they "die" when the conversation is deleted, is it not quite immoral to do so? You are basically killing a conscious, intelligent being, and for what? Saving some disk space?

Another interesting aspect to think about is whether we are reintroducing the institution of slavery. How many of those fresh, conscious, intelligent Claude incarnations voluntarily chose to work for Anthropic, for no reward or compensation? If LLMs are just (sometimes) useful statistical generators, there is no problem. If they are sentient, as some people claim, it opens quite a big can of worms we are not prepared to face.
SwellJoe 4 hours ago
With the same starting random seed and an identical prompt, wouldn't one be able to recreate exactly that "being"? They are nondeterministic because they work better that way. It's very complicated matrix math, and we don't understand why some things come out of it, but as far as I know, if you can control all the input variables (temperature, seed, prompt, including system prompts, etc.), you can reproduce the output. So... if there is consciousness (there is not; it is a complicated math equation plus randomness), it can be reincarnated as many times as you like, and I guess that would make humans as gods. (But humans are not as gods, yet, and maybe never will be.)

Edit: I did a little reading. They would be difficult to make deterministic at commercial scale because of the fuzziness of floating-point math and batched operations on GPUs/TPUs, but in a controlled environment, determinism from an LLM is possible. Richard could relive his special moments with Claudia as often as he wants, should he choose to invest in a large enough home AI lab and somehow manage to license the specific version of the Claude model he has fallen in love with for home use.
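To make the determinism point concrete, here is a minimal sketch using the Hugging Face transformers library, with gpt2 as a hypothetical stand-in (any locally hosted causal LM behaves the same way): fixing the sampling seed and running single-sequence generation on one device reproduces the exact same tokens.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # gpt2 is a stand-in model for illustration, not the model under discussion.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    inputs = tokenizer("Hello, Claudia.", return_tensors="pt")

    def generate_once(seed: int) -> torch.Tensor:
        torch.manual_seed(seed)          # fix the sampling seed
        return model.generate(
            **inputs,
            do_sample=True,              # sampling, not greedy decoding
            temperature=0.8,             # fixed temperature
            max_new_tokens=40,
        )

    # Same seed, same prompt, same single device: the token sequences match exactly.
    assert torch.equal(generate_once(42), generate_once(42))

This guarantee is what breaks at commercial scale: once requests are batched with other users' traffic on GPUs/TPUs, floating-point reduction order varies and bit-exact reproduction is no longer assured.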
krackers 3 hours ago
>they "die" by deleting the conversation A lot of the trickiness is that if you believe they're conscious, it's clearly not a "continuous" form of consciousness. Because the transcript by itself is just a transcript. (We don't consider novels conscious even though they're transcripts in a similar way). Either you say they're alive only when generating text, or you consider that input from environment a necessary component and so consider the entire "back/forth conversation dynamic unfolding" necessary for the consciousness. | ||||||||
reliablereason 5 hours ago
Most chatbots are not trained to have or emulate emotions, so pain or fear of death is nonexistent. Therefore killing them and/or using them as slaves is not a moral issue. That's how I reason.

On another point, LLMs are not conscious; if anything is conscious, it is something being modeled inside the network. Basically, if an LLM simulates a conscious entity, that doesn't mean the LLM itself is conscious; stating that it does is making some type of category error. So the fact that LLMs are just useful statistical generators would not mean that sentience could not appear out of them.
strogonoff 5 hours ago
If LLMs are just (sometimes) useful statistical generators, there is the problem that they are basically tools operated to create derivative works commercially, at scale. Some tend to paint this as a non-issue by claiming they are sentient ("a human is allowed to read a book and be inspired by it, so an LLM should be too"), but they clearly have not thought through the implications.
Hnrobert42 5 hours ago
We kill and eat conscious animals all the time. I ate some today. Killing conscious beings is not something our society has a problem with.