▲ maybewhenthesun · 6 hours ago
People have been saying for aeons that consciousness originates in the (mammalian) cortex and not in the brainstem. To justify killing all sorts of animals ;-) The whole thing makes one thing extremely clear: people are very good at moving goalposts. We've blasted past the Turing test for all practical purposes, but we moved the definition of 'true intelligence'. Consciousness and intelligence have long been seen as highly correlated, or even as the same thing. But now we have need of a separation between the two.

If we eventually (we're not there yet, I think) create a truly intelligent AI, it will probably be a long time before people accept that creating an intelligent being means it should have 'rights' as well. We're definitely not there yet, but at what point does turning off an AI become the same as killing a being? I think that's not being talked about enough.

Sure, LLMs are just prediction engines. But so are we: our brains are prediction engines tuned by evolution to make the best possible prediction of the near future and maximize survival. We are definitely conscious. But is a housefly conscious? What makes the difference? It's hard to tell. On the other hand, an AI has no evolutionary reason to have a concept of fear or suffering, so maybe it's more like the Douglas Adams creature that doesn't mind being killed?
▲ slibhb · an hour ago
> If we eventually (we're not there yet, I think) create a truly intelligent AI, it will probably be a long time before people accept that creating an intelligent being means it should have 'rights' as well.

In my view, the best LLMs clearly pass the bar for intelligence. I highly doubt they have consciousness. So the revelation of LLMs is that consciousness is not necessary for intelligence.
▲ Tharre · 5 hours ago
LLMs still do not pass the Turing test as it is commonly understood. Ask the right questions, and it becomes apparent very quickly which party is the machine and which is the human. Hell, there are enough people on here who could probably tell them apart just from the way LLMs write.

But it's also easy to argue that LLMs do pass the Turing test, simply because the test is so vague. How many questions can I ask? What's the success threshold needed to 'pass'? How familiar is the interrogator with the technology involved? It's easy to claim that goalposts have been moved when nobody even knew where they stood to begin with.

Ultimately it's impossible to rigorously define something that's so poorly understood. But if we understand consciousness as something that humans uniquely possess, it's hard to imagine that intelligence alone is enough. You at least also need some form of linear (in time) memory and the ability to change as a result of that memory.

And that's where silicon and biological computers differ: it's easy to copy/save/restore the contents of a digital computer, but it's far outside our capabilities to do the same with any complex biological system. That same limitation makes it very difficult for us humans to even imagine how consciousness could exist without this property of being 'unique', of being uncopiable, of existing in linear time without any jumps or resets. Perhaps consciousness doesn't make sense at all without that.
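As a minimal sketch of that asymmetry (assuming, purely for illustration, that an agent's entire "experience" is just serializable data such as a conversation history), saving and rolling back a digital system is a one-liner, and the rollback is invisible to the restored copy:

```python
import copy

# Hypothetical agent state: for a digital system, the whole "life
# history" is just data, so it can be snapshotted and restored at will.
session = {"history": ["user: hello", "ai: hi there"]}

snapshot = copy.deepcopy(session)            # "save": capture the full state

session["history"].append("user: goodbye")  # state moves forward in time

session = snapshot                           # "restore": for this copy of the
                                             # agent, the later exchange
                                             # simply never happened
assert session["history"] == ["user: hello", "ai: hi there"]
```

Nothing remotely like this save/restore operation exists for a biological brain, which is exactly the uncopiable, linear-in-time property described above.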
▲ sshine · 5 hours ago
> If we eventually [...] create a truly intelligent AI, it will probably be a long time before people accept [...]

When this happens, it won't matter much what humans think. I know what I'd do:
▲ altruios · 5 hours ago
> but at what point does turning off an AI become the same as killing a being?

...When you can't turn it back on? Otherwise 'suspending' is the better word.