juleiie 2 hours ago

Useful intelligence does not require sentience.

As far as I know, no current LLM is sentient, nor is any likely to become so in the near future.

I also do not assume that so-called AGI would be sentient — merely that it would be an intellectual worker with human-level skill.

In the absence of ethical dilemmas of this calibre for the foreseeable future, let's focus on the economic side of things in this particular comment chain.

krainboltgreene 2 hours ago | parent [-]

It must be very comforting to be able to decide that a "human level worker" isn't sentient.

It makes things so clean.

juleiie 2 hours ago | parent [-]

LLMs cannot possess consciousness for three reasons: they execute as a sequence of Transformer blocks with extremely limited information exchange, these blocks are simple feed-forward networks with no recurrent connections, and the computer hardware follows a modular design.

Shardlow & Przybyła, "Deanthropomorphising NLP: Can a Language Model Be Conscious?" (PLOS One, 2024)

Nature: "There is no such thing as conscious artificial intelligence" (2025)

They argue that the association between consciousness and LLMs is deeply flawed, and that mathematical algorithms implemented on graphics cards cannot become conscious because they lack a complex biological substrate. They also introduce the useful concept of "semantic pareidolia": our tendency to pattern-match consciousness onto things that merely talk convincingly.

They make a strong argument, and I think they are correct. But as I said originally, sentience and useful intelligence are two different things.

krainboltgreene 2 hours ago | parent [-]

You think I'm arguing that LLMs are sentient. I'm not. I never mentioned LLMs.

juleiie 2 hours ago | parent [-]

You are making a strawman about sentience when I was talking about the economic impact of abundant intelligence. I should just ignore it, but I was curious — yet you have nothing valuable to say aside from common misconceptions conflating the two. Thanks for trolling, I guess.