juleiie 2 hours ago

LLMs cannot possess consciousness for three reasons: they execute as a sequence of Transformer blocks with extremely limited information exchange, these blocks are simple feed-forward networks with no recurrent connections, and the computer hardware follows a modular design.

Shardlow & Przybyła, "Deanthropomorphising NLP: Can a Language Model Be Conscious?" (PLOS One, 2024)

Nature: "There is no such thing as conscious artificial intelligence" (2025)

They argue that the association between consciousness and LLMs is deeply flawed, and that mathematical algorithms implemented on graphics cards cannot become conscious because they lack a complex biological substrate. They also introduce the useful concept of "semantic pareidolia" - we pattern-match consciousness onto things that merely talk convincingly.
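The "no recurrent connections" point can be illustrated concretely. The sketch below (my own minimal toy, not from the paper; layer norm and multi-head structure omitted, all names and shapes assumed) shows that a Transformer block is a pure feed-forward function: unlike an RNN, no hidden state survives between calls, so the same input always maps to the same output.

```python
import numpy as np

def attention(x, Wq, Wk, Wv):
    # Single-head self-attention: a stateless map from input tokens to outputs.
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v

def transformer_block(x, params):
    # Self-attention + ReLU MLP, each with a residual connection.
    # The output depends only on the current input x -- no carried state.
    Wq, Wk, Wv, W1, W2 = params
    x = x + attention(x, Wq, Wk, Wv)
    return x + np.maximum(x @ W1, 0) @ W2

rng = np.random.default_rng(0)
d = 8
params = tuple(rng.normal(size=(d, d)) * 0.1 for _ in range(5))
x = rng.normal(size=(4, d))  # 4 tokens, model dimension d

# Calling twice with the same input yields identical outputs: stateless.
y1 = transformer_block(x, params)
y2 = transformer_block(x, params)
assert np.allclose(y1, y2)
```

Whether statelessness within a forward pass bears on consciousness is exactly what the cited papers argue; the sketch only makes the architectural premise concrete.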

They make a strong argument, and I think they are correct. But, as I said originally, these are really two different things.

krainboltgreene 2 hours ago | parent [-]

You think I'm arguing that LLMs are sentient. I'm not. I never mentioned LLMs.

juleiie 2 hours ago | parent [-]

You are making a straw man about sentience when I was talking about the economic impact of abundant intelligence. I should have just ignored it, but I was curious; yet you have nothing valuable to say aside from common misconceptions conflating the two. Thanks for trolling, I guess.