The Abstraction Fallacy: Why AI Can Simulate but Not Instantiate Consciousness(philarchive.org)
2 points by shervinafshar 9 hours ago | 1 comments
_wire_ an hour ago | parent [-]

Simulate, instantiate, bloviate, master...

There's a persistent editorial drumbeat that the "intelligence" in AI is literal but falling slightly short due to a missing something, implying that this something will soon be developed and sentience will appear, absent any criteria for qualifying it beyond "it looks like a duck and walks like a duck."

The arena of model benchmarking is polluted by a related ideology: that as you keep adjusting the training and data collection to accommodate edge cases, the models become more innately intelligent.

Yet obvious thought experiments go unconsidered: A key marker of intelligence is self-guided adaptation, and this adaptation is not simply about an autonomous agent's arc across a decision tree of domain-specific contingencies; life-forms manifest their autonomy from the molecules on up.

As a computer is a made-thing, not a growing-thing, there's no way to consider its autonomy other than as a simulation. So what engineering can account for its spontaneously developing sentience? Such sentience would have to be added to it by its builders, by the contingency of being made.

Which brings this thought experiment to a challenging conclusion: If the engineering of AI can account for its sentience, there's an astonishing poverty of explanation in the theory that makes it possible. If the engineering is understood, given the lack of any theory of sentience, why don't the engineers simply put the kibosh on silly claims of sentience? And if the engineering isn't understood... why aren't claims made about the suitability of these designs for reasoning qualified by the scope and limits of the designs, as is typical practice in every serious engineering discipline?