adamzwasserman 2 days ago
I've written a full response to Somers' piece, "The Case That A.I. Is Thinking: What The New Yorker Missed": https://emusings.substack.com/p/the-case-that-ai-is-thinking...

The core argument: when you apply the same techniques (transformers, gradient descent, next-token prediction) to domains other than language, they fail to produce anything resembling "understanding." Vision had a 50+ year head start, yet LLMs leapfrogged it in 3 years. That timeline gap is the smoking gun. The magic isn't in the neural architecture; it's in language itself, which exhibits fractal structure and self-similarity across scales. LLMs navigate a pre-existing map with extraordinary regularity. They never touch the territory.
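To make the "same techniques" claim concrete, here is a minimal PyTorch sketch of the recipe being discussed: a transformer trained by gradient descent on next-token prediction. Everything in it (the tiny model, the sizes, the random token stream) is an illustrative stand-in, not code from the linked post; the point is that nothing in the objective is language-specific.

```python
# A minimal sketch of the shared recipe: a transformer trained by
# gradient descent on next-token prediction. Model, sizes, and data
# below are illustrative stand-ins, not anything from the post.
import torch
import torch.nn as nn

VOCAB, DIM, CTX = 256, 64, 32   # hypothetical toy sizes

class TinyNextTokenModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.block = nn.TransformerEncoderLayer(d_model=DIM, nhead=4,
                                                batch_first=True)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, ids):  # ids: (batch, CTX) integer tokens
        # Causal mask so position t only attends to positions <= t.
        mask = nn.Transformer.generate_square_subsequent_mask(ids.size(1))
        h = self.block(self.embed(ids), src_mask=mask)
        return self.head(h)  # logits over the next token at each position

model = TinyNextTokenModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# The "domain" enters only here: a stream of discrete token ids. Text,
# quantized image patches, or audio codes all look identical to the loss.
ids = torch.randint(0, VOCAB, (8, CTX + 1))
logits = model(ids[:, :-1])                      # predict token t+1 from <= t
loss = nn.functional.cross_entropy(logits.reshape(-1, VOCAB),
                                   ids[:, 1:].reshape(-1))
loss.backward()
opt.step()
```

The objective is domain-agnostic; whether it yields "understanding" outside language is exactly what's in dispute here.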
scarmig 2 days ago
The core objection I'd have to your argument: humans don't have privileged access to the territory either. Neurons have no metaphysical superpower that lets them reach into the True Reality; all we have are maps, encoded in our neural circuitry by learning rules that evolution developed because those learned maps lead to greater reproductive success. If direct access to reality is what's needed, then it's true that machines are incapable of thinking; but then so are humans.