Symmetry 3 days ago

"The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." - Edsger Dijkstra

oergiR 3 days ago | parent | next [-]

There is more to this quote than you might think.

Grammatically, in English the verb "swim" requires an "animate subject", i.e. a living being, like a human or an animal. So the question of whether a submarine can swim is about grammar. In Russian (IIRC), submarines can swim just fine, because the verb does not have this animacy requirement. Crucially, the question is not about whether or how a submarine propels itself.

Likewise, in English at least, the verb "think" requires an animate subject. The question of whether a machine can think is about whether you consider it to be alive. Again, whether or how the machine generates its output is not material to the question.

brianpan 3 days ago | parent [-]

I don't think the distinction is animate/inanimate.

Submarines sail because they are nautical vessels. Wind-up bathtub swimmers swim because they look like they are swimming.

Neither are animate objects.

In a browser, if you click a button and it takes a while to load, your phone is thinking.

viccis 3 days ago | parent | prev | next [-]

He was famously (and, I'm realizing more and more, correctly) averse to anthropomorphizing computing concepts.

pegasus 3 days ago | parent | prev | next [-]

I disagree. The question is really about whether inference is in principle as powerful as human thinking, and so would deserve the same label. Which is not at all a boring question. It's equivalent to asking whether current architectures are enough to reach AGI (I myself doubt this).

esafak 3 days ago | parent | prev | next [-]

I think it is, though, because it challenges our belief that only biological entities can think, and thinking is a core part of our identity, unlike swimming.

roadside_picnic 3 days ago | parent | next [-]

> our belief that only biological entities can think

Whose belief is that?

As a computer scientist, my perspective is that these are all different methods of computing, and we have a pretty solid foundation in computability (though it does seem a bit frightening how many present-day devs have no background in the foundations of the Theory of Computation). There's a pretty common naive belief that "thinking" is somehow more than or distinct from computing, but in actuality there are very few coherent arguments for that case.

If, for you, thinking is distinct from computing, then you need to be more specific about what thinking means. It's quite possible that "only biological entities can think" because you are quietly making a tautological statement by simply defining "thinking" as "the biological process of computation".

> thinking is a core part of our identity, unlike swimming.

What does this mean? I'm pretty sure that for most fish, swimming is pretty core to their existence. You seem to be assuming a lot of metaphysical properties of what you consider "thinking", such that it seems nearly impossible to determine whether anything "thinks" at all.

goatlover 3 days ago | parent | next [-]

One argument for thinking being different from computing is that thought is fundamentally embodied, conscious and metaphorical. Computing would be an abstracted activity from thinking that we've automated with machines.

roadside_picnic 3 days ago | parent [-]

> embodied, conscious and metaphorical

Now you have 3 terms you also need to provide proper definitions of. Having studied plenty of analytical philosophy prior to computer science, I can tell you that at least the conscious option is going to trip you up. I imagine the others will as well.

On top of that, these, at least at my first guess, seem to be just labeling different models of computation (i.e. computation with these properties is "thinking") but it's not clear why it would be meaningful for a specific implementation of computation to have these properties. Are there tasks that are non-computable that are "thinkable"? And again it sounds like you're wandering into tautology land.

kapone 3 days ago | parent | prev [-]

[dead]

energy123 3 days ago | parent | prev [-]

The point is that both are debates about definitions of words so it's extremely boring.

throwawayq3423 3 days ago | parent | next [-]

except for the implications of one word over another are world-changing

pegasus 3 days ago | parent | prev [-]

They can be made boring by reducing them to an arbitrary choice of definition of the word "thinking", but the question is really about whether inference is in principle as powerful as human thinking, and so would deserve the same label. Which is not at all a boring question. It's equivalent to asking whether current architectures are enough to reach AGI.

roadside_picnic 3 days ago | parent [-]

> inference is in principle as powerful as human thinking

There is currently zero evidence to suggest that human thinking violates any of the basic principles of the theory of computation or extends the existing limits of computability.

> Which is not at all a boring question.

It is, because you aren't introducing any evidence to theoretically challenge what we've known about computation for almost 100 years now.

pegasus 3 days ago | parent [-]

> There is currently zero evidence...

Way smarter people than both of us disagree, among them Roger Penrose, who wrote two books on this very subject.

See also my comment here: https://news.ycombinator.com/item?id=45804258

"There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy"

roadside_picnic 3 days ago | parent [-]

Can you just point me to the concrete examples (the most compelling examples in the book would work) where we can see "thinking" that performs something that is currently considered to be beyond the limits of computation?

I never claimed no one speculates that's the case, I claimed there was no evidence. Just cite me a concrete example where the human mind is capable of computing something that violates the theory of computation.

> "There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy"

Fully agree, but you are specifically discussing philosophical statements. And the fact that the only response you have is to continue to pile up undefined terms and hand-wave metaphysics doesn't do anything to further your point.

You believe that computing machines lack something magical that you can't describe, which makes them different from humans. I can't object to your feelings about that, but there is literally nothing to discuss if you can't even define what those things are; hence this discussion is, as the original parent comment mentioned, "extremely boring".

pegasus 2 days ago | parent [-]

The kind of hard evidence you're asking for doesn't exist for either side of the equation. There is no computational theory of the mind which we could test "in the field" to see if it indeed models all forms of human expression. All we have are limited systems which can compete with humans in certain circumscribed domains. So, the jury's very much still out on this question. But a lot of people (especially here on HN) just assume the null hypothesis to be the computable nature of the brain and, indeed, the universe at large. Basically, Digital Physics [1] or something akin to it. Hence, only something that deviates from this more or less consciously adhered-to ontology is considered in need of proof.

What keeps things interesting is that there are arguments (on both sides) which everyone can weigh against each other so as to arrive at their own conclusions. But that requires genuine curiosity, not just an interest in confirming one's own dogmas. Seems like you might be more of this latter persuasion, but in case you are not, I listed a couple of references which you could explore at your leisure.

I also pointed out that one of the (if not the) greatest physicists alive wrote two books on a subject which you consider extremely boring. I would hope any reasonable, non-narcissistic person would conclude that they must be missing out on something. It's not like Roger Penrose is so bored with his life and the many fascinating open questions he could apply his redoubtable mind to that he had to pick this particular obviously settled one. I'm not saying you should come to the same conclusions as him, just that you should plant a little doubt around how exactly "extremely boring" these questions might be :)

[1] https://en.wikipedia.org/wiki/Digital_physics

roadside_picnic 2 days ago | parent [-]

> There is no computational theory of the mind which we could test "in the field" to see if it indeed models all forms of human expression.

I suspect the core issue here isn't my "lack of curiosity" but your lack of understanding about the theory of computation.

The theory of computation builds up various mathematical models and rules for how things are computed, not just by computers: how things are computed, period. The theory of computation holds as much for digital computers as it does for the information processing of yeast in a vat.

Evidence that human minds (or anything, really) do something other than computation would be as simple as "look, we can solve the halting problem" or "this task can be solved in polynomial time by humans". Without evidence like that, there are no grounds for attacking the fundamental theory.

> What keeps things interesting is that there are arguments (on both sides) which everyone can weigh against each other so as to arrive at their own conclusions.

Conclusions about what? You haven't even stated your core hypothesis. Is it "human brains are different from computers"? Sure, that's obvious, but are they different in an interesting way? If it's "computers can think!", then you just need to describe what thinking is.

> how exactly "extremely boring" these questions might be :)

Again, you're misunderstanding: my point is that you haven't even asked the question clearly. There is nothing for me to have an opinion about, which is why it is boring. "Can machines think?" is the same as asking "Can machines smerve?" If you ask "what do you mean by 'smerve'?" and I say "see, you're not creative/open-minded enough about smerving!", you would likely find that conversation uninteresting, especially if I refused to define 'smerving' and just kept making arguments from authority and criticizing your imaginative capabilities.

pegasus 2 days ago | parent [-]

In your previous comment, you seemed to have no problem grasping what I mean by "can computers think?", namely (and for the last time): "can computers emulate the full range of human thinking?", i.e. "is human thinking computational?". My point is that this is an open, and furthermore fascinating, question, not at all boring. There are arguments on each side, and no conclusive evidence which can settle the question. Even in this last comment of yours you seem to understand this, because you again ask for hard evidence of non-computational aspects of human cognition, but then in the last paragraph you regress to your complaint of "what are we even arguing about?". I'm guessing you realize you're repeating yourself, so you try to throw in everything you can think of to make yourself feel like you've won the argument or something. But it's dishonest and disrespectful.

And yes, you are right that we can imagine ways a physical system could provably be shown to go beyond the limits of classical or even quantum computation. "Look, we can solve the halting problem" comes close to the core of the problem, but think a bit about what that would entail. (It's obvious to me you have never thought deeply about these issues.) The halting problem by definition cannot have a formal answer: there cannot be some mathematical equation or procedure which, given a Turing machine, decides in bounded time whether that machine ultimately stops or not. This is exactly what Alan Turing showed, so what you are naively asking for is impossible. But this in no way proves that physical processes are computational. It is easy to imagine deterministic systems which are non-computable.
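(Editor's aside: the impossibility paraphrased above has a compact illustration. Below is a minimal Python sketch, not from the thread, of Turing's diagonal argument; `halts` is a hypothetical decider that, by this very construction, no one can actually implement.)

```python
def make_contrarian(halts):
    """Given a claimed halting decider halts(prog, arg) -> bool,
    build a program that defeats it on at least one input."""
    def contrarian(prog):
        # Do the opposite of whatever the decider predicts
        # for `prog` run on its own source.
        if halts(prog, prog):
            while True:      # loop forever if predicted to halt
                pass
        return None          # halt if predicted to loop
    return contrarian

# Feed any concrete decider its own nemesis: whatever it predicts
# about contrarian(contrarian) is wrong by construction.
halts_never = lambda prog, arg: False   # a (wrong) decider: "nothing halts"
contrarian = make_contrarian(halts_never)
result = contrarian(contrarian)          # decider says this loops; it halts
```

The contradiction generalizes: for every candidate `halts`, the derived `contrarian` is a counterexample, so no total, always-correct decider exists.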

So, the only way one could conceivably "solve the halting problem" is to solve it for certain machines and classes of machines, one at a time. But since a human life is finite, this could never happen in practice. But if you look at the whole of humanity together, and more specifically its mathematical output over centuries, as one cognitive activity, it would seem that yes, we can indeed solve the halting problem. I.e., so far we haven't encountered any hurdles so intimidating that we just couldn't clear them, or at least begin to clear them. This is, in fact, one of Penrose's arguments in his books. It's clearly and necessarily (because of Turing's theorem) not an airtight argument, and there are many counter-arguments and counter-counter-arguments and so on; you'd have to get into the weeds to actually have a somewhat informed opinion on this matter. To me it definitely moves the needle towards the idea that there must be a non-computational aspect to human cognition, but that's in addition to other clues, like pondering certain creative experiences or the phenomenon of intuition, a form of apparently direct seeing into the nature of things which Penrose also discusses, as does the other book I mentioned in another comment on this page. One of the most mind-bending examples is Ramanujan's insights, which seemed to arrive to him, often in dreams, fully formed and without proof or justification, as if from some future mathematical century.

In conclusion, may I remark that I hope I'm talking to a teenager: somewhat overexcited, petulant and overconfident, but bright and with the capacity to change and grow nonetheless. I only answered in the hope that this is the case, since the alternative is too depressing to contemplate. Look up the clues I left you. ChatGPT makes it so easy these days, as long as you're open to having your dogmas questioned. But I am personally signing off from this conversation now, so know that whatever you might rashly mash together on your keyboard in anger will be akin to that proverbial tree falling in a forest empty of listening subjects. Wishing you all the best otherwise.

PS: machines can totally smerve! :)

handfuloflight 3 days ago | parent | prev [-]

What an oversimplification. Thinking computers can create more swimming submarines, but the inverse is not possible. Swimming is a closed solution; thinking is a meta-solution.

yongjik 3 days ago | parent | next [-]

Then the interesting question is whether computers can create more (better?) submarines, not whether they are thinking.

gwd 3 days ago | parent | prev | next [-]

I think you missed the point of that quote. Birds fly, and airplanes fly; fish swim but submarines don't. It's an accident of language that we define "swim" in a way that excludes what submarines do. They move about under their own power under the water, so it's not very interesting to ask whether they "swim" or not.

Most people I've talked to who insist that LLMs aren't "thinking" turn out to have a similar perspective: "thinking" means you have to have semantics, semantics require meaning, meaning requires consciousness, consciousness is a property that only certain biological brains have. Some go further and claim that reason, which (in their definition) is something only human brains have, is also required for semantics. If that's how we define the word "think", then of course computers cannot be thinking, because you've defined the word "think" in a way that excludes them.

And, like Dijkstra, I find that discussion uninteresting. If you want to define "think" that way, fine, but then using that definition to insist LLMs can't do a thing because it can't "think" is like insisting that a submarine can't cross the ocean because it can't "swim".

goatlover 3 days ago | parent | next [-]

Reading the quote in context seems to indicate Dijkstra meant something else. His article is a complaint about overselling computers as doing or augmenting the thinking of humans. It's funny how the quote was lifted out of an article and became famous on its own.

https://www.cs.utexas.edu/~EWD/transcriptions/EWD08xx/EWD867...

handfuloflight 3 days ago | parent | prev | next [-]

Then you're missing the point of my rebuttal. You say submarines don't swim [like fish] despite both moving through water; the only distinction is mechanism. Can AI recursively create new capabilities, like thinking does, or just execute tasks, like submarines do? That's the question.

gwd 2 days ago | parent [-]

> Can AI recursively create new capabilities like thinking does, or just execute tasks like submarines do? That's the question.

Given my experience with LLMs, I think that they could, but that they're handicapped by certain things at the moment. Haven't you ever met someone who was extremely knowledgeable and perceptive at certain tasks, but just couldn't keep on target for 5 minutes? If you can act as a buffer around them, to mitigate their weak points, they can be a really valuable collaborator. And sometimes people like that, if given the right external structure (and sometimes medication), turn out to be really capable in their own right.

Unfortunately it's really difficult to give you a sense of this, without either going into way too much detail, or speaking in generalities. The simpler the example, the less impressive it is.

But here's a simple example anyway. I'm developing a language-learning webapp. There's a menu that lets you switch between the several languages you're working on, which originally just had the language name: "Mandarin", "Japanese", "Ancient Greek". I thought an easy way to make it nicer would be to show the flag associated with each language: the PRC flag for Mandarin, the Japanese flag for Japanese, etc. What to do for Ancient Greek? Well, let me see how it looks and then maybe I can figure something out.

So I asked Claude for what I wanted. As expected, it put the PRC and Japanese flags for the first two languages. I expected it to just put a modern Greek flag, or a question mark, or some other gibberish. But it put an emoji of a building with classical Greek columns, which is absolutely perfect.

My language-learning system is unusual, so without context Claude assumes I'm making something like what already exists: Duolingo or Anki or something. So I invested some time creating a document that lays it out in detail. Now when I include that file as context, Claude seems to genuinely understand what I'm trying to accomplish in a way it didn't before, and often comes up with creative new use cases. For example, at some point I was having it summarize some marketing copy for the website; in a section on educational institutions, it added a bullet point for a use I'd never thought of.

The fact that they can't learn things on-line, that they have context rot, that there's still a high amount of variance in their output -- all of these, it seems to me, undermine their ability to do things, similar to the way some people's ADHD undermines their ability to excel. But it seems to me the spark of thinking and of creativity is there.

EDIT: Apparently HN doesn't like the emojis. Here's a link to the classical building emoji: https://www.compart.com/en/unicode/U+1F3DB

3 days ago | parent | prev [-]
[deleted]
npinsker 3 days ago | parent | prev [-]

That’s a great answer to GP’s question!

DavidPiper 3 days ago | parent [-]

It's also nonsense. (Swimming and thinking are both human capabilities, not solutions to problems.)

But of course here we are back in the endless semantic debate about what "thinking" is, exactly to the GP's (and Edsger Dijkstra's) point.

handfuloflight 3 days ago | parent [-]

Swimming and thinking being 'human capabilities' doesn't preclude them from also being solutions to evolutionary problems: aquatic locomotion and adaptive problem solving, respectively.

And pointing out that we're in a 'semantic debate' while simultaneously insisting on your own semantic framework (capabilities vs solutions) is exactly the move you're critiquing.

DavidPiper 3 days ago | parent [-]

> And pointing out that we're in a 'semantic debate' while simultaneously insisting on your own semantic framework (capabilities vs solutions) is exactly the move you're critiquing.

I know, that's the point I'm making.