| ▲ | Uehreka 3 days ago |
| > I think intelligence [...] That very phrasing betrays the problem with the word: there is no consensus on what intelligence is, what a clear test for it would be, or whether such a test could even exist. There are only people on the internet with personal theories and opinions. So when people say AI is not intelligent, my next question is whether rocks, trees, flies, dogs, dolphins, humans and “all humans” are intelligent. People answer yes/no immediately, in a tone that makes it sound like the answer must be obvious, and yet their answers frequently disagree with each other. We do not have a consensus definition of intelligence that can be used to include some things and exclude others. |
|
| ▲ | ggm 3 days ago | parent | next [-] |
| This exemplifies why I think Hinton was on dangerous ground. He is normally far more cautious in his use of language. |
|
| ▲ | imiric 2 days ago | parent | prev | next [-] |
| It's frustrating to read this type of response whenever this topic is raised. It does nothing but derail the conversation into absurdism. Yes, we don't have clear definitions of intelligence, just as we don't for life and many other fundamental concepts. And yet it's possible to discuss these topics within specific contexts, based on a generally and colloquially shared definition, as long as we're willing to talk about this in good faith with the intention of arriving at some interesting conclusions, and not trying to "win" an argument. Given this, it is safe to assert that we haven't invented artificial intelligence. We have invented something that mimics it very well, which will be useful to us in many domains, but calling it intelligence is a marketing tactic promoted by people who have something to gain from that narrative. |
| |
| ▲ | Uehreka 2 days ago | parent | next [-] | | > It does nothing but derail the conversation into absurdism. The conversation (about whether AI is “intelligent”) was already absurd; I'm just pointing it out ;) The more important conversation is about whether AI is useful, dangerous, and/or worth it. If AI is competent enough at a task to replace a human for 1/10 the cost, it doesn't really matter whether it “has an immortal soul” or “responds to sensory stimuli” or “can modify its weights in real time”; we need to be talking about what that job loss means for society. That's my main frustration: the “is it intelligent” debate devolves into pointless, unsettleable philosophical questions and sucks up all the oxygen, while the actual things of consequence go undiscussed. | |
| ▲ | ggm 2 days ago | parent | prev [-] | | I am doing this because normally Hinton is my go-to for cautious, useful input to a debate. When he makes this kind of sweeping statement, my hackles go up. The rest of the article had nothing I didn't expect, but I did NOT expect him to make such a sweeping assertion. They're useful. They're not intelligent. He invited the reproach. |
|
|
| ▲ | peterashford 2 days ago | parent | prev | next [-] |
| We have a model for what intelligence is: what humans do. If we produce a human-like AI, I think we'll agree it's intelligent. The fact that there are degrees of intelligence (dogs > flies) isn't that big of an issue, imo. It's like the night/day argument: just because we can't point to a clear cut-off point between the two doesn't mean they aren't distinct concepts. The same goes for intelligence. It doesn't require consensus, in just the same way that "is it night now?" doesn't require consensus. |
| |
| ▲ | ggm 2 days ago | parent | next [-] | | > I think we'll agree it's intelligent. If there's one thing I've found that never comes true for me, it's almost any sentence of substantive opinion about "philosophy" that starts with "I think we'll agree". And I do think this AI/AGI question is a philosophy question; I don't know if you'll agree with that. Whilst your analogy has strong elements of "consensus not required", I am less sure that applies right now to what we think about AI/AGI. I think consensus is pretty... important, and also absent. | |
| ▲ | shkkmo 2 days ago | parent | prev [-] | | > We have a model for what intelligence is: what humans do. At what point does a human become intelligent? Is a 12-cell embryo intelligent? Is a newborn intelligent? Is a 1-year-old intelligent? > It's like the night/day argument: just because we can't point to a clear cut-off point Um... what? There may be more than one of them, but precise definitions exist for the transitions between day and night. I think that is a very poor analogy for intelligence. There are not just degrees of intelligence but different kinds. It is easier for us to understand and evaluate intelligence that is more similar to ours, and it becomes increasingly harder the more alien it becomes. Given that, I don't see how you could reject the assertion that LLMs have some kind of intelligence. |
|
|
| ▲ | throw310822 2 days ago | parent | prev [-] |
| The annoying thing is that we already had an operational definition of intelligence that worked perfectly well for seventy years: the Turing test. We've only become dissatisfied with it because we don't like the fact that machines pass it. |
| |
| ▲ | imiric 2 days ago | parent [-] | | The Turing test was never meant to measure intelligence, let alone define it. It is an "imitation game" that measures the ability of machines to mimic intelligent behavior well enough to fool humans into believing they're interacting with another human, and a thought experiment about the practical implications of that. Machines have arguably been able to do this for decades. This is interesting in its own right, and has propelled the computing industry since the test was proposed, but it is not a measurement of intelligence. The reality is that we don't have a good measurement of intelligence, and we struggle to define it to begin with. | | |
| ▲ | throw310822 2 days ago | parent [-] | | > The Turing test was never meant to measure intelligence, let alone define it. Original proposal: "I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous [...] Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words." Clearly Turing is saying "we cannot precisely define what thinking means, so let's instead check if we can tell apart a human and a machine when they communicate freely through a terminal". It's not about fooling humans (what would be the point of that?) but about replacing the ambiguous question "can they think" with an operational definition that can be tested unambiguously. His point is that a machine that passes the test is "as good as if it were thinking". > Machines have arguably been able to do this for decades. Absolutely not, and it's surprisingly uninformed to claim so. | | |
|
|