peterashford | 3 days ago:
We have a model for what intelligence is - what humans do. If we produce a human-like AI, I think we'll agree it's intelligent. The fact that there are degrees of intelligence (dogs > flies) isn't that big of an issue, imo. It's logically the night-versus-day argument - just because we can't point to a clear cut-off between those two concepts doesn't mean they aren't distinct concepts. The same follows for intelligence. It doesn't require consensus, in just the same way that "is it night now?" doesn't require consensus.
ggm | 3 days ago:
> I think we'll agree it's intelligent.

If there's one thing I've found never comes true for me, it's almost any sentence of substantive opinion about "philosophy" that starts with "I think we'll agree". And I do think this AI/AGI question is a philosophy question. I don't know if you'll agree with that.

Whilst your analogy has strong elements of "consensus not required", I am less sure that applies right now to what we think about AI/AGI. I think consensus is pretty important here, and also absent.
shkkmo | 2 days ago:
> We have a model for what intelligence is - what humans do.

At what point does a human become intelligent? Is a 12-cell embryo intelligent? Is a newborn intelligent? Is a 1-year-old intelligent?

> It's logically the night-versus-day argument - just because we can't point to a clear cut-off

Um... what? There may be more than one of them, but precise definitions exist for the transitions between day and night. I think that is a very poor analogy to intelligence.

There are not just degrees of intelligence but different kinds. It is easier for us to understand and evaluate intelligence that is more similar to ours, and it becomes increasingly harder the more alien the intelligence becomes. Given that, I don't see how you could reject the assertion that LLMs have some kind of intelligence.