grey-area 3 hours ago
It’s just not intelligent or reasoning, and this sort of question exposes that more clearly. Surely anyone who has used these tools is familiar with the sometimes insane things they try to do (deleting tests, writing incorrect code, changing the wrong files, etc.). They get amazingly far by predicting the most likely response from a large corpus, but it has become very clear that this approach has significant limitations and is not general AI, nor in my view will it lead to it. There is no model of the world here, only a model of the words in the corpus. For many simple, well-documented tasks that is enough, but it is not reasoning. I don’t really understand why this is so hard to accept.
raddan 40 minutes ago | parent | next
> I don’t really understand why this is so hard to accept.

I struggle with the same question. My current hypothesis is a kind of wishful thinking: people want to believe that the future is here. Combined with the fact that humans tend to anthropomorphize just about everything, it’s simply a really good story that people can’t let go of. People behave similarly with their pets, despite, e.g., plenty of evidence that the mental state of one’s dog is nothing like that of a human.
fauigerzigerk 2 hours ago | parent | prev
I agree completely. I’m tempted to call it a clear falsification of the “reasoning” claim that some of these models carry in their names. But it’s possible that an early cost-optimisation step prevents a short, seemingly simple question from ever being passed through to the system’s reasoning machinery. However, I haven’t read anything about current model architectures suggesting that their so-called “reasoning” is anything other than more elaborate pattern matching. So these errors would still happen, perhaps just not quite as egregiously.