In my view, absolutely yes. Thinking is a means to an end: it is about acting on those motivations by abstracting, recalling past experience, planning, exploring, innovating. Without any motivation there is nothing novel about the process. It really is just statistical approximation, "learning" at best, but definitely not "thinking".
chpatrick (5 days ago): Again, the problem is that what "thinking" means is totally vague. To me, if I can ask a computer a difficult question it hasn't seen before and it can give a correct answer, it's thinking. I don't need it to have a full and colorful human life to do that.
barnacs (5 days ago): But it's only able to answer the question because it has been trained on all the text in existence written by humans, precisely with the purpose of mimicking human language use. It is the humans who produced the training data, and then provided feedback in the form of reinforcement, who did all the "thinking". Even if it can extrapolate to some degree (although that's where "hallucinations" tend to become obvious), it could never, for example, invent a game like chess or a social construct like a legal system. Those require motivations like "boredom", "being social", or a "need for safety".
chpatrick (5 days ago): Humans are also trained on data made by humans.

> it could never, for example, invent a game like chess or a social construct like a legal system. Those require motivations like "boredom", "being social", having a "need for safety".

That's creativity, which is a different question from thinking.
bluefirebrand (4 days ago):

> Humans are also trained on data made by humans

Humans invent new data; humans observe things and create new data. That's where all the stuff the LLMs are trained on came from.

> That's creativity which is a different question from thinking

It's not really, though. The process is the same, or similar enough, don't you think?
chpatrick (4 days ago): I disagree. Creativity is coming up with something out of the blue. Thinking is using what you know to come to a logical conclusion. LLMs so far are not very good at the former, but they are getting pretty damn good at the latter.
barnacs (4 days ago):

> Thinking is using what you know to come to a logical conclusion

What LLMs do is use what they have _seen_ to come to a _statistical_ conclusion, just like a complex statistical weather-forecasting model. I have never heard anyone argue that such models "know" about weather phenomena and reason about the implications to come to a "logical" conclusion.
chpatrick (4 days ago): I think people misunderstand when they see that it's a "statistical model". That just means that, out of a range of possible answers, it picks one in a humanlike way. If the logical answer is the humanlike thing to say, then it will be more likely to sample it. In the same way, a human might produce a range of answers to the same question, so humans are also drawing from a theoretical statistical distribution when you talk to them. It's just a mathematical way to describe an agent, whether it's an LLM or a human.
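The "picks from a distribution" point above can be sketched concretely. This is a toy illustration only, not any real model's API: the candidate answers and their scores are made up, and a plain softmax-with-temperature stands in for an LLM's next-token sampling. The key property it shows is the one claimed in the comment: the "logical" answer gets the highest probability mass, so it is the most likely sample, but the process is still a draw from a distribution.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw scores into a probability distribution.

    Lower temperature sharpens the distribution toward the
    highest-scoring option; higher temperature flattens it.
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidate answers and scores for the prompt "2 + 2 =".
# The model scores the logical continuation "4" highest, so sampling
# usually (but not always) returns it.
candidates = ["4", "5", "four", "fish"]
logits = [6.0, 1.0, 3.0, -2.0]

probs = softmax(logits)
choice = random.choices(candidates, weights=probs, k=1)[0]
print(dict(zip(candidates, [round(p, 3) for p in probs])))
print("sampled:", choice)
```

With these made-up scores, "4" carries well over 90% of the probability, so the sampler behaves almost deterministically, yet the other answers remain possible draws, which is one way to picture why the same question can occasionally yield a different (or wrong) answer.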
| |
bluefirebrand (4 days ago): I dunno, man. If you can't see how creativity and thinking are inextricably linked, I don't know what to tell you.

LLMs aren't good at either, imo. They are rote regurgitation machines, or at best they mildly remix the data they have in a way that might be useful. They don't actually have the intelligence or skills to be creative or logical, though.
chpatrick (4 days ago): They're linked, but they're very different. Speaking from personal experience, it's a whole different task to solve an engineering problem that's been assigned to you, where you need to break it down and reason your way to a solution, versus coming up with something brand new like a song or a piece of art, where there's no guidance. It's just a very different use of your brain.
|
|
barnacs (4 days ago): I guess our definitions of "thinking" are just very different. Yes, humans are also capable of learning in a similar fashion and imitating, even extrapolating from a learned function. But I wouldn't call that intelligent, thinking behavior, even if performed by a human. In any case, no human would ever perform like that without trying to intuitively understand the motivations of the humans they learned from, and naturally intermingling the performance with their own motivations.
|
|
|
|