simianwords | a day ago
> But that still leaves a crucial question: can we develop a more precise, less anthropomorphic vocabulary to describe AI capabilities? Or is our human-centric language the only tool we have to reason about these new forms of intelligence, with all the baggage that entails?

I don't really get the problem with this. I think calling what LLMs do "reasoning" is fair and proper: the model takes time and spits out tokens that it recursively uses to get a much better output than it otherwise would have. Is it actually reasoning with a brain, the way a human would? No. But it's close enough that I don't see the problem with calling it "reasoning". What's the fuss about?
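Concretely, here is a minimal sketch of that "spits out tokens that it recursively uses" loop. This is an illustration only, assuming some generic text-completion API; `complete` is a hypothetical stand-in, not any vendor's real interface:

    def complete(prompt: str) -> str:
        """Placeholder for a real completion call (e.g. an HTTP API)."""
        raise NotImplementedError

    def reason(question: str, rounds: int = 3) -> str:
        context = f"Question: {question}\nLet's think step by step.\n"
        for _ in range(rounds):
            # Generate intermediate tokens, then feed them back into
            # the context so the next round conditions on them.
            step = complete(context)
            context += step + "\n"
        # The final pass conditions on all the accumulated scratch work.
        return complete(context + "Therefore, the final answer is:")

Whether that deserves the word "reasoning" is the whole argument, but the recursive structure itself is not mysterious.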
keiferski | a day ago
Are swimming and sailing the same thing because they both move you through the water? I'd say no, they aren't, and there is value in understanding the different processes (and labeling them as such), even when their outputs look similar or identical.
tim333 | 17 hours ago
The problem is that fuzzy language makes debates poor: they become about the definitions of words rather than about reality. The answer, I think, is to avoid that and find things you can be clear about. A famous example is the Turing test. Rather than letting the debate over whether machines can think get bogged down in endless variations of how people define "thinking", Turing asked whether machines could be told apart from humans, which he discussed in his paper.
iLoveOncall | a day ago
It has absolutely nothing to do with reasoning, and I don't understand how anyone could think it's "close enough". Reasoning models are simply answering the same question twice with a different system prompt. It's a normal LLM with an extra technical step. Nothing else.
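To spell out the two-pass pattern I mean, here is a sketch. This is my caricature of it, not how any particular vendor actually implements reasoning models; `chat` is a hypothetical stand-in for a chat-completion API:

    def chat(system: str, user: str) -> str:
        """Placeholder for a real chat-completion call."""
        raise NotImplementedError

    def two_pass_answer(question: str) -> str:
        # Pass 1: same question, a system prompt that elicits deliberation.
        thoughts = chat(
            system="Reason through the problem step by step. Do not answer yet.",
            user=question,
        )
        # Pass 2: same question again, with the first pass's output attached.
        return chat(
            system="Answer concisely, using the provided reasoning.",
            user=f"{question}\n\nReasoning:\n{thoughts}",
        )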