skywhopper | 6 days ago
I mostly agree with the author's point that "it doesn't matter." But then again, it does matter, because LLM-based products are marketed on the claim that "IT CAN REASON!" So while it may not matter, per se, how an LLM arrives at its results, to the extent that people choose to rely on LLMs because of marketing pitches, it's worth pushing back on overblown claims using the same frame the marketers use.

That said, the author calls the question of whether models "can reason" the least interesting thing to ask. I'd argue the least interesting thing you can do is take every complaint about LLM performance and respond with "but humans do the exact same thing!" That's often not true, but again, it doesn't matter.