rgoulter 17 hours ago
A couple of typical comments about LLMs would be: "This LLM can capably output useful snippets of Python code. That's useful." and "I tried to get an LLM to perform a niche task in a niche language, and it performed terribly."

I think the right synthesis is that there are some tasks LLMs are useful for and some they're not; in practice, it's valuable to know which is which. Or, if we trust that LLMs are useful for all tasks, then it's practically useful to know what they're not good at.
ygritte 17 hours ago
Even if that's true, they are still not reliable. The same question can produce different answers each time. | |||||||||||||||||||||||
imiric 16 hours ago
> Or, if we trust that LLMs are useful for all tasks, then it's practically useful to know what they're not good at.

The thing is that there's no way to objectively measure this. Benchmarks are often gamed, and, as a sibling comment mentioned, the output is not stable.

Also, everyone has different criteria for what constitutes "good". To someone with little to no programming experience, LLMs would feel downright magical. Experienced programmers, or any domain expert for that matter, can gauge the output quality much more accurately. Even within the experienced group, quality criteria vary: some might be fine with overlooking certain issues, or not bother checking the output at all, while others hold the output to much higher standards.

The problem is when any issues that are pointed out are blamed on the user instead of the tool. Or, even worse, when the issues are acknowledged but excused as "this is the way these tools work" [1,2]. It's blatant gaslighting that AI companies love to promote, for obvious reasons.