pessimizer 3 hours ago
> Try implementing something that is too hard for you.

This is almost the only thing I'm against when it comes to LLMs. You have no ability to figure out whether the result is right, and you will be overly impressed by garbage because you aren't qualified to judge it. Has anybody come up with a pithy way to describe Dunning-Kruger as applied to evaluating LLM output, or are people still too busy denying that Dunning and Kruger noticed anything?

When it comes to implementing math, the main problem is that the tiniest difference can make the entire thing wrong, often to the degree of inverting it (see the toy sketch below). I wouldn't be at all comfortable shipping something I didn't even understand. The LLM certainly didn't understand it; somebody should.
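As a minimal illustration (a hypothetical example of my own, not from any real codebase): one dropped minus sign in a logistic function inverts every output, and both versions produce plausible-looking numbers:

    import math

    def sigmoid(x):
        # correct logistic function: 1 / (1 + e^(-x))
        return 1.0 / (1.0 + math.exp(-x))

    def sigmoid_buggy(x):
        # one dropped minus sign; returns 1 - sigmoid(x),
        # the exact inversion of the intended result
        return 1.0 / (1.0 + math.exp(x))

    print(sigmoid(2.0))        # ~0.881
    print(sigmoid_buggy(2.0))  # ~0.119, plausible-looking but inverted

Nothing in the buggy version crashes or looks obviously off; only somebody who actually understands the math would catch it.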