monsieurbanana 4 hours ago
Making them look more accurate is not the same as being more accurate, and LLMs are pretty good at the former. Imagine a user has only a vague idea of what is broken: the LLM will interpret their comment as whatever it thinks the most likely underlying problem is, without actually checking anything.