ericpauley 5 hours ago
Sure, LLMs make mistakes, but have you looked at the accuracy of the average top search results recently? The SERPs are packed with SEO-infested articles that are all written by LLMs anyway (and almost universally worse ones than you could use yourself). In many cases the stakes are low enough (and the cost of manually sifting through the junk high enough) that it's worth going with the empirically higher-quality answer over the SEO spam. This of course doesn't apply to high-stakes settings. In those cases I find LLMs are still a great information-retrieval approach, but as a starting point for manual vetting.