hamdingers 4 days ago
I feel the opposite. Before I can use information from a model's "internal" knowledge, I have to do independent research to verify it's not a hallucination. Having an LLM generate search strings and then summarize the results does that research up front and automatically; I need only click the sources to verify. Kagi Assistant does this really well.
beefnugs 4 days ago | parent
So does anyone have good examples of it effectively avoiding blogspam and SEO, or of being fooled by it? How often does each happen?