klausa 4 hours ago

The LLM can find material that would be hard or time-consuming for you to find yourself.

You still need to verify it, but "find the right things to read in the first place" is often a time-intensive process in itself.

(You might, at that point, argue "what if the LLM fails to find a key article/paper/whatever", which I think is both a reasonable worry and an unreasonable standard to apply. "What if your Google search doesn't return it" is an obvious counterpoint, and I don't think you can make a reasonable argument that journalists should be forced to cross-compare SERPs from Google/Bing/DuckDuckGo/AltaVista or whatever.)

madamelic 26 minutes ago

I believe their point is that if you give people an "extract-needle-from-haystack" machine and then tell them they have to manually find where in the haystack the needle was, it defeats the purpose of having the machine.

With that said, a good RAG solution would come with metadata pointing to where each retrieved passage was sourced from.
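To make that concrete, here is a minimal sketch of retrieval that carries source metadata alongside each chunk, so every retrieved passage can point back to where it came from. The corpus, the word-overlap scoring (a stand-in for a real embedding search), and the field names are all illustrative assumptions, not any particular library's API.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    source: str   # e.g. a document filename or URL (illustrative)
    offset: int   # position of the chunk within its source

def retrieve(query: str, corpus: list[Chunk], k: int = 1) -> list[Chunk]:
    """Rank chunks by naive word overlap with the query and return the
    top-k, metadata included, so the caller can cite the source."""
    q = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda c: len(q & set(c.text.lower().split())),
        reverse=True,
    )
    return scored[:k]

corpus = [
    Chunk("The needle was found in section 3 of the report.",
          "annual-report.pdf", 3),
    Chunk("Unrelated background about haystacks.",
          "blog-post.html", 0),
]

top = retrieve("where was the needle found", corpus)[0]
citation = f"{top.source} (offset {top.offset})"
```

The point of the design is simply that the metadata travels with the chunk through retrieval, so the answer can always be traced back without a second manual search of the haystack.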