WolfeReader 2 days ago

"Here's a technology which is known to be confidently wrong pretty frequently. I'm going to use it to fact check things."

smallnix 2 days ago | parent | next

Querying an LLM directly for 'facts' is dangerous. Using an information-retrieval (IR) technique and incorporating LLMs only to gauge relevance and semantic alignment is a viable approach.
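
A minimal sketch of that split, assuming an OpenAI-style chat client; the `retrieve` helper is a hypothetical placeholder for whatever search index you actually use, and the model name is illustrative. The LLM is only asked whether a retrieved passage supports the claim, never for the fact itself.

```python
from openai import OpenAI

client = OpenAI()  # assumed OpenAI-style chat API; any provider would do

def retrieve(query: str, k: int = 5) -> list[str]:
    """Hypothetical placeholder: return top-k passages from your own IR backend."""
    raise NotImplementedError("plug in BM25, vector search, a web index, ...")

def passage_supports_claim(claim: str, passage: str) -> bool:
    # The LLM acts as a relevance/entailment judge, not as the source of the fact.
    prompt = (
        f"Passage:\n{passage}\n\n"
        f"Claim: {claim}\n\n"
        "Does the passage support the claim? Answer YES or NO."
    )
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content.strip().upper().startswith("YES")

def check_claim(claim: str) -> bool:
    # A claim counts as supported only if some retrieved evidence backs it.
    return any(passage_supports_claim(claim, p) for p in retrieve(claim))
```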

sleepybrett 2 days ago | parent | prev | next

Pay all the other 'AI's to crowdsource... or maybe cloudsource, a truth boolean. Then, as they all slowly ingest each other's answers over time, all the answers become the same, whether truthful or not.

lupusreal 2 days ago | parent | prev | next

If you have tokens to burn, using new sessions to critique the work produced in other sessions greatly improves reliability. Asking the same question multiple different ways, and to more than one LLM, also helps a lot.
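
A rough sketch of both tricks, assuming the same OpenAI-style client as above; the model names and prompts are illustrative, not a recommendation. Each call is a fresh session with no shared history, which is the point.

```python
from openai import OpenAI

client = OpenAI()  # assumed OpenAI-style client

def ask(model: str, prompt: str) -> str:
    # Every call is a new session: no context carried over from the draft being checked.
    reply = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return reply.choices[0].message.content.strip()

def critique_in_fresh_session(question: str, draft_answer: str, model: str = "gpt-4o-mini") -> str:
    prompt = (
        f"Question: {question}\n\nProposed answer: {draft_answer}\n\n"
        "List any factual errors or unsupported claims in the proposed answer."
    )
    return ask(model, prompt)

def cross_check(question: str, rephrasings: list[str], models: list[str]) -> list[str]:
    # Ask the same question several different ways, and of more than one model,
    # then compare the answers for disagreement.
    return [ask(m, q) for m in models for q in [question, *rephrasings]]
```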

pipo234 2 days ago | parent

That approach may be a viable heuristic, but it will only get you so far. It's like flagging an opinion merely because it doesn't square with the opinions of others.

That's not what humans do when they are fact-finding, though. It's not what a (proper) scientist would do who discovered a great insight or theory and wondered whether it was true.

lupusreal 2 days ago | parent

If you're turning to LLMs for great insight or theory, you're definitely doing it wrong. These tools are for well-trodden terrain, and when they're really just making shit up, it's almost always different shit each time. So yes, it's a heuristic, but for dealing with the stochastic weirdness these models sometimes spit out, it works pretty damn well.
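
That "different shit each time" property is what a consistency check exploits. A minimal sketch, assuming you already have some non-deterministic `ask(prompt) -> str` call for your model of choice (temperature above zero): sample the same question several times and only trust an answer the samples agree on. Agreement is not proof of truth, but disagreement is a cheap signal the model is guessing.

```python
from collections import Counter
from typing import Callable

def consistent_answer(
    ask: Callable[[str], str],  # your LLM call; assumed to be sampled, not greedy
    question: str,
    samples: int = 5,
    min_agreement: float = 0.8,
) -> str | None:
    """Return the majority answer only if enough independent samples agree, else None."""
    answers = [ask(question).strip().lower() for _ in range(samples)]
    best, count = Counter(answers).most_common(1)[0]
    return best if count / samples >= min_agreement else None
```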

CamperBob2 2 days ago | parent | prev

It's a powerful tool that can be misused by the incompetent, like most other powerful tools.