WolfeReader 2 days ago
"Here's a technology which is known to be confidently wrong pretty frequently. I'm going to use it to fact check things." | |||||||||||||||||
smallnix 2 days ago
Querying an LLM directly for 'facts' is dangerous. Using an IR technique for retrieval and bringing in the LLM only to gauge relevance and semantic alignment is a viable approach (rough sketch below).
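A minimal sketch of that retrieve-then-judge idea, assuming the rank_bm25 and openai Python packages; the corpus, claim, and model name are placeholders. The point is that the facts come from the retrieved corpus, and the LLM only grades how well each passage supports the claim:

    # Retrieve candidate passages with BM25, then ask the LLM to score support.
    # The LLM never asserts facts on its own here.
    from rank_bm25 import BM25Okapi
    from openai import OpenAI

    corpus = [
        "The Eiffel Tower was completed in 1889 for the Exposition Universelle.",
        "The Eiffel Tower is 330 metres tall including antennas.",
        "Gustave Eiffel's company designed and built the tower.",
    ]
    bm25 = BM25Okapi([doc.lower().split() for doc in corpus])
    client = OpenAI()

    def supporting_passages(claim: str, top_k: int = 2) -> list[str]:
        """Return the top_k corpus passages most lexically similar to the claim."""
        return bm25.get_top_n(claim.lower().split(), corpus, n=top_k)

    def support_score(claim: str, passage: str) -> str:
        """Ask the LLM only to grade support, not to supply new facts."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption: any chat-capable model works
            messages=[{
                "role": "user",
                "content": (
                    "On a scale of 0-10, how strongly does this passage support "
                    f"the claim?\nClaim: {claim}\nPassage: {passage}\n"
                    "Answer with the number only."
                ),
            }],
        )
        return resp.choices[0].message.content.strip()

    claim = "The Eiffel Tower opened in 1889."
    for p in supporting_passages(claim):
        print(p, "->", support_score(claim, p))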
sleepybrett 2 days ago
Pay all the other AIs to crowdsource... or maybe cloudsource, a truth boolean. Then, as they all ingest each other's answers, slowly over time all the answers become the same, whether truthful or not.
lupusreal 2 days ago
If you have tokens to burn, using new sessions to critique the work produced in other sessions greatly improves reliability. Asking the same question multiple different ways, and to more than one LLM, also helps a lot.
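A rough sketch of that cross-checking pattern, assuming the openai package; the question, rephrasings, and model name are placeholders. Each call starts from an empty message list, so it behaves like a fresh session with no shared context:

    from openai import OpenAI

    client = OpenAI()
    MODEL = "gpt-4o-mini"  # assumption: swap in whichever model(s) you use

    def ask(prompt: str) -> str:
        """One-shot query in a fresh 'session' (no prior messages)."""
        resp = client.chat.completions.create(
            model=MODEL, messages=[{"role": "user", "content": prompt}]
        )
        return resp.choices[0].message.content

    question = "In what year was the first transatlantic telegraph cable completed?"
    rephrasings = [
        question,
        "Which year saw the first telegraph cable laid across the Atlantic completed?",
        "When did the first transatlantic telegraph cable go into service?",
    ]

    # Ask the same question several different ways, each in a fresh session.
    answers = [ask(q) for q in rephrasings]

    # Use yet another fresh session purely as a critic of the first answer.
    critique = ask(
        "Critically review this answer for factual errors and say whether you "
        f"agree with it:\nQuestion: {question}\nAnswer: {answers[0]}"
    )

    for a in answers:
        print("ANSWER:", a)
    print("CRITIQUE:", critique)

Disagreement between the rephrased answers, or a critique that flags an error, is the signal to dig further rather than trust any single response.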
CamperBob2 2 days ago
It's a powerful tool that can be misused by the incompetent, like most other powerful tools.