Hallucinations Undermine Trust; Metacognition Is a Way Forward (arxiv.org)
10 points by gmays 5 hours ago | 4 comments
spacebacon 44 minutes ago
Related: https://github.com/space-bacon/SRT. This repository empirically proves computational semiotics. It is the only “metacognitive” (2nd order) and metapragmatic (3rd order) model I’m aware of.
holtkam2 an hour ago
IDK if the author's 'metacognition' needs to be a feature of the LLM itself. I could imagine a harness that 1) reads the LLM's output, 2) uses a research sub-agent to attempt to verify any factual claims, and 3) rephrases the main agent's output so that it conveys uncertainty wherever a factual claim cannot be independently verified. Something like the sketch below.
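A minimal sketch of that three-step loop, assuming a generic call_llm function standing in for whatever completion API is on hand; the helper names and prompts are all hypothetical, not anything from the paper:

    # Hypothetical harness: wrap an LLM answer in a verify-then-hedge loop.
    # `call_llm` is a stand-in for any chat-completion client.

    def call_llm(prompt: str) -> str:
        """Placeholder: route to whatever LLM API you actually use."""
        raise NotImplementedError

    def extract_claims(answer: str) -> list[str]:
        # Step 1: read the output and pull out checkable factual claims.
        raw = call_llm(
            "List every checkable factual claim in the text below, "
            "one per line, nothing else.\n\n" + answer
        )
        return [line.strip() for line in raw.splitlines() if line.strip()]

    def verify(claim: str) -> bool:
        # Step 2: research sub-agent. In practice this would hit a search
        # engine or trusted corpus; here it just asks a second model pass.
        verdict = call_llm(
            "Can this claim be independently verified from reliable "
            "sources? Answer YES or NO only.\n\n" + claim
        )
        return verdict.strip().upper().startswith("YES")

    def hedge(answer: str, unverified: list[str]) -> str:
        # Step 3: rewrite so unverified claims carry explicit uncertainty.
        if not unverified:
            return answer
        bullets = "\n".join("- " + c for c in unverified)
        return call_llm(
            "Rewrite the answer so the listed claims are phrased with "
            "explicit uncertainty ('I believe...', 'I could not verify "
            "that...'). Leave everything else intact.\n\n"
            "Answer:\n" + answer + "\n\nUnverified claims:\n" + bullets
        )

    def metacognitive_harness(answer: str) -> str:
        claims = extract_claims(answer)
        unverified = [c for c in claims if not verify(c)]
        return hedge(answer, unverified)

The interesting design question is step 2: a single self-check pass is weak verification (the model grading itself), so a real sub-agent would presumably need tool access like web search or retrieval.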
ryandvm 2 hours ago
Unproductive tangent: why do we call it "hallucinating" instead of "bullshitting" when that is so clearly what it is? If I'm talking to a guy who says, "I have a really fast metabolism, that's why I can eat whatever I want", he's not hallucinating - he's full of shit.