holtkam2 2 hours ago
IDK if the author's 'metacognition' needs to be a feature of the LLM itself. I could imagine a harness that 1) reads the main agent's output, 2) uses a research sub-agent to attempt to independently verify any factual claims, and 3) rephrases the main agent's output to convey uncertainty about any claim that could not be verified. A sketch of that loop is below.
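Roughly something like this (a minimal sketch, not anyone's actual implementation; `call_llm` and `research_agent` are hypothetical stand-ins for whatever model clients you wire in, and the prompts are purely illustrative):

```python
from typing import Callable, List

# Hypothetical: any function that takes a prompt and returns model text.
LLM = Callable[[str], str]

def extract_claims(call_llm: LLM, answer: str) -> List[str]:
    """Step 1: have a model list the factual claims in the main agent's output."""
    prompt = f"List each factual claim in the text below, one per line:\n\n{answer}"
    return [line.strip() for line in call_llm(prompt).splitlines() if line.strip()]

def verify_claim(research_agent: LLM, claim: str) -> bool:
    """Step 2: a research sub-agent tries to independently confirm one claim."""
    verdict = research_agent(
        "Answer strictly YES or NO: can the following claim be "
        f"independently verified from reliable sources?\n\n{claim}"
    )
    return verdict.strip().upper().startswith("YES")

def hedge_answer(call_llm: LLM, answer: str, unverified: List[str]) -> str:
    """Step 3: rewrite the answer so unverified claims carry explicit uncertainty."""
    if not unverified:
        return answer
    bullets = "\n".join(f"- {c}" for c in unverified)
    return call_llm(
        "Rewrite the answer below so these unverified claims are hedged "
        f"(e.g. 'I could not confirm that ...'):\n{bullets}\n\nAnswer:\n{answer}"
    )

def metacognition_harness(main: LLM, researcher: LLM, question: str) -> str:
    answer = main(question)
    claims = extract_claims(main, answer)
    unverified = [c for c in claims if not verify_claim(researcher, c)]
    return hedge_answer(main, answer, unverified)
```

The point being that all the 'metacognition' lives in the harness: the model itself never has to know what it doesn't know, the scaffolding just refuses to pass along unverified claims as fact.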