softwaredoug 11 hours ago
In my experience, reasoning and search come with their own set of tradeoffs. It works great when it works, but the variance can be wider than with a plain LLM. Search and reasoning consume more context, leading to context rot and to subtler, harder-to-detect hallucinations. Reasoning doesn’t always focus on evaluating the quality of evidence; it just “problem solves” from some root set of axioms found in search. I’ve had this happen in Claude Code, for example, where it hallucinated a few details about a library based on a badly written forum post.