hatmanstack 12 hours ago
Seems crazy to me that people aren't already including rules to prevent useless language in their system/project-level CLAUDE.md. As for redundancy, it's quite useful according to recent research. Pulled from Gemini 3.1: "two main paradigms: generating redundant reasoning paths (self-consistency) and aggregating outputs from redundant models (ensembling)." Both have fresh papers written about their benefits.
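For readers unfamiliar with the first paradigm: self-consistency samples several independent reasoning paths and majority-votes over the final answers. A minimal sketch (the `toy_answer` function is a hypothetical stand-in for a stochastic model call, not a real API):

```python
from collections import Counter
from itertools import cycle

def self_consistency(sample_answer, prompt, n=5):
    """Sample n independent reasoning paths, then majority-vote the answers."""
    answers = [sample_answer(prompt) for _ in range(n)]
    # The most common final answer wins, aggregating the redundant paths.
    return Counter(answers).most_common(1)[0][0]

# Toy stand-in for a sampled model call: correct on 3 of every 5 paths.
_paths = cycle(["42", "41", "42", "54", "42"])
def toy_answer(prompt):
    return next(_paths)

print(self_consistency(toy_answer, "What is 6*7?", n=5))  # -> 42
```

The redundancy is the point: individual paths are noisy, but errors rarely agree with each other, so the vote concentrates on the consistent answer.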
wongarsu 5 hours ago
There was also a paper that showed very noticeable benchmark improvements in non-thinking models just by writing the prompt twice. The same paper remarked that thinking models often repeat the relevant parts of the prompt, achieving the same effect. Claude is already pretty light on flourishes in its answers, at least compared to most other SotA models. And for everything else, it's not at all obvious to me which parts are useless. Benchmarking it is hard, too (as evidenced by this thread). I'd rather spend my time on something else.
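The prompt-duplication trick described above is trivial to implement; a hedged sketch of what such a preprocessing step might look like (the function name and separator are my own illustration, not from the paper):

```python
def duplicate_prompt(prompt: str) -> str:
    """Repeat the user prompt verbatim before sending it to the model.

    The idea: restating the task gives attention a second pass over the
    relevant tokens, mimicking how thinking models often re-quote the
    prompt inside their reasoning traces.
    """
    return f"{prompt}\n\n{prompt}"

print(duplicate_prompt("Summarize the attached log in two sentences."))
```

This costs extra input tokens but needs no model changes, which is presumably why the effect was easy to measure across non-thinking models.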
whattheheckheck 10 hours ago
"No such thing as junk DNA" kinda applies here.