QuadrupleA a day ago:
One thing I haven't seen mentioned much, in AI coding and other AI-assisted work, is the sheer needless verbosity of models - the walls of text they spew out for us to read through. That alone adds to the workload and fatigue. There's a maxim in writing, "pity the reader": respect your audience's time and get to the point. The Elements of Style puts it as "omit needless words." You can prompt models to be succinct, but the latest ones - the GPT-5 series especially - ignore the request and churn out paragraph after paragraph of noise. Maybe it's the incentive of charging per token? If you want, I can expand on this topic and generate a lengthy comparison chart.
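Joking aside, the usual attempt at a fix looks something like this (a minimal sketch with the OpenAI Python client; the model name, instruction wording, and token cap are illustrative, not recommendations):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-5",  # illustrative model name
        messages=[
            # A blunt brevity instruction in the system role.
            {"role": "system", "content": "Answer in at most three sentences. "
                "No preamble, no recap, no offers to elaborate."},
            {"role": "user", "content": "Why is my Postgres query slow?"},
        ],
        # Hard cap as a backstop for when the instruction gets ignored.
        max_completion_tokens=200,
    )
    print(response.choices[0].message.content)

Note that the cap just truncates mid-sentence rather than condensing, which is why it's only a backstop: the brevity has to come from the model itself.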
dag100 a day ago (in reply):
This is basically a violation of the robustness principle ("be conservative in what you send, be liberal in what you accept"), but I doubt there will be much improvement on this front, seeing as tokens are fed back into the model. A succinct phrase is a compressed form of a longer sentence expressing the same idea, so from the perspective of feeding the model's output back into itself, more tokens presumably work better: they provide a greater surface area for processing, so to speak. This is just my intuition, however.
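Concretely: in the standard chat pattern, every assistant turn is appended to the history and re-sent on the next request, so every extra token the model emits becomes input it has to process again later. A minimal sketch of that loop (OpenAI Python client; the model name is illustrative):

    from openai import OpenAI

    client = OpenAI()
    history = [{"role": "system", "content": "You are a helpful assistant."}]

    def turn(user_text: str) -> str:
        history.append({"role": "user", "content": user_text})
        response = client.chat.completions.create(model="gpt-5", messages=history)
        reply = response.choices[0].message.content
        # The full reply is fed straight back in as context on the next turn,
        # so verbose output compounds into ever-larger (and pricier) inputs.
        history.append({"role": "assistant", "content": reply})
        return reply

Whether those extra fed-back tokens actually help the model reason, as the surface-area intuition suggests, is a separate question from what they cost the reader.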
| ||||||||