tailscaler2026 6 hours ago:
Of course it writes a lot of code. It gets paid per token. Every additional line of technical debt is guaranteed future income.
HoldOnAMinute 6 hours ago:
Periodically you can ask it to review the recent changes and see whether there is a risk-free way to streamline them. You can also have it summarize the "lessons learned" from the recent session(s).
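In practice that periodic review step can be as simple as pasting the recent diff into a recurring prompt. A minimal sketch, assuming a git repo; the ask_model() call and the prompt wording are hypothetical stand-ins for whatever LLM interface you actually use, not a real API:

    # Sketch: collect the recent diff and build a "streamline this" review prompt.
    # Only the git commands here are real; ask_model() is a hypothetical placeholder.
    import subprocess

    def recent_changes(n_commits: int = 5) -> str:
        """Return the combined diff of the last n commits as one string."""
        result = subprocess.run(
            ["git", "diff", f"HEAD~{n_commits}", "HEAD"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    REVIEW_PROMPT = (
        "Review the following recent changes. Point out any risk-free way to "
        "streamline them (dead code, duplicated logic, unnecessary shims), and "
        "summarize the lessons learned from this session.\n\n{diff}"
    )

    def build_review_request(n_commits: int = 5) -> str:
        """Assemble the periodic review prompt from the recent diff."""
        return REVIEW_PROMPT.format(diff=recent_changes(n_commits))

    # ask_model(build_review_request())  # hypothetical call into your LLM tool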
embedding-shape 6 hours ago:
Then local models shouldn't suffer from the same problems, but they do. I'd say they just aren't trained in the direction of "less code == better long-term maintainability", rather than it being some grand "increased token usage" conspiracy. You can certainly steer them a bit to reduce the issue the parent describes, but they still head in that direction whenever they can, adding stuff on top of stuff, piling hacks/shims on top of other hacks/shims, just like many human developers :)
layer8 5 hours ago:
At some point they’ll introduce “deletion” tokens that cost ten times the regular token price. ;)
enraged_camel 5 hours ago:
>> Of course it writes a lot of code. It gets paid per token.

I don't buy it. I think a much more likely reason it leans towards adding code is that deleting code carries inherent risk: it can break things in major or minor ways, visibly or invisibly. Adding new code, on the other hand, is a lot safer: the only parts that can break are those the AI touched inside its own working context, so it doesn't have to go down rabbit holes and potentially create bigger and bigger messes.