▲ | shepherdjerred 5 days ago |
You can prevent quite a lot of these issues if you write rules for Cursor or your preferred IDE. Linters can also help quite a bit. In the end, your rules are enforced either programmatically or by a human in review. I think it's a very different (and, so far for me, uncomfortable) way of working, but I think there can be benefits, especially as tooling improves.
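For what it's worth, a minimal project rule for Cursor might look something like this. This is only a sketch: the file location and frontmatter fields follow Cursor's project-rules convention (`.cursor/rules/*.mdc`), but the specific rule text and the `src/**/*.ts` glob are invented for illustration:

```
---
description: House style for generated code
globs: ["src/**/*.ts"]
alwaysApply: true
---

- Do not add inline comments unless explicitly asked.
- Match the existing ESLint configuration; do not introduce new style.
- Before editing, state a short plan and wait for confirmation on large changes.
```

Whether the model actually honors each line still has to be checked in review, which is the trade-off discussed below.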
▲ | sshine 5 days ago | parent | next [-]
It seems like people who use AI for coding need to reinvent a lot of the same basic principles of software engineering before those principles gradually propagate into the mainstream agentic frameworks.

Coding agents come with a lot of good behavior built in, like "planning mode", where they form a strong picture of what's to be made before touching files. This has honestly shifted my programming workflow from wanting to jump into prototyping before I even have a clear idea to being very spec-oriented: of course there needs to be a plan, especially when it will be drafted for me in seconds.

But the amount of preventable dumb things coding agents will do, things that need to be explicitly stated and meticulously repeated in their contexts, reveals how simply training on the world's knowledge does not capture senior software engineer workflows entirely, and captures a lot of human averageness that is frowned upon.
▲ | cardanome 5 days ago | parent | prev [-]
Do those rules really work? I added a rule to not add comments, and despite it I still have to constantly remind the model not to add them.