▲ giwook 4 hours ago
I wonder how much of this is simply needing to adapt one's workflows to models as they evolve, and how much is actual degradation of the model, whether from a version change or at the inference level. Also, everyone has a different workflow. I can't say I've noticed a meaningful change in Claude Code quality in a project I've been working on for a while now. It's an LLM in the end, and even with strong harnesses and eval workflows you still need to keep a critical eye and review its work as if it were a very smart intern.

Another commenter here mentioned they also haven't noticed any degradation in Claude quality, and that it may be because they frontload the planning work and break the work down into more digestible pieces, which is something I do as well and have benefited greatly from.

tl;dr I'm curious what OP's workflows are like and whether they'd benefit from additional tuning.
▲ 8note 4 hours ago | parent [-]
I've noticed a strong degradation as it's started doing more skill-like things and writing more one-off Python scripts rather than using tools. The agent has a set of scripts that are well tested, but instead it chooses to write a new bespoke script every time it needs to do something, and as a result it writes both the same bugs over and over again and also unique new bugs every time.