▲ | strictnein 5 days ago
I'm on the $200/month account and it's also slower than it was a few weeks ago, and struggling more and more. I used to think of it as a decent sr dev working alongside me. Now it feels like an untrained intern that takes 4-5 shots to get things right. Hallucinated tables, columns, and HTML templates are its new favorite thing. And calling things "done" that aren't even half done and don't work in the slightest.
|
| ▲ | brookst 5 days ago | parent | next [-] |
Same plan, same experience. I'm trying to get it to develop and execute tests, and it frequently modifies the test so it succeeds even when the libraries it calls fail, then explains that it's doing so because the test itself works but the underlying app has errors. Yes, I know. That's what the test was for.

▲ | zarzavat 5 days ago | parent [-]
Anthropic, if you're listening, please allow zoned access enforcement within files. I want to be able to say "this section of the file is for testing", delineated by comments, and forbid Claude from editing it without permission. My fear when using Claude is that it will change a test and I won't notice. Splitting tests into different files works, but it's often not feasible, e.g. if I want to write unit tests for a symbol that is not exported.
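
For what it's worth, Claude Code's hooks feature can get close to this today. Below is a minimal sketch of a PreToolUse hook in C#, under these assumptions: the pending tool call arrives as JSON on stdin with a tool_input object carrying file_path and old_string (the Edit tool's input shape, as I understand the hooks docs), and exiting with code 2 blocks the call and feeds stderr back to the model. The // TEST-ZONE-BEGIN / // TEST-ZONE-END markers are an invented convention; verify the JSON field names against the current hooks documentation before relying on any of this.

    // ZoneGuard.cs -- sketch of a PreToolUse hook that rejects edits touching
    // a comment-delineated test zone. Compile to an executable and register it
    // in .claude/settings.json under hooks -> PreToolUse with a matcher for
    // the Edit tool.
    using System;
    using System.IO;
    using System.Text.Json;

    class ZoneGuard
    {
        static int Main()
        {
            // The hook receives the pending tool call as JSON on stdin.
            var call = JsonDocument.Parse(Console.In.ReadToEnd()).RootElement;
            if (!call.TryGetProperty("tool_input", out var input)) return 0;

            var path = input.TryGetProperty("file_path", out var p) ? p.GetString() ?? "" : "";
            var oldText = input.TryGetProperty("old_string", out var o) ? o.GetString() ?? "" : "";
            if (path.Length == 0 || oldText.Length == 0 || !File.Exists(path)) return 0;

            var source = File.ReadAllText(path);
            int zoneBegin = source.IndexOf("// TEST-ZONE-BEGIN", StringComparison.Ordinal);
            int zoneEnd = source.IndexOf("// TEST-ZONE-END", StringComparison.Ordinal);
            if (zoneBegin < 0 || zoneEnd < 0) return 0;

            // Block the edit if the text being replaced overlaps the zone.
            int at = source.IndexOf(oldText, StringComparison.Ordinal);
            if (at >= 0 && at < zoneEnd && at + oldText.Length > zoneBegin)
            {
                Console.Error.WriteLine("Edit touches a protected test zone; ask the user first.");
                return 2; // exit code 2 = block, per the hook protocol
            }
            return 0;
        }
    }

It's advisory-grade at best (a whole-file rewrite via the Write tool would need its own check), but it fails closed on the common Edit path rather than relying on the model noticing a comment.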
▲ | blyat 4 days ago | parent | next [-]
I've had some middling success with this by using CLAUDE.md and language features. Two approaches in C#: 1) use partial classes and add a 'rule' in CLAUDE.md to never touch certain named files, e.g. User.cs (edits allowed) vs. User.Protected.cs (not allowed by convention); and 2) a no-AI-allowed attribute, e.g. [DontModifyThisClassOrAttributeOrMethodOrWhatever], with instructions to never modify its target. The attribute can be much more granular, and Claude Code seems to respect it.
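
A minimal sketch of both conventions; the attribute name here is illustrative, and nothing is compiler-enforced, so it only holds as long as Claude Code actually follows the CLAUDE.md rules:

    using System;

    // User.cs -- edits allowed.
    public partial class User
    {
        public string Name { get; set; } = "";
    }

    // User.Protected.cs -- off-limits via a CLAUDE.md rule like
    // "never modify *.Protected.cs files".
    public partial class User
    {
        public bool IsValid() => !string.IsNullOrWhiteSpace(Name);
    }

    // Approach 2: a marker attribute (illustrative name) paired with a
    // CLAUDE.md rule like "never modify anything annotated [NoAIEdits]".
    [AttributeUsage(AttributeTargets.All, Inherited = false)]
    public sealed class NoAIEditsAttribute : Attribute { }

    [NoAIEdits] // per-member granularity, unlike the file-level convention
    public static class BillingInvariants
    {
        public const decimal MaxRefundUsd = 500m;
    }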
▲ | geeunits 5 days ago | parent | prev [-]
Does already, read the docs
▲ | boie0025 5 days ago | parent [-]
I think a link would have been far more helpful than "RTFM". Especially for those of us reading this exchange outside of the line of fire.
▲ | geeunits 4 days ago | parent [-]
Don't put the onus (Opus!) on me! Just a dad approach to helping. If there's enough time to write prose about the problem, you could at least RTFM first!
▲ | simonw 4 days ago | parent [-]
If you know something is covered by the documentation it's useful to provide a link, especially if that documentation is difficult to find. (I couldn't find that documentation when I went looking just now.)
|
| ▲ | keyle 5 days ago | parent | prev | next [-] |
There must be a term coined for AI degradation... At least with a local LLM, it's crap, but it's consistent crap!

▲ | beefnugs 5 days ago | parent [-]
Dynamic spurious profit probing: see how many users will put up with N-times-worse service without giving up for good. They have to do something, because you can't really shove advertisements into an API.
▲ | dmix 5 days ago | parent [-]
OP is paying $200/mo and Anthropic is very much in the hyper-funded growth stage; I very much doubt they are going accountant mode on it yet. Likely the common young-startup issues: a mix of scaling problems and poorly implemented changes. Improve one thing, make other stuff worse, etc.
▲ | jazzyjackson 5 days ago | parent [-]
Probably not accountant mode, but haven't they always had daily quotas that get used up? Like they don't want everyone hitting the service nonstop, because they don't have enough GPUs to run inference at peak times of day. So it could be a matter of serving a more highly quantized model, because giving bad results has higher user retention than "try again later".
|
| ▲ | cyanydeez 5 days ago | parent | prev [-] |
Gotta assume they're reducing overall compute with smaller models, 'cause $200 ain't squat compared to their investment.