abraxas 2 days ago

I tried to follow the same pattern on a backend project written in Python/FastAPI, and it has been mostly a heartache. It gets reasonably close, but then it periodically goes off the rails, loses its mind, and writes utter shit. Braindead code that has no chance of working.

I don't know if it's a question of the language or what, but I have no luck with its consistency. And I did invest time in defining various CLAUDE.md files, to no avail.

ryandrake 2 days ago

What I find helpful in a large project is, whenever Claude goes way off the rails, I correct it and then tell it to update CLAUDE.md with instructions, in its own words, on how not to do it again in the future. It doesn't stop the initial hallucinations and brainfarts, but it seems to make the tool slowly better as it adds context for itself.
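
For illustration, the kind of entry Claude might append after one of these corrections could look like the following (the file names and rules here are hypothetical, not from the thread):

    ## Lessons learned (appended after corrections)
    - Do not invent FastAPI dependencies. Check app/dependencies.py for
      the ones that actually exist before wiring up a new route.
    - All database calls in this codebase are async. Never call
      session.execute() without await.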

lordnacho 2 days ago

Could this have anything to do with using a more strongly typed language? I've heard that reported; I'm not sure whether it's true, since my Python scripts tend to be short.

Does it end up in a forever loop for you? I used to have that problem with other models.

adastra22 2 days ago

I also use Rust with Claude Code, like GP. I don't experience forever loops; Claude converges on a working, compiling solution every time. Sometimes the solution is garbage, and many times it gets it to "work" by disabling the test. I have layers of scaffolding (critic agents) that keep this from being something I have to deal with most of the time.
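
A cheap version of that scaffolding is a guard script run after every agent turn that fails loudly when a test gets silenced. A minimal sketch in Python, assuming a Rust project in git; real critic agents would presumably do much more than pattern matching:

    import re
    import subprocess
    import sys

    # Look only at lines the agent just added, not the whole tree.
    diff = subprocess.run(
        ["git", "diff", "--unified=0"],
        capture_output=True, text=True, check=True,
    ).stdout

    # Patterns that usually mean a test was silenced rather than fixed.
    SUSPECT = re.compile(r"#\[ignore\]|unimplemented!\s*\(|todo!\s*\(")

    offenders = [
        line for line in diff.splitlines()
        if line.startswith("+") and not line.startswith("+++")
        and SUSPECT.search(line)
    ]

    if offenders:
        print("Suspicious additions; a test may have been disabled:")
        for line in offenders:
            print(" ", line)
        sys.exit(1)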

But yeah, strongly typed languages, test-driven development, and high-quality compiler errors are real game changers for LLM performance. I use Rust for everything now.
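
The same tight loop can be approximated in Python, which is roughly what the FastAPI setup upthread is missing out of the box. A minimal sketch, assuming mypy --strict and pytest are run after every change (the names are illustrative):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class User:
        id: int
        email: str

    def find_user(users: list[User], user_id: int) -> User | None:
        """Return the user with the given id, or None if absent."""
        return next((u for u in users if u.id == user_id), None)

    def test_find_user_missing_returns_none() -> None:
        assert find_user([User(id=1, email="a@example.com")], 2) is None

mypy rejects a version that returns the raw user_id instead of a User, and the test rejects one that raises on a miss, so the model gets a precise, machine-readable error to iterate against instead of silently wrong behavior.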

wg0 2 days ago

I can second that. Even on plain CRUD with little to no domain logic.