rich_sasha 13 hours ago

I often wonder what I am missing. Recently I wanted to wrap a low level vendor API with a callback API (make a request struct and request id, submit, provide a callback fn, which gets called with request IDs and messages received from vendor) to async Python (await make_request(...)). Kinda straightforward - lots of careful code of registering and unregistering callbacks, some careful thread synchronisation (callbacks get called in another thread), thinking about sane exception handling in async code. Fiddly but not rocket science.

What I got sort of works, as in tests pass - this with Opus 4.5. It is usable, though it doesn't exit cleanly on errors despite working this to death with Claude. On exception it exits dirtily and crashes, which is good enough for now. I had some fancy ideas about logging messages from the vendor to be able to replay them, to be able to then reproduce errors. Opus made a real hash of it, lots of "fuck it, comment out the assert so the test passes". This part is unusable and worse, pollutes the working part of the project. It made a valiant effort at mocking the vendor API for testing but really badly: instead of writing 30 lines of general code, it wrote 200 lines of inconsistent special cases that don't even work together. Asked to fix it, it just shuffles around the special cases and gets stuck.
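
For contrast, the "30 lines of general code" kind of mock is a single canned-response table replayed through the registered callback, rather than per-request special cases. A sketch with hypothetical names, not the commenter's code:

```python
import threading

class MockVendor:
    """Generic mock of a callback-style vendor API (hypothetical names).

    One table of canned responses, replayed through the registered
    callback; a thread can optionally mimic the real vendor's
    callbacks-from-another-thread behaviour.
    """

    def __init__(self, responses, use_thread=True):
        self._responses = responses    # request -> list of messages
        self._callback = None
        self._use_thread = use_thread

    def register_callback(self, callback):
        self._callback = callback

    def submit(self, request, request_id):
        messages = self._responses.get(request, [])

        def deliver():
            for message in messages:
                self._callback(request_id, message)

        if self._use_thread:
            threading.Thread(target=deliver).start()  # mimic vendor thread
        else:
            deliver()  # synchronous mode for deterministic tests
```

Every test case then becomes a new entry in the `responses` dict instead of a new code path.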

It's written messily enough that I wouldn't touch this even to remove the dead code paths. I could block a few days for it to fix but frankly in that time I can redo it all and better. So while it works I'm not gonna touch it.

I did everything LLM proponents say. I discussed requirements. The agent had access to the API docs and vendor samples. I said "think hard" many times. Based on this we wrote a detailed spec, then a detailed implementation plan. I hand checked a lot of the high level definitions. And yet here I am. By the time Opus went away and started coding, we had the user facing API hammered out, key implementation details (callback -> queue -> async task in source thread routing messages etc), constraints (clean handling of exceptions, threadsafe etc). Tests it has to write. Any minor detail we didn't discuss to death was coded up like a bored junior.
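
The "callback -> queue -> async task in source thread" detail mentioned above is the other standard bridging shape: the vendor thread only enqueues, and a single task on the loop thread routes messages to waiters. A sketch under the same hypothetical-names assumption:

```python
import asyncio

class MessageRouter:
    """Callback -> queue -> async routing task, all state on the loop thread.

    on_callback is the only method meant to run on the vendor thread,
    and the only thing it does is a thread-safe hop into the loop.
    """

    def __init__(self):
        self._loop = None
        self._queue = None
        self._task = None
        self._waiters = {}          # request_id -> Future (loop thread only)

    async def start(self):
        self._loop = asyncio.get_running_loop()
        self._queue = asyncio.Queue()
        self._task = asyncio.create_task(self._route())

    def on_callback(self, request_id, message):
        # Vendor thread: call_soon_threadsafe is the sole loop entry point,
        # so the queue and waiters dict never need a lock.
        self._loop.call_soon_threadsafe(
            self._queue.put_nowait, (request_id, message)
        )

    async def wait_for(self, request_id):
        fut = self._loop.create_future()
        self._waiters[request_id] = fut
        try:
            return await fut
        finally:
            self._waiters.pop(request_id, None)

    async def _route(self):
        # Single routing task: drains the queue, resolves waiting futures.
        while True:
            request_id, message = await self._queue.get()
            fut = self._waiters.get(request_id)
            if fut is not None and not fut.done():
                fut.set_result(message)

    async def stop(self):
        self._task.cancel()
        try:
            await self._task
        except asyncio.CancelledError:
            pass
```

The queue indirection buys one place to drain, log, or replay messages, which is presumably why it was part of the spec; the replay/logging feature itself is what the comment says Opus botched.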

And this also wasn't my first attempt, this was attempt #3. First attempt was like, here's the docs and samples, make me a Python async API. That was a disaster. Second was more like, let's discuss, make a spec, then off you go. No good. Even counting only the last attempt's time, I would have spent less time doing this by hand myself from scratch.

Bewelge 11 hours ago | parent | next [-]

Just a guess, but to me it sounds like you're trying to do too much at once. When trying something like this:

> lots of careful code of registering and unregistering callbacks, some careful thread synchronisation (callbacks get called in another thread), thinking about sane exception handling in async code. Fiddly but not rocket science.

I'd expect CC to fail this when just given requirements. The way I use it is to explicitly tell it things like: "Make sure to do Y when callback X gets fired" and not "you have to be careful about thread synchronisation". "Do X, so that Exceptions are always thrown when Y happens" instead of "Make sure to implement sane Exception handling". I think you have to get a feeling for how explicit you have to get because it definitely can figure out some complexity by itself.

But honestly it also requires a different way of thinking and working. It reminds me of my dad reminiscing that the skill of dictating isn't used at all anymore nowadays. Since computers, typing, or more specifically correcting what has been typed, has become cheap, and the skill of being able to formulate a sentence "on the first try" is less valuable. I see some (inverse) parallel to working with AI vs writing the code yourself. When coding yourself you don't have to explicitly formulate everything you are doing. Even if you are writing code with great documentation, there's no way that it could contain all of the tacit knowledge you as the author have. At least that's how I feel working with it. I only really got started with Claude Code 2 months ago and for a greenfield project I am amazed how much I could get done. For existing, sometimes messy side projects it works a lot worse. But that's also because it's more difficult to describe explicitly what you want.

rich_sasha 10 hours ago | parent [-]

> The way I use it is to explicitly tell it things like: "Make sure to do Y when callback X gets fired" and not "you have to be careful about thread synchronisation". "Do X, so that Exceptions are always thrown when Y happens" instead of "Make sure to implement sane Exception handling".

At this point I'm basically programming in English, no? Trying to squeeze exact instructions into an inherently ambiguous representation. I might as well write code at this point, if this is the level of detail required. For this to work, I need to be able to say "make this thread-safe", maybe "by using a queue". Not explaining which synchronisation primitive to use in every last piece of the code.

This is my point actually. If I describe the task to accuracy level X, it still doesn't seem to work. To make it work, perhaps I need to describe it to level Y>X, but that for now takes me more time than to do it myself.

There's lots of variables here, how fast I am at writing code or planning structure, how close to spec the thing needs to be, etc. My first "vibe code" was a personal productivity app in Claude Code, in Flutter (task timing). I have 0 idea about Dart or Flutter or any web stuff, and yet it made a complete app that did some stuff, worked on my phone, with a nice GUI, all from just a spec. From scratch, it would take me weeks.

...though in the end, even after 3 attempts, the final thing still didn't actually work well enough to be useful. The timer would sometimes get stuck or crash back down to 0, and froze when the app was minimised.

Bewelge 9 hours ago | parent [-]

> At this point I'm basically programming in English, no?

Yea, except they can handle some degree of complexity. Its usefulness obviously really depends on that degree. And I'm sure there are still a lot of domains and types of software where that tradeoff between doing it yourself or spelling it out isn't worth it.

Izkata 11 hours ago | parent | prev | next [-]

Based on what I've seen and heard, you have the happy path working and that's what the pro-AI people are describing with huge speedups. Figuring out and fixing the edge cases and failure modes is getting pushed into the review stage or even onto users, so it doesn't count towards the development time. It can even register as extra speed, if the AI generates more edge cases that then get "handled" quickly.

rich_sasha 10 hours ago | parent [-]

I'm not sure I agree with this approach, or at least it doesn't work in my area. It's like self driving cars. Having 90% reliability is almost as good as 0%. I have to be confident the thing is gonna work, correctly, or at worst fail predictably.

I can see that there are a lot of applications where things can just randomly fail and you retry / restart; that covers crashes.

But if the AI can't make it not crash, what's to say it does the right thing when it does succeed? Again, depends on the relative cost of errors etc.

stiiv 13 hours ago | parent | prev [-]

> On exception it exits dirtily and crashes, which is good enough for now

Silent failures and unexplained crashes are high on my list of things to avoid, but many teams just take them for granted in spite of the practical impact.

I think that a lot of orgs have a culture of "ship it and move on," accompanied by expectations like: QA will catch it, high turnover/lower-skill programmers commit stuff like this all the time anyway, or production code is expected to have some rough edges. I've been on teams like that, mostly in bigger orgs with high turnover and/or low engineering standards.