stantonius 17 hours ago

This happened to me too, in an experimental project where I was testing how far the model could go on its own. Despite making progress, I can't bear to look at the thing now. I don't even know what questions to ask the AI to get back into it; that's how disconnected from it I am. It's exhausting to even think about picking it back up; I'd rather just start from scratch.

The fascinating thing was how easy it was to lose control. I would set up the project with strict rules and md files and tell myself to stay fully engaged, but out of nowhere I'd slide into compulsive-accept mode, or worse, tell the model to blatantly ignore the rules I'd set out myself. I knew better, yet it happened over and over. Ironically, it was as if my context window was so full of "successes" that I forgot my own rules; I reward-hacked myself.

Maybe it just takes practice and better tooling and guardrails. And maybe these are the growing pains of a new programming mindset. But it left me a little shy about trying full delegation any time soon, certainly not without a complete reset in how I approach it.

parpfish 17 hours ago

I’ll chime in to say that this happened to me as well.

My project would start out well, but eventually it would end up in a state where nothing could be fixed and the agent would burn tokens going in circles on little bugs.

So I’d tell the agent to come up with a comprehensive refactoring plan that would allow the issues to be recast in more favorable terms.

I’d burn a ton of tokens to refactor, little bugs would get fixed, but it’d inevitably end up going in circles on something new.

danabramov 16 hours ago

Curious if you have thoughts on the second half of the post? That’s exactly what the author is suggesting a strategy for.

majormajor 10 hours ago

"Test the tests" is a big ask for many complex software projects.

Most human-driven coding and testing takes heavy advantage of being white-box.

For open-ended complex-systems development, turning everything into black-box testing is hard. The LLMs, as noted in the post, are good at trying a lot of shit and inadvertently discovering stuff that passes incomplete tests without fully working. Or, if you're in straight-up yolo mode, fucking up the test itself because it misunderstood the assignment, which is my personal favorite.

We already know it's very hard to have exhaustive coverage for unexpected input edge cases, for instance. The stuff of a million security bugs.
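To make that concrete, here's a contrived Python sketch (a hypothetical parse_duration, not anything from the post): the black-box suite only covers the happy path, so a "loop until the tests pass" agent would declare victory on an implementation that falls over on anything slightly off.

    # Contrived sketch: a happy-path-only test suite that an incomplete
    # implementation passes cleanly.
    def parse_duration(text: str) -> int:
        """Parse a duration like '5m' or '2h' into seconds."""
        value, unit = int(text[:-1]), text[-1]
        return value * {"m": 60, "h": 3600}[unit]

    def test_parse_duration_happy_path():
        assert parse_duration("5m") == 300
        assert parse_duration("2h") == 7200

    # Both asserts pass, so an "iterate until green" loop stops here.
    # Meanwhile none of these inputs are covered, and all of them misbehave:
    #   parse_duration("90s")  -> KeyError, the 's' unit was never handled
    #   parse_duration("")     -> ValueError from int("")
    #   parse_duration("-5m")  -> -300, negative durations sail through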

As the combinatorial surface of "all possible actions that can be taken in the system, in all possible orders" grows with everything you build into the system, so does the difficulty of relying on LLMs looping over prompts until the tests go green.