Effective harnesses for long-running agents (anthropic.com)
37 points by diwank 3 hours ago | 8 comments
daxfohl 9 minutes ago

IME a dedicated testing/QA agent sounds nice but doesn't work, for the same reasons AI/human interaction doesn't: the more you try to push the original dev agent away from its own approach, the less chance there is that it will ever get to where you want it to be. Far more frequently it'll get stuck oscillating between two options, neither of which is what you want.

So adding a QA agent, while it sounds logical, just produces more of this. Rather than converging on a solution, the two agents get out of whack. Until that is solved, it's far better to have your dev agent be smart about doing its own QA.

The only way I could see the QA agent idea working now is if it had the power to roll back the entire change, reset the dev agent, update the task with some hints about things not to overlook, and trigger the dev process from scratch. But that seems pretty inefficient, and IDK if it would work any better.
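
Roughly the loop I have in mind - every helper here is a hypothetical stand-in, not a real API:

```python
from dataclasses import dataclass, field

@dataclass
class Verdict:
    approved: bool
    hints: list[str] = field(default_factory=list)

def qa_gated_dev_loop(task: str, dev_agent, qa_agent, vcs, max_attempts: int = 3):
    """dev_agent(task, hints) -> change, qa_agent(task, change) -> Verdict,
    and vcs (snapshot/rollback) are all hypothetical stand-ins."""
    hints: list[str] = []
    for _ in range(max_attempts):
        checkpoint = vcs.snapshot()        # remember the pre-change state
        change = dev_agent(task, hints)    # fresh attempt, old context reset
        verdict = qa_agent(task, change)   # QA reviews the finished change
        if verdict.approved:
            return change
        vcs.rollback(checkpoint)           # throw away the whole change
        hints.extend(verdict.hints)        # carry forward hints, not history
    raise RuntimeError("QA never approved; escalate to a human")
```

Note the QA agent never edits code; it only vetoes and annotates, so the two agents can't oscillate between competing approaches.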

roughly an hour ago

One of the things that makes it very difficult to have reasonable conversations about what you can do with LLMs is that the effort-to-outcome curve is basically exponential - with almost no effort, you can get 70% of the way there. That looks amazing, so people (mostly executives) see it and think, “this changes everything!”

The problem is the remaining 30% - the next 10-20% starts to require things like multi-agent judge setups, external memory, and context management, and that gets you to something that's probably working but that you sure shouldn't ship to production. As for the last 10% - I've seen agentic workflows with hundreds of different agents, multiple models, and fantastically complex evaluation frameworks trying to push the error rate below the ~10% mark. By that point, the infrastructure and LLM calls run to several hundred dollars per run, and you're still not getting guaranteed reliable output.
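
To make the middle tier concrete, the judge setups are all variations on a loop like this sketch, where generate() and the judges stand in for real LLM calls and the 0.9 bar is arbitrary:

```python
def generate_with_judges(prompt, generate, judges, max_retries=3):
    """Judge-and-retry sketch: each judge returns a (score, critique) pair.
    Every retry costs 1 + len(judges) extra LLM calls."""
    feedback = []
    draft = generate(prompt, feedback)
    for _ in range(max_retries):
        results = [judge(prompt, draft) for judge in judges]
        if all(score >= 0.9 for score, _ in results):  # arbitrary bar
            return draft
        feedback = [critique for _, critique in results]
        draft = generate(prompt, feedback)             # another paid call
    return draft  # out of retries, and still no guarantee
```

You can see where the cost goes: each pass through the loop multiplies the number of calls, and the exit condition is a score, not a proof.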

If you know what you’re doing and you know where to fit the LLMs (they’re genuinely the best system we’ve ever devised for interpreting and categorizing unstructured human input), they can be immensely useful, but they sing a siren song of simplicity that will lure you to your doom if you believe it.
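
For a flavor of that sweet spot, the sort of thing I mean can be as simple as this sketch, where complete() stands in for whatever chat-completion call you actually use and the categories are made up:

```python
CATEGORIES = ["billing", "bug report", "feature request", "other"]

def categorize_ticket(text: str, complete) -> str:
    """Classify unstructured user input into a fixed label set."""
    prompt = (
        "Classify the following support message into exactly one of "
        f"{CATEGORIES}. Reply with the category name only.\n\n{text}"
    )
    label = complete(prompt).strip().lower()
    return label if label in CATEGORIES else "other"  # constrain the output
```

One call, a closed output space, and a cheap fallback when the model wanders - that's the shape of problem they're reliably good at.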

zephyrthenoble 14 minutes ago

Yes, it's essentially the Pareto principle [0]. The LLM community has treated that first 80% as the difficult, complicated work, when it was essentially boilerplate. Allegedly LLMs have saved us from that drudgery, but I've personally found that (without the complicated setups you mention) the 80%-done project that gets one-shotted is in reality more like 50% done, because it's built on an unstable foundation, and the final 20% involves a lot of complicated reworking of the code. There's still plenty of value, but I think it's less than proponents would have you believe.

Anecdotally, I have found that even if you type out paragraph after paragraph describing everything you need the agent to take care of, by the time you can finally send the prompt off it feels like you could have written a lot of the code yourself with the help of a good IDE.

[0] https://en.wikipedia.org/wiki/Pareto_principle

morkalork 16 minutes ago

Just to get a frame of reference: how many people were involved, and over how much time, in building a workflow with hundreds of agents?

_boffin_ 41 minutes ago

…it really feels like they're attempting to reinvent a project tracker, and starting from scratch in thinking about it.

It feels like they’re a few versions behind what I’m doing, which is… odd.

Self-hosting a plane.io instance. Added a Plane MCP tool to my codex. Added workflow instructions to Agents.md covering standards, documentation, related work, labels, branch names, and comments before the plan, after the plan, at various steps of implementation, and as a summary before moving a ticket to Done. Creating new tickets and relating them to current or other work, etc…

It ain't that hard. Just do inception (high- to mid-level details), then create epics and tasks. Add personas, details, notes, acceptance criteria, and more. You can add comments yourself to update things. Whatever.

Slice tickets thin and then go wild. Add tickets as you're working through things. Make modifications.
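
Concretely, the workflow section of my Agents.md isn't much more than something like this (a made-up excerpt; the labels and conventions are just examples):

```
## Ticket workflow (excerpt)
- Before planning: read the Plane ticket and comment a restatement of the task.
- After planning: comment the plan on the ticket and wait for the approval label.
- Branch name: <ticket-id>-<short-slug>.
- During implementation: comment at each milestone; link related tickets.
- Before moving to Done: comment a summary of the changes and test results.
```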

Why so difficult?

CurleighBraces an hour ago

I wonder how good these agents would be with something like Cucumber and behaviour-driven development tools?
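
The appeal would be that a Gherkin spec gives the agent a machine-checkable target. A minimal behave-style sketch (the scenario and the context.app fixture are made up for illustration):

```python
# Step definitions for a Gherkin scenario along the lines of:
#   Scenario: returning user logs in
#     Given a registered user "alice"
#     When she logs in with the correct password
#     Then she sees her dashboard
from behave import given, when, then

@given('a registered user "{name}"')
def step_registered_user(context, name):
    context.user = context.app.register(name, password="hunter2")

@when("she logs in with the correct password")
def step_login(context):
    context.page = context.app.login(context.user.name, "hunter2")

@then("she sees her dashboard")
def step_dashboard(context):
    assert context.page.title == "Dashboard"
```

The agent's job then reduces to "make these steps pass", which is a much crisper loop than free-form prose acceptance criteria.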

dangoodmanUT 2 hours ago

> … the model is less likely to inappropriately change or overwrite JSON files compared to Markdown files.

Very interesting.

slurrpurr an hour ago

BDSM for LLMs