| ▲ | ottah 4 hours ago |
| I absolutely cannot trust Claude Code to independently work on large tasks. Maybe other people work on software that isn't significantly complex, but for me to maintain code quality I need to guide more of the design process. Teams of agents just sound like adding a lot more review and refactoring that could be avoided by going slower and thinking carefully about the problem. |
|
| ▲ | nickstinemates 2 hours ago | parent | next [-] |
| You write a generic architecture document describing how you want your code base to be organized, when to use pattern x vs pattern y, with examples of what that looks like in your code base, and you encode this as a skill. Then, in your prompt, you state the task you want and tell it to supervise the implementation with a sub agent that follows the architecture skill and to evaluate any proposed changes. There are people who take this to its limit, and that is how you get things like teams: you make agents for planning, design, QA, product, engineering, review, release management, etc., and you get them to operate and coordinate to produce an outcome. That's what this is supposed to be: that workflow encoded as a feature instead of a best practice. |
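As an illustration of the "supervise with a sub agent" step above, here is a minimal sketch of the shape of that check if you wired it up by hand. Everything named here is hypothetical: the skills/architecture.md path, the ask(system_prompt, user_prompt) helper, and the APPROVE/REJECT convention are assumptions for the sketch, not Claude Code APIs.

```python
from pathlib import Path
from typing import Callable

def review_against_architecture(
    proposed_diff: str,
    ask: Callable[[str, str], str],              # ask(system_prompt, user_prompt) -> reply
    skill_path: str = "skills/architecture.md",  # hypothetical location of the architecture skill
) -> tuple[bool, str]:
    """Ask a reviewer sub-agent to judge a proposed change against the
    architecture document; returns (approved, feedback)."""
    architecture = Path(skill_path).read_text()
    reply = ask(
        "You are a reviewer sub-agent. Evaluate the proposed change strictly "
        "against the architecture document. Start your reply with APPROVE or "
        "REJECT, then explain your reasoning.",
        f"Architecture document:\n{architecture}\n\nProposed change:\n{proposed_diff}",
    )
    return reply.startswith("APPROVE"), reply
```

The point of the pattern, at least in this sketch, is that the reviewer only ever sees the architecture rules plus the diff, so its context stays small and focused on enforcement rather than implementation.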
| |
| ▲ | satellite2 2 hours ago | parent | next [-] | | Aren't you just moving the problem one step further? If you can't trust it to implement carefully specified features, why would you believe it would properly review them? | | |
| ▲ | frde_me an hour ago | parent [-] | | It's hard to explain, but I've found LLMs to be significantly better in the "review" stage than in the implementation stage. So the LLM will do something and not catch at all that it did it badly. But the same LLM, asked to review against the same starting requirement, will catch the problem almost always. The missing thing in these tools is an automatic feedback loop between the two LLMs: one in review mode, one in implementation mode. | | |
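A rough sketch of that feedback loop, assuming only a generic ask(system_prompt, user_prompt) callable (hypothetical, not tied to any particular agent framework): the same model is alternated between implementation mode and review mode until the reviewer stops finding problems.

```python
from typing import Callable

def implement_with_review(
    requirement: str,
    ask: Callable[[str, str], str],  # ask(system_prompt, user_prompt) -> reply (hypothetical)
    max_rounds: int = 3,
) -> str:
    """Alternate implementation mode and review mode until the reviewer
    approves or the round budget runs out."""
    draft = ask("You are the implementer. Write code that satisfies the requirement.",
                requirement)
    for _ in range(max_rounds):
        review = ask(
            "You are the reviewer. Check the code strictly against the original "
            "requirement. List concrete problems, or reply with only APPROVED.",
            f"Requirement:\n{requirement}\n\nCode:\n{draft}",
        )
        if review.strip() == "APPROVED":
            break
        draft = ask(
            "You are the implementer. Revise the code to address every review point.",
            f"Requirement:\n{requirement}\n\nCode:\n{draft}\n\nReview:\n{review}",
        )
    return draft
```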
| ▲ | resonious an hour ago | parent [-] | | I've noticed this too and am wondering why this hasn't been baked into the popular agents yet. Or maybe it has and it just hasn't panned out? | | |
| ▲ | bashtoni an hour ago | parent [-] | | Anecdotally, I think this is in Claude Code. It's pretty frequent to see it implement something, then declare it "forgot" a requirement and go back and alter or add to the implementation. |
|
|
| |
| ▲ | tclancy 2 hours ago | parent | prev [-] | | How does this not use up tokens incredibly fast though? I have a Pro subscription and bang up against the limits pretty regularly. | | |
| ▲ | doctoboggan 2 hours ago | parent | next [-] | | It _does_ use up tokens incredibly fast, which is probably why Anthropic is developing this feature. This is mostly for corporations using the API, not individuals on a plan. | | |
| ▲ | digdugdirk 2 hours ago | parent [-] | | I'd love to see a breakdown of the token consumption of inaccurate/errored/unused task branches for Claude Code and Codex. It seems like a great revenue source for the model providers. | | |
| ▲ | shafyy 2 hours ago | parent [-] | | Yeah, that's what I was thinking. They do have an incentive to not get everything right on the first try, as long as they don't overdo it... I also feel like they try to get more token usage by asking unnecessary follow-up questions that the user may say yes to, etc. |
|
| |
| ▲ | andyferris 2 hours ago | parent | prev [-] | | It does use tokens faster, yes. |
|
|
|
| ▲ | aqme28 3 hours ago | parent | prev | next [-] |
| I agree, but I've found that making an "adversarial" model within Claude helps with the quality a lot. One agent makes the change, the other picks holes in it, and they cycle. In the end, I'm left with less to review. This sounds more like an automation of that idea than just N-times the work. |
| |
| ▲ | Keyframe 2 hours ago | parent | next [-] | | Glad I'm not the only one. I do the same, but I tend to have Gemini be the one that critiques. | |
| ▲ | diego898 2 hours ago | parent | prev [-] | | Do you do this manually? Or with some abstraction above that: skills, some light orchestration, etc.? | | |
| ▲ | aqme28 2 hours ago | parent [-] | | I just tell it to do so, but you could even add that as a requirement to CLAUDE.md |
|
|
|
| ▲ | turtlebits 3 hours ago | parent | prev | next [-] |
| Humans can't handle large tasks either, which is why you break them into manageable chunks. Just ask Claude to write a plan and review/edit it yourself. Add success criteria/tests for better results. |
|
| ▲ | stpedgwdgfhgdd 3 hours ago | parent | prev | next [-] |
| Exactly. One out of every three or four prompts requires tuning, nudging, or just stopping it. However, it takes seniority to see where it goes astray. I suspect that lots of folks don't even notice that CC is off. It works, it passes the tests, so it is good. |
|
| ▲ | nprz 4 hours ago | parent | prev | next [-] |
| There is research[0] currently being done on how to divide tasks among LLMs and combine their answers. This approach lets LLMs reach outcomes (e.g. solving a problem that requires 1 million steps) that would be impossible otherwise. [0] https://arxiv.org/abs/2511.09030 |
| |
| ▲ | woah 3 hours ago | parent | next [-] | | All they did was prompt an LLM over and over again to execute one iteration of a Tower of Hanoi algorithm, literally just using it as a glorified scripting language:

```
Rules:
- Only one disk can be moved at a time.
- Only the top disk from any stack can be moved.
- A larger disk may not be placed on top of a smaller disk.

For all moves, follow the standard Tower of Hanoi procedure:
If the previous move did not move disk 1, move disk 1 clockwise one peg (0 -> 1 -> 2 -> 0).
If the previous move did move disk 1, make the only legal move that does not involve moving disk 1.
Use these clear steps to find the next move given the previous move and current state.

Previous move: {previous_move}
Current State: {current_state}

Based on the previous move and current state, find the single next move that follows the procedure and the resulting next state.
```

This is buried down in the appendix, while the main paper is full of agentic-swarms-this and millions-of-agents-that, plus plenty of fancy math symbols and graphs. Maybe there is more to it, but the fact that they decided to publish with such a trivial task, which could be much more easily accomplished by having an LLM write a simple Python script, is concerning. | | | |
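For reference, the procedure quoted above fits in a short, ordinary Python function. This is a sketch of the kind of script the comment means; the peg labels and state representation are my own choices, not taken from the paper.

```python
def solve_hanoi(n_disks: int) -> list[tuple[int, int, int]]:
    """Iterative Tower of Hanoi: alternate moving disk 1 clockwise
    (0 -> 1 -> 2 -> 0) with the only legal move that does not touch disk 1."""
    # pegs[i] holds the disks on peg i, with the top of the stack at the end of the list.
    pegs = [list(range(n_disks, 0, -1)), [], []]
    moves = []
    for step in range(2 ** n_disks - 1):
        if step % 2 == 0:
            # Every other move (starting with the first) moves disk 1 clockwise.
            src = next(i for i, p in enumerate(pegs) if p and p[-1] == 1)
            dst = (src + 1) % 3
        else:
            # Otherwise, make the only legal move between the two pegs not topped by disk 1.
            a, b = [i for i, p in enumerate(pegs) if not p or p[-1] != 1]
            if not pegs[a] or (pegs[b] and pegs[b][-1] < pegs[a][-1]):
                src, dst = b, a
            else:
                src, dst = a, b
        disk = pegs[src].pop()
        pegs[dst].append(disk)
        moves.append((disk, src, dst))
    return moves

# With disk 1 always moving clockwise, the full stack ends up on peg 2 for an
# even number of disks and on peg 1 for an odd number, after 2**n - 1 moves.
print(solve_hanoi(3))
```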
| ▲ | ottah 4 hours ago | parent | prev [-] | | No offense to the academic profession, but they're not a good source of advice on best practices in commercial software development. They don't have the experience or the knowledge to understand my workplace and tasks. Their skill set and job are orthogonal to the corporate world. | | |
| ▲ | nprz 4 hours ago | parent [-] | | Yes, the problem solved in the paper (Tower of Hanoi) is far more easily defined than 99% of the actual problems you would find in commercial software development. Still, it's a proof that this is theoretically possible, and it seems like an interesting area of research. |
|
|
|
| ▲ | findjashua 2 hours ago | parent | prev | next [-] |
| You need a reviewer agent for every step of the process: review the plan generated by the planner, review each update made by the task worker subagent, and run a final review once all tasks are done. This does eat up tokens _very_ quickly though :( |
|
| ▲ | BonoboIO 4 hours ago | parent | prev [-] |
| You definitely have to create some sort of PLAN.md and PROGRESS.md via a command, plus an implement command that delegates work. That is the only way I can get bigger things done, no matter how "good" their task feature is. You run out of context so quickly, and if you don't have some kind of persistent guidance, things go south. |
| |
| ▲ | ottah 4 hours ago | parent | next [-] | | It's not sufficient, especially if I am not learning about the problem by being part of the implementation process. The models are still very weak reasoners, and writing code faster doesn't accelerate my understanding of the code the model wrote. Even with clear specs I am constantly fighting with it duplicating methods, writing ineffective tests, or implementing unnecessarily complex solutions. AI just isn't a better engineer than me, and that makes it a weak development partner. | |
| ▲ | vonneumannstan 2 hours ago | parent [-] | | >AI just isn't a better engineer than me, and that makes it a weak development partner. This would also be true of Junior Engineers. Do you find them impossible to work with as well? |
| |
| ▲ | koakuma-chan 4 hours ago | parent | prev [-] | | I tried doing that and it didn't work. It still adds "fallbacks" that just hide errors or hide the fact that there is no actual implementation ("In a real app, we would do X, just return null for now"). |
|