| ▲ | sothatsit 7 hours ago |
| They haven’t released this feature, so maybe they know the models aren’t good enough yet. I also think it’s interesting to see Anthropic continue to experiment at the edge of what models are capable of, and having it in the harness will probably let them fine-tune for it. It may not work today, but it might work at the end of 2026. |
|
| ▲ | daxfohl 6 hours ago | parent |
| True, though even then I wonder what the point is. Once they build an AI that's as good as a human coder but 1000x faster, parallelization no longer buys you anything. Writing and deploying the code is no longer the bottleneck, so the extra coordination that parallelism requires seems like added cost and risk with no practical benefit. |
| ▲ | sothatsit 5 hours ago | parent | next |
| Each agent getting its own fresh context window for each task is probably, on its own, a good way to improve quality. And I can imagine agents reviewing each other's work improving quality further, like how GPT-5 Pro improves upon GPT-5 Thinking. Roughly, I picture something like the sketch below. |
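| A minimal sketch of that idea, assuming a generic chat-completion call; `llm_complete` is a hypothetical stand-in and the prompts are made up. Each task starts from an empty message list, and the reviewer sees only the task and the draft, not the drafting agent's context: |

    # Sketch: fresh context per task, plus an independent review pass.
    # `llm_complete` is a hypothetical stand-in for any chat-completion API.

    def llm_complete(messages: list[dict]) -> str:
        raise NotImplementedError  # placeholder for a real model call

    def run_task(task: str) -> str:
        # Fresh context: no residue from other tasks in the window.
        draft = llm_complete([
            {"role": "system", "content": "You are a careful coder."},
            {"role": "user", "content": task},
        ])
        # A second agent reviews with its own fresh context: it sees
        # only the task and the draft, not how the draft was produced.
        return llm_complete([
            {"role": "system", "content": "Review this work for mistakes."},
            {"role": "user", "content": f"Task: {task}\n\nDraft:\n{draft}"},
        ])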
| ▲ | nojs 5 hours ago | parent | prev |
| It's more about context management than speed. |
| ▲ | xyzsparetimexyz 4 hours ago | parent |
| Do you really need a full dev team ensemble to manage context? Surely subagents are enough. |
| ▲ | TeMPOraL 3 hours ago | parent |
| Potato, potahto. People get confused by all this agent talk and forget that, at the end of the day, LLM calls are effectively stateless. It's all abstractions around how to manage the context you send with each request. |
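| To make that concrete, here's a minimal sketch, again with a hypothetical `llm_complete` standing in for any chat-completion API. Every "turn" of a conversation just replays the whole accumulated message list, because the server keeps no session state between calls: |

    # Sketch: "chat" as an abstraction over stateless completion calls.
    # `llm_complete` is a hypothetical stand-in for a real API call.

    def llm_complete(messages: list[dict]) -> str:
        raise NotImplementedError  # placeholder for a real model call

    history = [{"role": "system", "content": "You are a coding assistant."}]

    def chat_turn(user_input: str) -> str:
        # The model remembers nothing between calls, so the full
        # history is resent on every request.
        history.append({"role": "user", "content": user_input})
        reply = llm_complete(history)
        history.append({"role": "assistant", "content": reply})
        return reply

    # Agents, subagents, compaction, and summaries are just different
    # policies for deciding what goes into `history` before each call.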
|
|
|