epolanski 2 days ago

Look, I like AI coding but we're already way past the need for parallelism.

LLMs write so much code in such a short time that the bottleneck is already the human having to review, correct, and rewrite it.

Parallel agents working on different parts of the application just compound this problem; it becomes impossible to catch up.

The only far-fetched use case I can see is swarming hundreds of solutions against properly designed test cases and spec documents and having an agent select the best ones.
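In case it helps, here is a minimal sketch of that best-of-N "swarm and select" idea: fan out many candidate solutions for a fixed spec, score each against the test suite, and let a selector pick the winner. Every name in it (generate_candidate, run_test_suite, pick_best) is an illustrative stand-in, not any particular vendor's API.

```python
# Hypothetical best-of-N swarm: generate candidates for one spec, score each
# against the test suite, and select the highest-scoring one.
import concurrent.futures
import random

SPEC = "Implement parse_duration('1h30m') -> seconds, per SPEC.md"

def generate_candidate(spec: str, seed: int) -> str:
    """Stand-in for an LLM call that returns a candidate patch for the spec."""
    return f"# candidate patch {seed} for: {spec}"

def run_test_suite(patch: str) -> float:
    """Stand-in for applying the patch and running the real test suite.
    Returns a score in [0, 1], e.g. the fraction of tests passed."""
    return random.random()

def pick_best(scored: list[tuple[str, float]]) -> tuple[str, float]:
    """Selector: plain argmax on test score here; a real setup might add
    an agent review pass over the top few candidates."""
    return max(scored, key=lambda pair: pair[1])

def swarm(spec: str, n: int = 100) -> tuple[str, float]:
    # Fan out candidate generation, then score and select.
    with concurrent.futures.ThreadPoolExecutor(max_workers=16) as pool:
        patches = list(pool.map(lambda s: generate_candidate(spec, s), range(n)))
    scored = [(p, run_test_suite(p)) for p in patches]
    return pick_best(scored)

if __name__ == "__main__":
    best_patch, score = swarm(SPEC)
    print(f"selected candidate with score {score:.2f}")
```

Even in this form, a human still has to review whatever the selector picks, which is the bottleneck being described.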

Still, I'm quite convinced humans would be the bottleneck.

rcarr 2 days ago | parent | next [-]

You are the main thread:

https://www.claudelog.com/mechanics/you-are-the-main-thread/

SatvikBeri 2 days ago | parent | prev | next [-]

It really depends on the project. For example, there's a lot of thorny devops debugging where I can just let Claude spin for 30 minutes and it'll solve the problem (or fail) with a relatively short final answer.

The sweet spot for me tends to be running one of these slower projects on a worktree in the background, and one more active coding project.
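A minimal sketch of that background-worktree setup, for anyone unfamiliar with it: spin up a separate git worktree on a throwaway branch so a long-running agent session can churn in isolation while the main checkout stays free for active work. The git commands are real; the branch name is just an example, and what you actually launch inside the worktree (an agent CLI, a test loop) is left out.

```python
# Create and tear down an isolated git worktree for a background task.
import subprocess
from pathlib import Path

def add_worktree(repo: Path, branch: str) -> Path:
    """Create ../<branch> as a new worktree on a fresh branch of the same repo."""
    path = repo.parent / branch
    subprocess.run(
        ["git", "worktree", "add", "-b", branch, str(path)],
        cwd=repo, check=True,
    )
    return path

def remove_worktree(repo: Path, path: Path) -> None:
    """Remove the worktree once the background task is merged or abandoned."""
    subprocess.run(["git", "worktree", "remove", str(path)], cwd=repo, check=True)

if __name__ == "__main__":
    repo = Path(".").resolve()
    wt = add_worktree(repo, "devops-debugging")  # example branch name
    print(f"background session can now run in {wt}")
```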

epolanski 2 days ago | parent [-]

Yeah sure, I mean, there will always be problems you can swarm.

OutOfHere 2 days ago | parent | prev [-]

Exactly. With models other than Claude, code generation for an issue takes a minute at most, whereas writing the detailed specification for it as a human takes me days or longer. Parallel code generation is as relevant to me as having a fast car stuck in traffic at a red light.