amortka 12 hours ago

The real bottleneck isn’t human review per se, it’s unstructured review. Parallel agents only make sense if each worktree has a tight contract: scoped task, invariant tests, and a diff small enough to audit quickly. Without that, you’re just converting “typing time” into “reading time,” which is usually worse. Tools like this shine when paired with discipline: one hypothesis per agent, automated checks gate merges, and humans arbitrate intent—not correctness.
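The "automated checks gate merges" part doesn't have to be elaborate, either. Even a two-step gate per worktree does it; the branch name and test command below are made up, but the shape is the point:

    # in the agent's worktree: is the diff small enough to audit, and do the checks pass?
    git diff --stat main...HEAD
    make test || exit 1                       # "make test" stands in for your invariant suite

    # in the main checkout: merge only if the gate above passed
    git merge --ff-only agent/one-hypothesis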

hoakiet98 9 hours ago | parent [-]

Agreed. I generally see much better results for smaller, well-scoped tasks. Since there's very little friction to spinning up a worktree (~2s), I open one for any small task, something I couldn't do while working on a single branch.
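The underlying git plumbing is just a couple of commands (path and branch name here are placeholders):

    # from the main checkout: spin up an isolated checkout for one scoped task
    git worktree add ../repo-fix-typo -b fix-typo   # ~2s, shares the same object store
    # point the agent at ../repo-fix-typo, review and merge its branch as usual, then:
    git worktree remove ../repo-fix-typo            # clean up the extra checkout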

senordevnyc 6 hours ago | parent [-]

I currently prefer Cursor to CC. Does Superset play well with Cursor too? Is this a replacement for their worktree feature?

I haven't set up worktrees yet, so if I have a quick task while working in main, I currently just spin up another agent in plan mode and then execute the plans serially. Running them in parallel would be really nice, though. I often have 5-10 agents with completed plans, and I'm just slogging through executing them one at a time.