qazxcvbnmlp 6 hours ago

My mental model is that AI coding tools are machines that take a set of constraints and turn them into a piece of code. The better you get at having the tool give itself those constraints accurately, the higher-level the task you can focus on.

E.g. compiler errors, unit tests, MCP, etc.
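The idea of machine-checkable constraints driving the model can be sketched as a simple feedback loop: run the checks (compiler, unit tests, linter), feed the failures back, repeat. The `check` and `revise` functions below are stand-ins I made up for illustration; `revise` is where a real model call would go.

```python
from typing import Callable

def constraint_loop(code: str,
                    check: Callable[[str], list[str]],
                    revise: Callable[[str, list[str]], str],
                    max_rounds: int = 5) -> str:
    """Iteratively tighten code against machine-checkable constraints.

    `check` returns a list of violation messages (compiler errors,
    failing unit tests, linter output); `revise` stands in for the
    model call that rewrites the code given those messages.
    """
    for _ in range(max_rounds):
        errors = check(code)
        if not errors:          # all constraints satisfied
            return code
        code = revise(code, errors)
    return code                 # best effort after max_rounds

# Toy demonstration with hypothetical stand-in check/revise functions:
def check(code: str) -> list[str]:
    return [] if "return" in code else ["function must return a value"]

def revise(code: str, errors: list[str]) -> str:
    return code + "\n    return x"

fixed = constraint_loop("def f(x):\n    pass", check, revise)
```

In practice `check` would shell out to the real toolchain; the point is only that each tool adds constraints the model must satisfy before the loop exits.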

I've heard of these, but haven't tried them yet.

https://github.com/hmans/beans

https://github.com/steveyegge/gastown

Right now I spend a lot of “back pressure” on fitting the scope of the task into something that fits in one context window (i.e. the useful computation, not the raw token count). I suspect we will see a large breakthrough when someone finally figures out a good system for having the LLM do this.
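One naive way to automate that scoping is greedy packing: estimate each work unit's token cost and group units into chunks under a budget. This is a sketch under an assumed heuristic of roughly 4 characters per token; a real system would use the model's actual tokenizer, and the function name is mine.

```python
def chunk_by_budget(units: list[str], token_budget: int) -> list[list[str]]:
    """Greedily pack work units (files, functions, subtasks) into chunks
    that each fit an estimated token budget."""
    est = lambda s: max(1, len(s) // 4)  # rough ~4 chars/token heuristic
    chunks: list[list[str]] = []
    current: list[str] = []
    used = 0
    for u in units:
        cost = est(u)
        if current and used + cost > token_budget:
            chunks.append(current)       # close the chunk and start fresh
            current, used = [], 0
        current.append(u)
        used += cost
    if current:
        chunks.append(current)
    return chunks
```

Of course, the hard part the comment is pointing at is estimating the *useful computation* a task needs, not its raw token count, which a character heuristic can't capture.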

AnonyX387 3 hours ago | parent [-]

> Right now I spend a lot of “back pressure” on fitting the scope of the task into something that fits in one context window (i.e. the useful computation, not the raw token count). I suspect we will see a large breakthrough when someone finally figures out a good system for having the LLM do this.

I've found https://github.com/obra/superpowers very helpful for breaking the work up into logical chunks a subagent can handle.

nonethewiser 3 hours ago | parent [-]

How would you compare it to Claude Code in planning mode?

AnonyX387 an hour ago | parent [-]

I've only used Claude's planning mode when I first started using Claude Code, so I may have been using it wrong at the time. But superpowers is far more helpful at picking up that you want to build or modify something and brainstorming with you interactively toward a solid spec, suggesting multiple options where applicable. That produces a design doc and an implementation doc, and it can then coordinate subagents to implement the different features, followed by spec review and code review. Really impressed with it; I use it for anything non-trivial.