vorticalbox 5 hours ago

Could we not instruct the LLM to run build commands in a sub-agent, which could then just return a summary of what happened?

This avoids having to update everything to support LLM=true and keeps your current context window free of noise.

vidarh 5 hours ago | parent | next [-]

Make (or whatever) targets that direct output to a file and return a subset have helped me quite a bit. Then wrap that in an agent that also knows how and when to return cached and filtered data from the output vs. rerunning. Fewer tokens are spent reading output details that usually won't matter, coupled with less context pollution in the main agent from figuring out what to do.
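A minimal sketch of that pattern (the `run_logged` name and the five-line tail are illustrative choices, not from the comment): the full output goes to a log file on disk, and only the tail reaches the agent.

```shell
# Sketch of a log-to-file wrapper a Make target could call: full output
# is kept on disk for later filtering, only the tail is printed back.
run_logged() {
  local log=$1; shift           # first arg: log file; rest: the command
  "$@" >"$log" 2>&1             # full stdout+stderr goes to the log
  local ec=$?
  tail -5 "$log"                # only the last few lines reach the agent
  return $ec                    # preserve the command's exit code
}
```

An agent (or a caching wrapper around it) can then grep or re-tail the log later without rerunning the build.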

canto 5 hours ago | parent | prev | next [-]

    q() {
      local output
      output=$("$@" 2>&1)
      local ec=$?
      echo "$output" | tail -5
      return $ec
    }

There :)

dizzy3gg 5 hours ago | parent | prev [-]

That would achieve 1 of the 3 wins.

wongarsu 5 hours ago | parent | next [-]

If you use a smaller model for the sub-agent, you get all three.

Of course you can combine both approaches for even greater gains. But Claude Code and its handful of alternatives gaining an efficient tool-calling paradigm, where console output is interpreted by Haiku instead of Opus, seems like a much quicker win than adding an LLM env flag to every CLI tool under the sun.
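A rough sketch of what that could look like as a shell tool, under the assumption that some `summarize` command fronts a cheaper model. The stub below just keeps the tail; a real setup might pipe the log to a small-model CLI instead.

```shell
# Hypothetical summarizer sub-agent wrapper: the build's full output never
# enters the main agent's context; only a summary does. `summarize` is a
# stand-in -- a trivial tail here, but in a real setup it could send the
# log to a cheap model (e.g. Haiku) and print its reply.
summarize() { tail -3; }

build_with_summary() {
  local log
  log=$(mktemp)
  "$@" >"$log" 2>&1             # full build output stays out of context
  local ec=$?
  summarize <"$log"             # only the summary is returned
  rm -f "$log"
  return $ec                    # keep the build's exit code
}
```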

noname120 5 hours ago | parent | prev [-]

Probably the main one; people mostly complain about context window management rather than token usage.

Bishonen88 5 hours ago | parent [-]

Dunno about that. Having used the $20 Claude plan, I ran out of tokens within 30 minutes when running 3-4 agents at the same time. Oftentimes, all 3-4 will run a build command at the end to confirm that the changes are successful, so the token loss quickly gets out of hand.

Edit: Just remembered that sometimes I see Claude running the build step in two terminals, side by side, at nearly the same time :D