tadfisher 9 hours ago

Doing a tool call for autocomplete is not going to make coding agents faster.

I do think there is some merit in a tool that dumps all namespaces and reachable symbols so the agent can do its own autocomplete without a round-trip.
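A minimal sketch of that kind of symbol dump, using Python's standard `ast` module (an assumed implementation, not anything from the thread): walk a package once and list every top-level name, so the completion data sits in the agent's context instead of behind a per-keystroke tool call.

```python
# Sketch: dump all reachable top-level symbols in a source tree so an
# agent can "autocomplete" from context without a round-trip tool call.
import ast
from pathlib import Path

def reachable_symbols(root: str) -> dict[str, list[str]]:
    """Map each module path to its top-level functions, classes, and assignments."""
    symbols: dict[str, list[str]] = {}
    for path in Path(root).rglob("*.py"):
        tree = ast.parse(path.read_text(), filename=str(path))
        names: list[str] = []
        for node in tree.body:
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
                names.append(node.name)
            elif isinstance(node, ast.Assign):
                # Plain `NAME = ...` assignments at module scope.
                names.extend(t.id for t in node.targets if isinstance(t, ast.Name))
        symbols[str(path)] = names
    return symbols
```

A real version would also resolve imports and attribute chains (which is what a language server does), but even this flat dump gives the model a vocabulary of valid names up front.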

jameshart 5 hours ago | parent

Doesn’t need to be a tool call.

As a human coder you don’t summon intellisense. It just pops up in your visual field as extra input: contextual cues.

You could force intellisense state into the context vector the LLM receives.
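One way to picture that (a hypothetical sketch, not an existing system): before each generation step, splice the editor's current completion state into the prompt. `get_completions` below stands in for an LSP `textDocument/completion` query and `llm_generate` for one model call; both are assumptions for illustration.

```python
# Sketch: surface intellisense state as contextual cues in the prompt,
# the way it enters a human coder's visual field, rather than as a tool
# the model must explicitly call.
def completion_aware_step(code_so_far: str, get_completions, llm_generate) -> str:
    # Hypothetical: fetch symbols valid at the current cursor position.
    items = get_completions(code_so_far)
    # Inject them into the context the model sees for this step.
    prompt = (
        code_so_far
        + "\n# valid completions here: "
        + ", ".join(items[:20])
        + "\n"
    )
    return llm_generate(prompt)
```

The point of the sketch is that the completion state rides along in the context vector on every step, so no agent-loop round-trip is needed.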

foota an hour ago | parent

Not really, because the LLM loop can't receive live updates from the agent. It would have to be integrated all the way down the stack somehow.

jameshart an hour ago | parent

LLMs can have whatever abilities we build for them. The fact that we currently seed their context with a static prompt, which we keep feeding back in on every iteration of the token-prediction loop, is a choice. We don’t have to keep doing that if there are other options available.