foota 8 hours ago
Not really, because the LLM loop doesn't have the ability to get updates from the agent live. It would have to somehow be integrated all the way down the stack.
jameshart 8 hours ago | parent
LLMs can have whatever abilities we build for them. The fact that we currently start their context with a static prompt, which we re-feed on every iteration of the token prediction loop, is a choice. We don't have to keep doing that if other options are available.
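One way to picture the alternative being suggested: instead of re-feeding a fixed prompt, the loop could drain live agent events into the context between prediction steps. A minimal sketch, with a stub standing in for the model call (the `fake_llm` function and event strings are illustrative assumptions, not any real API):

```python
from collections import deque

def fake_llm(context: str) -> str:
    # Stand-in for a model call: acknowledges the last context line it saw.
    last = context.strip().splitlines()[-1]
    return f"ack: {last}"

def run_loop(static_prompt: str, updates: deque, steps: int) -> list[str]:
    # Rather than restarting from a static prompt each iteration, splice in
    # whatever live updates arrived from the agent since the last step.
    context = static_prompt
    outputs = []
    for _ in range(steps):
        while updates:                      # drain fresh agent events
            context += "\n" + updates.popleft()
        outputs.append(fake_llm(context))
        context += "\n" + outputs[-1]       # model output also joins the context
    return outputs

updates = deque(["event: build started", "event: build passed"])
print(run_loop("system: watch the agent", updates, steps=2))
```

The point isn't the stub itself but the shape of the loop: the context is a mutable channel the agent can write into mid-run, not a frozen prefix.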