dataviz1000 | 3 hours ago
LLMs and the agents built on them are probabilistic, not deterministic: they accomplish a task some percentage of the time, never every time. Per-step reliability compounds, so the longer an agent runs on a task, the more likely it is to fail. Letting agents run like that will keep failing and burn a ton of token cash in the process. One thing LLM agents are good at is writing their own instructions. The trick is to cap the time and thinking steps in a thinking model, then evaluate, update the instructions, and run again. A good metaphor is that agents trip; don't let them run long enough to trip. It's better to let them run twice for 5 minutes than once for 10 minutes. Give it a few weeks and self-referencing agents will be at the top of everybody's Twitter feed.
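Rough back-of-the-envelope sketch of why short runs with a checkpoint beat one long run. The numbers are made up for illustration: assume each agent step succeeds independently with probability p, and that an evaluate/update pass between two short runs gives the second run a fresh chance.

```python
# Toy model: per-step success probability p, independent steps (an assumption).
def run_success_prob(p: float, steps: int) -> float:
    """Probability an agent completes `steps` steps without tripping."""
    return p ** steps

# One long run of 100 steps at 99% per-step reliability:
long_run = run_success_prob(0.99, 100)   # ~0.37

# Two short runs of 50 steps, with an evaluate/update pass between them
# letting the second run retry if the first trips (assumes independence):
short = run_success_prob(0.99, 50)       # ~0.61
two_tries = 1 - (1 - short) ** 2         # ~0.84

print(f"one long run: {long_run:.2f}, two short runs: {two_tries:.2f}")
```

The compounding is the whole point: even 99% per-step reliability collapses over enough steps, while splitting the work and re-evaluating in between more than doubles the odds in this toy setup.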
iterateoften | 3 minutes ago | parent
It’s also that agents and ML systems reach local maxima unless external feedback is given. So your wiki will reach a state and get stuck there.
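A toy sketch of that local-maximum behavior, with a made-up fitness landscape: a greedy climber that only looks at its own neighborhood (no external feedback) stops at whichever peak is nearest, not the best one.

```python
# Made-up discrete fitness landscape: local peak at index 2 (score 3),
# global peak at index 7 (score 9).
landscape = [0, 1, 3, 1, 0, 2, 5, 9, 4, 1]

def hill_climb(start: int) -> int:
    """Greedily move to a strictly better neighbor until none exists."""
    i = start
    while True:
        neighbors = [j for j in (i - 1, i + 1) if 0 <= j < len(landscape)]
        best = max(neighbors, key=lambda j: landscape[j])
        if landscape[best] <= landscape[i]:
            return i  # stuck: no neighbor improves the score
        i = best

print(hill_climb(1))  # 2 -> stuck on the local peak (score 3)
print(hill_climb(5))  # 7 -> happened to start near the global peak (score 9)
```

Without something external kicking it out of index 2 (new feedback, a restart, a changed objective), the climber has no reason to ever leave, which is the "wiki reaches a state and gets stuck there" failure mode.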