16bitvoid 9 hours ago
It's right there in the README:

> Monty avoids the cost, latency, complexity and general faff of using a full container based sandbox for running LLM generated code. Instead, it lets you safely run Python code written by an LLM embedded in your agent, with startup times measured in single digit microseconds, not hundreds of milliseconds.
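For a rough feel of what "embedded in your agent" means, here is a toy sketch in plain Python (this is NOT Monty's API, and restricting `exec` like this is famously *not* a real security boundary — escaping it is well documented, which is exactly why projects like Monty build a separate interpreter instead). It only illustrates the in-process shape: startup is a function call, not a container spin-up.

```python
# Hypothetical illustration only: run a snippet of (notionally LLM-generated)
# Python in-process with a whitelist of builtins. Not secure; not Monty.
SAFE_BUILTINS = {"len": len, "range": range, "sum": sum, "print": print}

def run_snippet(code: str) -> dict:
    # __import__ is absent from the builtins dict, so `import os` raises
    # ImportError inside the snippet.
    env = {"__builtins__": SAFE_BUILTINS}
    exec(compile(code, "<llm>", "exec"), env)
    env.pop("__builtins__", None)
    return env  # variables the snippet defined

result = run_snippet("total = sum(range(10))")
print(result["total"])  # → 45
```

The point of the sketch is the cost model: the "sandbox" here is just a dict, so there is nothing to boot — which is the latency win the README claims, achieved properly by Monty via its own restricted interpreter rather than this leaky `exec` trick.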
vghaisas 4 hours ago (parent)
Oh, I did read the README, but I still have the question: while it does save on cost, latency, and complexity, the tradeoff is that agents can't run whatever they want in a sandbox, which makes them less capable too.