vghaisas 10 hours ago
This is very cool, but I'm having some trouble understanding the use cases. Is this mostly for code mode, where MCP calls instead go through a Monty function call? Is it for doing some quick maths or pre/post-processing to answer queries? Or maybe to implement CaMeL? It feels like the power of terminal agents comes partly from their access to the network and filesystem, so sandboxed containers are a natural extension?
16bitvoid 9 hours ago | parent
It's right there in the README:

> Monty avoids the cost, latency, complexity and general faff of using a full container-based sandbox for running LLM generated code.

> Instead, it lets you safely run Python code written by an LLM embedded in your agent, with startup times measured in single-digit microseconds, not hundreds of milliseconds.
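The "code mode" pattern mentioned above can be sketched in plain Python. This is not Monty's actual API: the host exposes a few functions in place of individual MCP tool calls, the LLM writes a short snippet that calls them, and the snippet runs in a restricted namespace. The `exec`-based stand-in here is only an illustration of the control flow, not a real sandbox, and `fetch_user_count` is a hypothetical tool invented for the example.

```python
# Conceptual stand-in for "code mode": LLM-written Python calls
# host-exposed tool functions instead of issuing one MCP call per tool.
# NOTE: exec() is NOT a real sandbox; Monty's point is to provide a
# safe, fast embedded interpreter for this step.

def run_llm_snippet(code: str, tools: dict):
    """Execute LLM-written code with only the given tools in scope."""
    namespace = {"__builtins__": {}, **tools}  # no ambient builtins
    exec(code, namespace)
    return namespace.get("result")

# Hypothetical tool the host exposes to the snippet:
def fetch_user_count() -> int:
    return 42

llm_code = "result = fetch_user_count() * 2"
print(run_llm_snippet(llm_code, {"fetch_user_count": fetch_user_count}))  # → 84
```

The appeal of a purpose-built interpreter over this kind of `exec` hack is that the sandbox boundary is real, and startup is cheap enough to run a snippet per agent step.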