redwood 2 hours ago
How does this scale? For example, if I were to have hundreds or thousands of concurrent agents running, with some parts of their data pulled from shared state and other parts specific to that particular agent run, and I wanted all of this preserved for future collective or individual agent use, is this a reasonable primitive for that problem space? Or is this more for a situation where you have one or a small number of productivity-assistant agents that need a sandbox, but with low data-mutation throughput and little concurrent access across different agents?
ozkatz an hour ago | parent
It should absolutely scale to that. The filesystem is backed by lakeFS: every sandbox automatically branches out and mounts that branch, so you get isolation from lakeFS and the scale of the underlying object store (S3, in Tilde).
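To see why per-sandbox branching scales to thousands of agents, here is a toy copy-on-write sketch (not the actual lakeFS implementation; all names are illustrative). The key property is that creating a branch copies no data: a branch is just an overlay over shared state, so each agent gets isolated writes while still reading the shared baseline, and a merge publishes its results back.

```python
# Toy copy-on-write branching model, loosely inspired by branch-per-sandbox
# designs like the one described above. Illustrative only.

class Repo:
    def __init__(self):
        self.shared = {}    # committed objects visible to all agents
        self.branches = {}  # branch name -> overlay of branch-local writes

    def create_branch(self, name):
        # O(1): no object data is copied, only an empty overlay is created
        self.branches[name] = {}

    def write(self, branch, key, value):
        # writes land in the overlay, isolated to this branch
        self.branches[branch][key] = value

    def read(self, branch, key):
        # branch-local changes shadow the shared state
        return self.branches[branch].get(key, self.shared.get(key))

    def merge(self, branch):
        # publish one sandbox's results back into shared state
        self.shared.update(self.branches[branch])

repo = Repo()
repo.shared["config"] = "v1"
for i in range(1000):                 # cheap: one empty dict per agent
    repo.create_branch(f"sandbox-{i}")

repo.write("sandbox-0", "result", "A")
repo.write("sandbox-1", "result", "B")
print(repo.read("sandbox-0", "result"))  # A  (isolated from sandbox-1)
print(repo.read("sandbox-1", "config"))  # v1 (shared state still visible)

repo.merge("sandbox-0")
print(repo.shared["result"])             # A  (preserved for later collective use)
```

The point of the sketch is the cost model: branch creation and isolated writes are metadata-sized operations, so concurrency is bounded by the underlying object store rather than by data copying.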