wj | a day ago
Agreed on the sandboxing. I think it's a nut the LLM providers will need to crack for companies to operate AI safely without keeping users in the loop. Otherwise, automated workflows will need to be orchestrated elsewhere (and be more limited in which steps they lean on LLMs to solve) in order to treat the LLM output as just data. Where I landed was a Jupyter-notebook-like concept for a conversation, where a user/API can request that certain prompts (cells) be trusted (elevated permissions for tools and file system access) while you do the bulk of the analysis work in untrusted prompts. (If anyone is interested in the germ of the idea: https://zero2data.substack.com/p/trusted-prompts)
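To make the cell-trust idea concrete, here's a minimal sketch of what the gating might look like. Everything here (`Cell`, `run_cell`, the tool names) is hypothetical, not the post's actual implementation: the point is just that untrusted cells get an empty tool set, so their output can only ever be data.

```python
from dataclasses import dataclass

@dataclass
class Cell:
    prompt: str
    trusted: bool = False  # elevation must be explicitly requested by the user/API

def run_cell(cell: Cell, tools: dict) -> dict:
    # Untrusted cells see no tools; their LLM output is treated as plain data.
    allowed = tools if cell.trusted else {}
    return {"prompt": cell.prompt, "tools": sorted(allowed)}

# Hypothetical tool registry for illustration.
TOOLS = {"read_file": object(), "write_file": object()}

print(run_cell(Cell("summarize the CSV"), TOOLS)["tools"])            # []
print(run_cell(Cell("save report", trusted=True), TOOLS)["tools"])    # ['read_file', 'write_file']
```

The orchestration layer, not the model, enforces the boundary, which is the same property sandboxed notebook kernels rely on.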
brianjking | a day ago | parent
Thanks for the link.