| ▲ | amluto 3 hours ago |
I'm getting tired of these vibe-designed security things. I skimmed the "design". What is sandboxed from what? What is the threat model? What does it protect against, if anything? What does it fail to protect against? How does data get into a sandbox? How does it get out? It kind of sounds like the LLM built a large system that doesn't necessarily achieve any actual value.
| ▲ | itissid an hour ago | parent | next [-] |
I think a few things explain these kinds of projects:

1. There are a lot of agentic data-plane startups popping up for knowledge workers (not really for coders [1], but for CFOs, analysts, etc.), e.g. https://www.redpanda.com/, so people can ask "Hey, give me a breakdown of last year's sales targets by region and type, and compare Q1 2026 to Q1 2025." This can be done entirely on an intranet, and only against certain permissioned data servers, by agents or humans, but as someone pointed out, the intranet can also be a dangerous place. So I guess this is about protecting the DB tables, Jiras, and documentation you are not allowed to see?

2. People who have skills, like the one the OP has with wasm (I guess?), are building random infra projects to enable this.

3. All the coding people are getting weirded out by its security model because it is of course not built for them.

[1] As I have commented elsewhere in this thread, the moment a coder does webfetch + codeexec, it's game over from a security perspective. Prove me wrong on that, please.
| ▲ | amelius 3 hours ago | parent | prev | next [-] |
Yes, I'm also tired of this black-box-for-everything approach. It may work for some cases, and you can cherry-pick some examples, but at the end of the day it is just stupid: you are kicking the can down the road and faking a solution. I'm hoping to see fewer of these posts until there is actual, provable merit.
| ▲ | dawg91 2 hours ago | parent | prev | next [-] |
I mean, it is described somewhat succinctly, no? Potentially untrusted tools are isolated from the rest of the system. There were recently some cases of skills for openclaw being used as vectors for malware; this minimizes the adverse effect of a potentially malicious skill. It also protects your agent from leaking your secrets left and right, because it has no access to them: secrets are only supplied when payloads are leaving the host, i.e. the AI never sees your keys.
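For the curious, a minimal sketch of what that broker/egress-proxy pattern could look like, assuming a placeholder scheme like {{GITHUB_TOKEN}} (all names here are illustrative, not from IronClaw's actual code):

    # Hypothetical sketch: the agent's sandbox only ever sees opaque
    # placeholders; the egress proxy swaps in real secrets at the moment
    # a request leaves the host.
    import os
    import urllib.request

    PLACEHOLDER_MAP = {
        "{{GITHUB_TOKEN}}": os.environ.get("GITHUB_TOKEN", ""),
    }

    def inject_secrets(headers: dict[str, str]) -> dict[str, str]:
        # Replace placeholders in header values just before egress.
        resolved = {}
        for name, value in headers.items():
            for placeholder, secret in PLACEHOLDER_MAP.items():
                value = value.replace(placeholder, secret)
            resolved[name] = value
        return resolved

    def proxied_fetch(url: str, headers: dict[str, str]) -> bytes:
        # The proxy, not the agent, resolves secrets and makes the call,
        # so nothing the model reads ever contains the raw token.
        req = urllib.request.Request(url, headers=inject_secrets(headers))
        with urllib.request.urlopen(req) as resp:
            return resp.read()

    # Agent-side call: the model only ever emits the placeholder string.
    # proxied_fetch("https://api.github.com/user",
    #               {"Authorization": "Bearer {{GITHUB_TOKEN}}"})

The useful property is that substitution happens outside the sandbox boundary: a prompt-injected agent can at worst spend a secret, never read it.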
| ▲ | stcredzero 2 hours ago | parent | prev [-] |
We have a different security model. SEKS: Secure Environment for Key Services.

We built a broker for the keys/secrets. We have a fork of nushell called seksh, which takes stand-ins for the actual auth material but only reifies them inside the AST of the shell. This makes the keys inaccessible to the agent; in the end, the agent won't even have its own Anthropic/OpenAI keys. The broker also acts as a proxy, injecting secrets or even performing asymmetric key signing on behalf of the proxied agent.

My agents are already running on our fork of OpenClaw, doing the work. They deprecated their Doppler env vars, and all their work goes through the broker. All that said, we might just take a few ideas from IronClaw as well.

I put up a Show HN, but no one noticed: https://news.ycombinator.com/item?id=47005607

Website is here: https://seksbot.com/
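For a rough feel of the stand-in idea, here is a toy in Python rather than nushell; seksh's actual mechanism and the @handle syntax below are my paraphrase of the comment, not their implementation:

    import os
    import shlex
    import subprocess

    # Handles the agent is allowed to see; real values live only in the broker.
    STAND_INS = {"@github_token": os.environ.get("GITHUB_TOKEN", "")}

    def reify(token: str) -> str:
        # Substitute stand-ins inside a single parsed token.
        for handle, secret in STAND_INS.items():
            token = token.replace(handle, secret)
        return token

    def run_with_stand_ins(command: str) -> str:
        # Parse first (the "AST" of this toy shell), then reify per-token.
        # The resolved argv goes straight to exec and is never echoed back.
        argv = [reify(tok) for tok in shlex.split(command)]
        return subprocess.run(argv, capture_output=True, text=True).stdout

    # The agent emits only the handle, never the token:
    # run_with_stand_ins(
    #     "curl -s -H 'Authorization: Bearer @github_token' https://api.github.com/user")

Doing the substitution after parsing, on the argument vector, is what keeps the secret out of anything the agent can re-read: history, transcripts, echoed command lines.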