| ▲ | ggrab 3 hours ago |
| IMO the security pitchforking on OpenClaw is just so overdone. People without consideration for the implications will inevitably get burned, as we saw with the Reddit posts "Agentic Coding tool X wiped my hard drive and apologized profusely".
I work at a FAANG, and every time you try something innovative the "policy people" will climb out of their holes and put random roadblocks in your way, not for the sake of actual security (that would be fine, but would require actual engagement) but just to feel important. It reminds me of that. |
|
| ▲ | throwaway_z0om 2 hours ago | parent | next [-] |
| > the "policy people" will climb out of their holes I am one of those people and I work at a FAANG. And while I know it seems annoying, these teams are overwhelmed not only with innovators but with lawyers asking so many variations of the same question that it's pretty hard to get back to the innovators with a thumbs up or guidance. Also, there is a real threat here. The "wiped my hard drive" story is annoying, but it's a toy problem. An agent with database access exfiltrating customer PII to a model endpoint is a horrific outcome for impacted customers and everyone in the blast radius. That's the kind of thing keeping us up at night, not blocking people for fun. I'm actively trying to find a way we can unblock innovators to move quickly at scale, but it's a bit of a "slow down to go fast" moment. The goal isn't roadblocks, it's guardrails that let you move without the policy team being a bottleneck on every request. |
| |
| ▲ | madeofpalk an hour ago | parent | next [-] | | I know it’s what the security folk think about, but exfiltrating to a model endpoint is the least of my concerns. I work on commercial OSS. My fear is that it’s exfiltrated to public issues or code, or that it helpfully commits secrets or other BS like that. And that’s even ignoring prompt injection attacks from the public. | | |
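One cheap backstop for the "helpfully commits secrets" failure mode, as a minimal sketch of my own rather than a vetted tool (the regexes are illustrative only; dedicated scanners like gitleaks or trufflehog do this far more thoroughly): a pre-commit hook that greps staged changes for secret-shaped strings.

```python
# Minimal pre-commit sketch: scan the staged diff for secret-looking
# additions and refuse the commit if any are found.
import re
import subprocess
import sys

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID shape
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][^'\"]{16,}"),
]

diff = subprocess.run(
    ["git", "diff", "--cached", "-U0"],
    capture_output=True, text=True, check=True,
).stdout

hits = [line for line in diff.splitlines()
        if line.startswith("+") and any(p.search(line) for p in SECRET_PATTERNS)]

if hits:
    print("Possible secrets in staged changes; refusing to commit:")
    for line in hits:
        print(" ", line[:120])
    sys.exit(1)
```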
| ▲ | throwaway_z0om an hour ago | parent [-] | | In the end, if the data goes somewhere public it'll be consumed, and in today's threat model another GenAI tool is going to exploit it faster than any human will. |
| |
| ▲ | chrisjj an hour ago | parent | prev | next [-] | | > I'm actively trying to find a way we can unblock innovators to move quickly at scale So did "Move fast and break things" not work out? /i | |
| ▲ | mikkupikku 2 hours ago | parent | prev | next [-] | | I am sure there are many good corporate security policy people doing important work. But then there are people like this; I get handed an application developed by my company for use by partner companies. It's a Java application, shipped as a jar, nothing special. It gets signed by our company, but anybody with the wherewithal can pull the jar apart and mod the application however they wish. One of the partner companies has already done so, extensively, and come back to show us their work. Management at my company is impressed and asks me to add official plugin support to the application. Can you guess where this is going? I add the plugin support, and the application will now load custom jars that implement the plugin interface I had discussed with devs from the company that did the modding. They think it's great, management thinks it's great, everything works and everybody is happy. At the last minute some security policy wonk throws on the brakes. Will this load any plugin jar? Yes. Not good! It needs to only load plugins approved by the company. Why? Because! Never mind that the whole damn application can be unofficially modded with ease. I ask him how he wants that done, he says only load plugins signed by the company. Retarded, but fine. I do so. He approves it, then the partner company engineer who did the modding chimes in that he's just going to mod the signature check out, because he doesn't want to have to deal with this shit. Security asshat from my company has a meltdown, and long story short the entire plugin feature, which was already complete, gets scrapped and the partner company just keeps modding the application as before. Months of my life down the drain. Thanks guys, great job protecting... something. | | |
| ▲ | embedding-shape 2 hours ago | parent | next [-] | | So why were these people not involved in the first place? Seems like a huge management/executive failure that the right people, who need to sign off on the design, weren't involved until after developers implemented the feature. You seem to blame the person who is trying to save the company from security issues, rather than placing the blame on your boss, who made you do work that would never have gotten approved in the first place if they had just checked with the right person first? | |
| ▲ | mikkupikku 2 hours ago | parent | next [-] | | Because they don't respond to their emails until months after they were nominally brought into the loop. They sit back jerking their dicks all day, voicing no complaints and giving no feedback until the thing is actually done. Yes, management was ultimately at fault. They're at fault for not tard wrangling the security guys into doing their jobs up front. They're also at fault for not tard wrangling the security guys when they object to an inherently modifiable application being modified. | | |
| ▲ | embedding-shape 2 hours ago | parent | next [-] | | Again, sounds like a management failure. Why isn't your boss talking with their boss, asking what the fuck is going on, and putting the development on hold until it's been agreed on? Again, your boss is the one wasting your time; they are responsible for making sure that what you spend your time on is actually useful and valuable, which they clearly messed up in this case. | |
| ▲ | mikkupikku 2 hours ago | parent [-] | | As I already said, management is ultimately the root of the blame. But what you don't seem to get is that at least some of their blame comes from hiring dumbasses into that security review role. Why did the security team initially give the okay to checking signatures on plugin jars? They're supposed to be security experts, what kind of security expert doesn't know that a signature check like that could be modded out? I knew it when I implemented it, and the modder at the partner corp obviously knew it but lacked the tact to stay quiet about it. Management didn't realize it, but they aren't technical. So why didn't security realize it until it was brought to their attention? Because they were retarded. By the way, this application is still publicly downloadable, still easily modded, and hasn't been updated in almost 10 years now. Security review is fine with that, apparently. They only get bent out of shape when somebody actually tries to make something more useful, not when old nominally vulnerable software is left to rot in public. They're not protecting the company from a damn thing. | |
| ▲ | presentation 16 minutes ago | parent | next [-] | | Well, if it requires tampering with the software to do the insecure thing, then presumably your company has a contract in place saying that if they get hacked it’s on them. That doesn’t strike me as just being retarded security theater. |
| ▲ | cindyllm 6 minutes ago | parent | prev [-] | | [dead] |
|
| |
| ▲ | moron4hire 39 minutes ago | parent | prev [-] | | Yeah, I've had them complain to the President of the company that I didn't involve them sooner, with the pres having been in the room when I made the first request 12 months ago, the second 9 months ago, the third 6 months ago, etc. They insist we can't let client data [0] "into the cloud" despite the fact that the client's data is already in "the cloud" and all I want to do is stick it back into the same "cloud", just a different tenant. Despite the fact that the vendor has certified their environment to be suitable for all but the most absolutely sensitive data (for which, if you really insist, you can call them for pricing), no, we can't accept that and have to do our own audit. How long is that going to take? "2 years and $2 million". There is no fucking way. No fucking way that is the real path. There is no way our competitors did that. There is no way any of the startups we're seeing in this market did that. Or! Or! If it's true, why the fuck didn't you start it back two years ago when we insisted this was necessary the first time? Hell, I'd be happy if you had started 18 months ago, or a year ago. Anything! You were told several times, by the president of our company, to make this happen, and it still hasn't happened?!?! They say we can't just trust the service provider for a certain service X, despite the fact that literally all of our infrastructure is provided by the same service provider, so if they were fundamentally untrustworthy then we are already completely fucked. I have a project to build a new analytics platform thing. Trying to evaluate some existing solutions. Oh, none of them are approved to be installed on our machines. How do we get that approval? You can't, open source software is fundamentally untrustworthy. Which must be why it's at the core of literally every piece of software we use, right? Oh, but I can do it in our new cloud environment! The one that was supposedly provided by an untrustworthy vendor! I have a bought-and-paid-for laptop with fairly decent specs, and they seriously expect me and my team to remote desktop into a VM to do our work, paying exorbitant monthly fees for hardware equivalent to what we will now have sitting basically idle on our desks! And yes, it will be "my" money. I have a project budget and I didn't expect to have to increase it 80% just because "security reasons". Oh yeah, I have to ask them to install the software and "burn it into the VM image" for me. What the fuck does that even mean!? You told me 6 months ago this system was going to be self-service! We are entering our third year of new leadership in our IT department, yet this new leadership never guts the ranks of the middle managers who were the sticks in the mud. Two years ago we hired a new CIO. Last year we got a deputy CIO to assist him. This year, it's yet another new CIO, but the previous two guys aren't gone; they are staying in exactly their current duties, their titles have just changed and they report to the new guy. What. The. Fuck. [0] To be clear, this is data the client has contracted us to do analysis on. It also has nothing to do with people's private data. It's very similar to corporate operations data. It's 100% owned by the client; they've asked us to do a job with it and we can't do that job. |
| |
| ▲ | jppittma an hour ago | parent | prev [-] | | The bikeshedding is coming from inside the room. The point is that the feature didn't cause any regression in capability. And who tf wants a plugin system that only supports first-party plugins? | |
| ▲ | Kye 13 minutes ago | parent [-] | | Someone with legal responsibility for the data those plugins touch. |
|
| |
| ▲ | chrisjj an hour ago | parent | prev [-] | | > he's just going to mod the signature check out, because he doesn't want to have to deal with this shit Fine. The compliance catastrophe will be his company's, not yours. |
| |
| ▲ | Myrmornis 2 hours ago | parent | prev [-] | | The main problem with many IT and security people at many tech companies is that they communicate in a way that betrays their belief that they are superior to their colleagues. "unlock innovators" is a very mild example; perhaps you shouldn't be a jailor in your metaphors? | | |
| ▲ | Goofy_Coyote an hour ago | parent | next [-] | | A bit crude, maybe a bit hurt and angry, but has some truth in it. A few things help a lot (for BOTH sides - which is weird to say, as the two sides should be US vs Threat Actors, but anyway): 1. Detach your identity from your ideas or work. You're not your work. An idea is just a passerby thought that you grabbed out of thin air; you can let it go the same way you grabbed it. 2. Always look for opportunities to create a dialogue. Learn from anyone and anything. Elevate everyone around you. 3. Instead of constantly looking for reasons why you're right, go with "why am I wrong?" It breaks tunnel vision faster than anything else. Asking questions isn't an attack. Criticizing a design or implementation isn't criticizing you. Thank you, One of the "security people". | |
| ▲ | criley2 2 hours ago | parent | prev [-] | | I find it interesting that you latched on their jailor metaphor, but had nothing to say about their core goal: protecting my privacy. I'm okay with the people in charge of building on top of my private information being jailed by very strict, mean sounding, actually-higher-than-you people whose only goal is protecting my information. Quite frankly, if you changed any word of that, they'd probably be impotent and my data would be toast. |
|
|
|
| ▲ | latexr 2 hours ago | parent | prev | next [-] |
> People without consideration for the implications will inevitably get burned They will also burn other people, which is a big problem you can’t simply ignore. https://theshamblog.com/an-ai-agent-published-a-hit-piece-on... But even if they only burned themselves, you’re talking as if that isn’t a problem. We shouldn’t be handing explosives to random people on the street because “they’ll only blow off their own hands”. |
|
| ▲ | H8crilA 3 hours ago | parent | prev | next [-] |
This may be a good place to exchange some security ideas. I've configured my OpenClaw in a Proxmox VM, firewalled it off of my home network so that it can only talk to the open Internet, and don't store any credentials that aren't necessary. Pretty much only the needed API keys and Signal linked device credentials. The models that can run locally do run locally, for example Whisper for voice messages or embedding models for semantic search. |
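A quick way to verify that kind of isolation from inside the VM, as a rough sketch (the probe addresses are assumptions; substitute your own router and subnet):

```python
# Probe a few destinations from inside the VM: private-range neighbors
# should be blocked by the firewall, the open internet should connect.
import socket

PROBES = [
    ("192.168.1.1", 80),   # assumed home router: should be blocked
    ("10.0.0.1", 80),      # another private range: should be blocked
    ("1.1.1.1", 443),      # open internet: should connect
]

for host, port in PROBES:
    try:
        socket.create_connection((host, port), timeout=3).close()
        status = "reachable"
    except OSError:
        status = "blocked/unreachable"
    print(f"{host}:{port} -> {status}")
```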
| |
| ▲ | stavros 33 minutes ago | parent | next [-] | | I was worried about the security risk of running it on my infrastructure, so I made my own: https://github.com/skorokithakis/stavrobot At least I can run this whenever, and it's all entirely sandboxed, with an architecture that still means I get the features. I even have some security tradeoffs like "you can ask the bot to configure plugin secrets for convenience, or you can do it yourself so it can never see them". You're not going to be able to prevent the bot from exfiltrating stuff, but at least you can make sure it can't mess with its permissions and give itself more privileges. | |
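One way to get the "it can never see them" property stavros describes, sketched under my own assumptions (this is not stavrobot's actual code; the path and names are hypothetical): a separate broker process owns the secrets file and injects values straight into the plugin subprocess, so the agent process never holds them in memory.

```python
# Hypothetical secrets broker: the agent asks it to run a plugin, and only
# the plugin child process receives the secret values in its environment.
import json
import os
import subprocess

SECRETS_FILE = "/etc/bot/plugin-secrets.json"  # readable by the broker user only

def run_plugin(plugin_cmd: list, plugin_name: str) -> int:
    with open(SECRETS_FILE) as f:
        # assumes a flat {plugin: {ENV_VAR: value}} layout of string values
        secrets = json.load(f).get(plugin_name, {})
    env = {**os.environ, **secrets}  # injected into the child only
    return subprocess.run(plugin_cmd, env=env).returncode

# e.g. run_plugin(["python3", "plugins/weather.py"], "weather")
```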
| ▲ | embedding-shape 3 hours ago | parent | prev | next [-] | | I think the security worries are less about the particular sandbox or where it runs, and more about the fact that if you give it access to your Telegram account, it can exfiltrate data and cause other issues. But if you never hand it access to anything, obviously it won't be able to do any damage, unless you instruct it to. | |
| ▲ | kzahel 2 hours ago | parent [-] | | You wouldn't typically give it access to your own Telegram account. You use the Telegram bot API to make a bot, and the claw gateway only listens to messages from your own account. | |
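A stripped-down sketch of that gateway pattern (my own illustration, not OpenClaw's actual code; BOT_TOKEN and ALLOWED_USER_ID are placeholders you'd set yourself): poll the Bot API and drop every update whose sender isn't the owner.

```python
import json
import os
import urllib.parse
import urllib.request

BOT_TOKEN = os.environ["BOT_TOKEN"]                   # from @BotFather
ALLOWED_USER_ID = int(os.environ["ALLOWED_USER_ID"])  # your numeric Telegram ID
API = f"https://api.telegram.org/bot{BOT_TOKEN}"

def call(method, **params):
    # Telegram's Bot API accepts form-encoded POSTs for all methods
    data = urllib.parse.urlencode(params).encode()
    with urllib.request.urlopen(f"{API}/{method}", data=data) as resp:
        return json.load(resp)["result"]

offset = 0
while True:
    for update in call("getUpdates", offset=offset, timeout=30):
        offset = update["update_id"] + 1
        msg = update.get("message") or {}
        if msg.get("from", {}).get("id") != ALLOWED_USER_ID:
            continue  # ignore anyone who isn't the owner
        # hand msg["text"] to the agent here; a reply (or any later
        # proactive ping) is just another sendMessage to this chat_id
        call("sendMessage", chat_id=msg["chat"]["id"],
             text=f"got it: {msg.get('text', '')}")
```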
| ▲ | embedding-shape 2 hours ago | parent [-] | | That's a very different approach, and a bot user is very different from a regular Telegram account; it won't be nearly as "useful", at least in the way I thought OpenClaw was supposed to work. For example, a bot account cannot initiate conversations, so everyone would need to message the bot first. Doesn't that defeat the entire purpose of giving OpenClaw access to it, then? I thought they were supposed to be your assistant and do outbound stuff too, not just react to incoming events? | |
| ▲ | arcwhite an hour ago | parent [-] | | Once a conversation with a user is established, telegram bots can bleep away at you. Mine pings me whenever it puts a PR up, and when it's done responding to code reviews etc. | | |
| ▲ | embedding-shape an hour ago | parent [-] | | Right, but again, that's not actually outbound at all; what you're describing is only inbound. Again, I thought the whole point was that the agent could start acting autonomously to some degree. Not allowing outbound kind of defeats the entire purpose, doesn't it? |
|
|
|
| |
| ▲ | CuriouslyC 30 minutes ago | parent | prev | next [-] | | If you're really into optimizing: you don't need to store any credentials at all (aside from your provider key, unless you want to mod pi). Your claw also shouldn't be able to talk to the open internet; it should be on a VPN with a filtering proxy and a webhook relay. | |
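The webhook-relay half of that, as a bare-bones sketch of how I read the idea (not CuriouslyC's setup; the claw address, port, and the GitHub-style event header are assumptions): a tiny public-facing listener forwards only allowlisted events over the VPN, so the claw itself accepts no inbound traffic from the internet.

```python
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

CLAW_URL = "http://10.8.0.2:8080/hook"      # claw's VPN-internal address (placeholder)
ALLOWED_EVENTS = {"push", "issue_comment"}  # everything else is dropped

class Relay(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        event = self.headers.get("X-GitHub-Event", "")
        if event in ALLOWED_EVENTS:
            req = urllib.request.Request(
                CLAW_URL, data=body,
                headers={"Content-Type": "application/json"})
            urllib.request.urlopen(req, timeout=5)
        self.send_response(204)  # never reveal to the sender what was relayed
        self.end_headers()

HTTPServer(("0.0.0.0", 8443), Relay).serve_forever()
```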
| ▲ | dakolli 3 hours ago | parent | prev [-] | | Genuinely curious, what are you doing with OpenClaw that genuinely improves your life? The security concerns are valid; I can get anyone running one of these agents on their email inbox to dump a bunch of privileged information with a single email... |
|
|
| ▲ | pvtmert 2 hours ago | parent | prev | next [-] |
I am also ex-FAANG (recently departed). While I partially agree that the "policy people" pop up fairly often, my experience is more on the inadequate-checks side. Though with the recent layoffs and such, security at Amazon was getting better. Even the best practices for IAM policies that were the norm in 2018 are only getting enforced by 2025. Since I had a background in infosec, it always confused me how normal it was to give/grant overly permissive policies to basically anything. Even opening ports to the whole world (0.0.0.0/0) only became a significant issue in 2024; and still, you can easily get away with it until the scanner finds your host/policy/configuration... Although nearly all AWS accounts are managed by Conduit (the internal AWS account creation and management service), the "magic-team" had many "account-containers" to join all these child/service accounts into a parent "organization-account". By the time I left, the "organization-account" had no restrictive policies set; it was up to the developers to secure their resources (like S3 buckets and their policies). So I don't think the policy folks are wrong overall. In the best-case scenario they would not need to exist in the first place, as enforcement would be automated to ensure security. But that always has an exception somewhere in someone's workflow. |
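The kind of check that's only now getting enforced, sketched with boto3 (assumes AWS credentials are already configured; the region is a placeholder): flag any security group with an inbound rule open to 0.0.0.0/0.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg["IpPermissions"]:
        # IpRanges entries look like {"CidrIp": "0.0.0.0/0", ...}
        if any(r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])):
            ports = f"{rule.get('FromPort', 'all')}-{rule.get('ToPort', 'all')}"
            print(f"{sg['GroupId']} ({sg['GroupName']}): ports {ports} open to the world")
```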
| |
| ▲ | throwaway_z0om an hour ago | parent [-] | | Defense in depth is important: while there is a front door of approvals, you need something checking the back door to see if someone left the keys under the mat. |
|
|
| ▲ | whyoh 2 hours ago | parent | prev | next [-] |
> IMO the security pitchforking on OpenClaw is just so overdone. Isn't the whole selling point of OpenClaw that you give it valuable (personal) data to work on, which would typically also be processed by 3rd-party LLMs? The security and privacy implications are massive. The only way to use it "safely" is by not giving it much of value. |
| |
| ▲ | muyuu 6 minutes ago | parent [-] | | There's the selling point of using it as a relatively untrustworthy agent that has access to all the resources on a particular computer and limited access to online tools to its name. Essentially like Claude Code or OpenCode but with its own computer, which means it doesn't constantly hit roadblocks when attempting to use legacy interfaces meant for humans. Which is... most things to do with interfaces, of course. |
|
|
| ▲ | beaker52 2 hours ago | parent | prev | next [-] |
The difference is that _you_ wiped your own hard drive. Even if prompt injection arrives via a scraped webpage, you still pressed the button. All these claws throw caution to the wind in enabling the LLM to be triggered by text coming from external sources, which is another step in recklessness. |
|
| ▲ | sa-code 3 hours ago | parent | prev | next [-] |
| > every time you try something innovative the "policy people" will climb out of their holes and put random roadblocks in your way This is so relatable. I remember trying to set up an LLM gateway back in 2023. There were at least 3 different teams that blocked our rollout for months until they worked through their backlog. "We're blocking you, but you’ll have to chase and nag us for us to even consider unblocking you" At the end of all that waiting, nothing changed. Each of those teams wrote a document saying they had a look and were presumably just happy to be involved somehow? |
| |
| ▲ | miki123211 2 hours ago | parent | next [-] | | I think you should read "The Phoenix Project." One of the lessons in that book is that the main reason things in IT are slow isn't that tickets take a long time to complete, but that they spend a long time waiting in a queue. The busier a resource is, the longer the queue gets, eventually leading to ~2% of the ticket's time being spent with somebody doing actual work on it. The rest is just the ticket waiting for somebody to get through the backlog, do their part, and then push the rest into somebody else's backlog, which is just as long. I'm surprised FAANGs don't have that part figured out yet. | |
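The curve behind that lesson, sketched numerically (this is the standard single-queue rule of thumb the book also uses: wait time scales with busy/idle):

```python
# Relative wait time explodes as utilization approaches 100%:
# a resource that is 95% busy queues work ~19x longer than one at 50%.
for utilization in (0.50, 0.80, 0.90, 0.95, 0.99):
    relative_wait = utilization / (1 - utilization)
    print(f"{utilization:.0%} busy -> relative wait {relative_wait:.1f}")
```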
| ▲ | embedding-shape 3 hours ago | parent | prev | next [-] | | To be fair, the alternative is them having to maintain and continuously check N services that various devs deployed because it felt appropriate in the moment, and then there is a 50/50 chance the service will just sit there unused and introduce new vulnerability vectors. I do know the feeling you're talking about though, and probably a better balance is somewhere in the middle. Just wanted to add that the solution probably isn't "Let devs deploy their own services without review", just as the solution probably also isn't "Stop devs for 6 months to deploy services they need". | | |
| ▲ | regularfry an hour ago | parent [-] | | The trick is to make the class of pre-approved service types as wide as possible, and make the tools to build them correctly the default. That minimises the number of things that need review in the first place. | | |
| ▲ | throwaway_z0om an hour ago | parent [-] | | Yes, providing paved paths that let people build quickly without approvals is really important, while also having inspection to find things that are potential issues. |
|
| |
| ▲ | pvtmert 2 hours ago | parent | prev [-] | | From my experience, it depends on how you frame your "service" to the reviewers. Obviously 2023 was the very early stage of LLMs, when the security aspects were quite murky at best. They (the reviewers) probably did not have any runbook or review criteria at that time. If you had advertised this as a "regular service which happens to use an LLM for some specific functions" where the "output is rigorously validated and logged", I am pretty sure you would have gotten a green light. This is because their concern is data privacy and security. Not because they care or the company actually cares, but because fines for non-compliance are quite high and have greater visibility if things go wrong. |
|
|
| ▲ | jihadjihad 14 minutes ago | parent | prev | next [-] |
| No laws when you’re running Claws. |
|
| ▲ | weinzierl 2 hours ago | parent | prev | next [-] |
| I think there are two different things at work
here that deserve to be separated: 1. The compliance box tickers and bean counters are in the way of innovation and it hurts companies. 2. Claws derive their usefulness mainly from having broad permissions, not only to your local system but also to your accounts via your real identity [1]. Carefulness is very much warranted. [1] People correct me if I'm misguided, but that is how I see it. Run the bot in a sandbox with no data and a bunch of fake accounts and you'll see how useful that is. |
| |
| ▲ | enderforth an hour ago | parent [-] | | It's been my experience that there are 2 types of security people.
1. Those who got into security because it was one of the only places that let them work with every part of the stack, with exposure to dozens of different domains on the regular, and who find the idea of spending hours understanding and then figuring out ways around whitelist validations appealing. 2. Those who don't have much technical chops but can get by with a surface-level understanding of several areas and then perform "security shamanism" to intimidate others and pull out lots of jargon. They sound authoritative because information security is a fairly esoteric concept, and because you can't argue against security, just as you can't argue against health and safety; the only response is "so you don't care about security?!" It is my experience that the first are likely to work with you to help figure out how to get your application past the hurdles and challenges you face, viewing it as an exciting problem. The second view their job as "protecting the organization", not delivering value. They love playing dress-up in security theater, and the depth of their understanding doesn't even pose a drowning risk to infants, which they make up for with esoterica and jargon. They are also, unfortunately, the ones cooking up "standards" and "security policies", because it allows them to feel like they are doing real work without the burden of actually knowing what they are doing, while talented people are actually doing something. Here's a good litmus test to distinguish them: ask their opinion of the CISSP. If it's positive, they probably don't know what the heck they are talking about. Source: a long career operating in multiple domains, quite a few of which have been in security, having interacted with both types (and hoping I fall into the first camp rather than the latter) | |
| ▲ | Goofy_Coyote an hour ago | parent [-] | | > ask their opinion on the CISSP This made me lol. It's a good test, however, I wouldn't ask it in a public setting lol, you have to ask them in a more private chat - at least for me, I'm not gonna talk bad about a massive org (ISC2) knowing that tons of managers and execs swear by them, but if you ask for my personal opinion in a more relaxed setting (and I do trust you to some extent), then you'll get a more nuanced and different answer. Same test works for CEH. If they felt insulted and angry, they get an A+ (joking...?). |
|
|
|
| ▲ | throwaway27448 an hour ago | parent | prev | next [-] |
| > every time you try something innovative the "policy people" will climb out of their holes and put random roadblocks in your way, not for the sake of actual security (that would be fine but would require actual engagement) but just to feel important The only innovation I want to see coming out of this powerblock is how to dismantle it. Their potential to benefit humanity sailed many, many years ago. |
|
| ▲ | franze 2 hours ago | parent | prev | next [-] |
In my time at a money startup (debit cards), I pushed the legal and security people to change their behaviour from "how can we prevent this" to "how can we enable this, while still staying within the legal and security framework". It worked well, after months of hard work and day-long meetings. Then the heads changed and we were back to square one. But for a moment it was glorious what was possible. |
| |
| ▲ | fragmede an hour ago | parent [-] | | It's a cultural thing. I loved working at Google because the ethos was "you can do that, and i'll even help you, but have you considered $reason why your idea is stupid/isn't going to work?" |
|
|
| ▲ | 0x3f 3 hours ago | parent | prev | next [-] |
| Work expands to fill the allocated resources in literally everything. This same effect can be seen in software engineering complexity more generally, but also government regulators, etc. No department ever downsizes its own influence or budget. |
|
| ▲ | imiric 2 hours ago | parent | prev | next [-] |
> I work at a FAANG and every time you try something innovative the "policy people" will climb out of their holes and put random roadblocks in your way What a surprise that someone working in Big Tech would find "pesky" policies getting in their way. These companies have obviously done so much good for the world; imagine what they could do without any guardrails! |
|
| ▲ | Betelbuddy an hour ago | parent | prev | next [-] |
| "I have given root access to my machine to the whole Internet, but these security peasants come with the pitchforks for me..." |
|
| ▲ | aaronrobinson 3 hours ago | parent | prev | next [-] |
| It’s not to feel important, it’s to make others feel they’re important. This is the definition of corporate. |
|
| ▲ | an hour ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | huflungdung 3 hours ago | parent | prev [-] |
| [dead] |