dghlsakjg, 4 days ago:

What about this sounds sensible? I have already documented, in writing, in multiple places, that the automated software raised a false alarm, and I have provided a piece of code demonstrating that the alert was wrong. They are asking me to document it in an additional place that I don't have access to, presumably for perceived security reasons. We already accept that my reasoning about the false alarm is valid; they have just buried a simple resolution beneath a completely stupid process.

You are going to get false alarms. If it takes months to deal with a single one, the alarm system is going to get ignored or bypassed. I have a variety of conflicting demands on my attention.

At the same time, when we came under a coordinated DDoS attack from what was likely a political actor, security didn't notice the millions of requests coming from a country where we have never had a single customer. Our dev team brought it to their attention, whereupon they, again, slowed everything down by insisting on taking part in the mitigation, even though they couldn't figure out how to give themselves permission to access basic things like our logging system. We had to devote one of our on-call engineers to walking them through submitting access tickets, a process presumably put in place by a security team.

I know what good security looks like, and I respect it. Many people have to deal with bad security on a regular basis, and they should not be shamed for correctly pointing out that it is terrible.
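(Purely illustrative, not from the thread: the commenter never says what the alert actually was. A minimal Python sketch of the kind of false positive, and of the demonstration code described above, assuming a scanner that flags any string-built SQL query even when the interpolated value comes from a hard-coded allowlist:)

    # Hypothetical false positive: a static analyzer flags the f-string
    # query below as SQL injection. The "dynamic" value can only ever be
    # one of two hard-coded table names, so no untrusted input reaches
    # the query. Attaching a snippet like this to an exemption request
    # is the sort of demonstration the commenter describes.
    import sqlite3

    ALLOWED_TABLES = {"orders", "invoices"}  # fixed at build time, never user input

    def row_count(conn: sqlite3.Connection, table: str) -> int:
        if table not in ALLOWED_TABLES:
            raise ValueError(f"unknown table: {table}")
        # The scanner flags this line; it is safe because `table` was
        # validated against the allowlist above.
        return conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]

    # Example:
    #   conn = sqlite3.connect(":memory:")
    #   conn.execute("CREATE TABLE orders (id INTEGER)")
    #   print(row_count(conn, "orders"))  # 0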
MichaelZuo, 3 days ago:

If you're sufficiently confident there can be no negative consequences whatsoever, then why not just email that person's superiors, cc your own superiors, and guarantee in writing that you'll take responsibility? The ops person obviously can't do that on your behalf, at least not in any kind of organizational setup I've heard of.
dghlsakjg, 3 days ago:

As the developer in charge of looking at security alerts for this code base, I already am responsible, which is why I submitted the exemption request in the first place. As it is, this alert has been active for months and no one from security has asked about the alert itself, only about my exemption request. So clearly the actual fix (disregarding the alert or changing the code) is less important than the process and the alert itself.

So the solution to an illogical, Kafkaesque security process is to bypass the process entirely via authority? You are making my argument for me. This is exactly why people don't take security processes seriously, and why they fight efforts to add more of them.
MichaelZuo, 3 days ago:

So you agree with me that the ops person is behaving sensibly given real-life constraints?

Edit: I didn't comment on any of those other points, so they seem irrelevant to the one question I asked.
dghlsakjg, 3 days ago:

Absolutely not. Ops are the ones who imposed those constraints. You can't impose absurd constraints and then claim you are acting reasonably by abiding by your own absurd constraints.
MichaelZuo, 3 days ago:

How do you even know it was a single individual's decision, let alone who exactly imposed the constraints?
dghlsakjg, 3 days ago:

I don't, and I never said that. I'm not dumping on the ops person, but on the ops and security teams' processes.

If you as a developer showed up to a new workplace where the process for every code change was to print out a diff and mail a hard copy to the code review committee, you would be totally justified in calling out the process as needlessly elaborate. Anyone could rightly say that the process increases friction while not actually serving the purpose of having code reviewed by peers. You as a developer would have a responsibility to point out that the process serves no one and should be changed. That's what good security and ops people do, too.

In the real-world case I am talking about, we can easily foresee that the end result is that the exemption will be allowed, and there will be no security impact. The process contributes nothing to that outcome, and every person involved knows it.

My original post was about how people dislike security when it is actually security theater, and that is what is going on here. We already know how this issue ends and how to get there (document the false alarm and click the ignore button), and we have already done the important part: documenting the issue for posterity. The process could instead be: you are a highly paid developer who takes security training and has access to highly sensitive systems, so we trust your judgment; when you and your peers agree that this isn't an issue, write that down in the correct place, click the ignore button, and move on with your work.

All the faff of contacting different fiefdoms and submitting tickets contributes nothing to the core issue or its resolution, and it certainly doesn't enhance security. If anything, security theater like this leads to worse security, since people will look for shortcuts or for ways of simply not handling issues at all.