thewebguyd 4 days ago

> My work laptop, depending on the period of my life, you could have had access to stuff you wouldn't believe either.

What gets me is everyone acknowledges this, yet HN is full of comments ripping on IT teams for the restrictions & EDR put in place on dev laptops.

We on the ops side have known these risks for years, and that knowledge is what drives organizational security policies and endpoint configuration.

Security is hard, and it is very inconvenient, but it's increasingly necessary.

dghlsakjg 4 days ago | parent | next [-]

I think people rip on EDR and security when 1. they haven't had it explained why it does what it does, or 2. it is process for process's sake.

To wit: I have an open ticket right now from an automated code review tool that flagged a potential vulnerability. Two other seniors and I have confirmed that it is a false alarm, so I asked for permission to ignore it by clicking the ignore button in a separate security ticket. They asked for more details to be added to the ticket, except I don't have permission to view that ticket. I need to submit another ticket to get permission to view the original ticket, just to confirm that no fewer than three senior developers have validated this as a false alarm, which is information that is already on another ticket. This non-issue has been going on for months at this point. The ops person who asked me to provide more info won't accept a written explanation via Teams; it has to be added to the ticket.

Stakeholders will quickly treat your entire security system like a waste of time and resources when they can plainly see that many parts of it are a waste of time and resources.

The objection isn’t against security. It is against security theater.

MichaelZuo 4 days ago | parent [-]

This sounds sensible for the “ops person”?

It might not be sensible for the organization as a whole, but there's no way to determine that conclusively without going over thousands of different possibilities, edge cases, etc.

dghlsakjg 4 days ago | parent [-]

What about this sounds sensible?

I have already documented, in writing, in multiple places, that the automated software has raised a false alarm, and I have provided a piece of code demonstrating that the alert was wrong. They are asking me to document it in an additional place that I don't have access to, presumably for perceived security reasons? We already accept that my reasoning around the false alarm is valid; they have just buried a simple resolution beneath completely stupid process. You are going to get false alarms, and if it takes months to deal with a single one, the alarm system is going to get ignored or bypassed. I have a variety of conflicting demands on my attention.

At the same time, when we came under a coordinated DDoS attack from what was likely a political actor, security didn't notice the millions of requests coming from a country where we have never had a single customer. Our dev team brought it to their attention, and they, again, slowed everything down by insisting on taking part in the mitigation, even though they couldn't figure out how to give themselves permission to access basic things like our logging system. We had to devote one of our on-calls to walking them through submitting access tickets, a process presumably put in place by a security team.

I know what good security looks like, and I respect it. Many people have to deal with bad security on a regular basis, and they should not be shamed for correctly pointing out that it is terrible.

MichaelZuo 3 days ago | parent [-]

If you're sufficiently confident there can be no negative consequences whatsoever… then just email that person's superiors and cc your own superiors to guarantee in writing that you'll take responsibility?

The ops person obviously can’t do that on your behalf, at least not in any kind of organizational setup I’ve heard of.

dghlsakjg 3 days ago | parent [-]

As the developer in charge of looking at security alerts for this code base, I already am responsible, which is why I submitted the exemption request in the first place. As it is, this alert has been active for months and no one from security has asked about the alert, just my exemption request, so clearly the actual fix (disregarding it or changing the code) matters less than the process and the alert itself.

So the solution to an illogical, Kafkaesque security process is to bypass the process entirely via authority?

You are making my argument for me.

This is exactly why people don’t take security processes seriously, and fight efforts to add more security processes.

MichaelZuo 3 days ago | parent [-]

So you agree with me the ops person is behaving sensibly given real life constraints?

Edit: I didn't comment on all those other points, so they seem irrelevant to the one question I asked.

dghlsakjg 3 days ago | parent [-]

Absolutely not.

Ops are the ones who imposed those constraints. You can't impose absurd constraints and then claim you are acting reasonably by abiding by your own absurd constraints.

MichaelZuo 3 days ago | parent [-]

How do you even know it was a single individual’s decision, let alone who exactly imposed the constraints?

dghlsakjg 3 days ago | parent [-]

I don't, and I never said that.

I'm not dumping on the ops person, but the ops and security team's processes. If you as a developer showed up to a new workplace and the process was that for every code change you had to print out a diff and mail a hard copy to the committee for code reviews, you would be totally justified in calling out the process as needlessly elaborate. Anyone could rightly say that your processes are increasing friction while not actually serving the purpose of having code reviewed by peers. You as a developer have a responsibility to point out that the current process serves no one and should be changed. That's what good security and ops people do too.

In the real-world case I am talking about, we can easily foresee that the end result is that the exemption will be allowed, and there will be no security impact. The process contributes nothing to that outcome, and every person involved knows it.

My original post was about how people dislike security when it is actually security theater. That is what is going on here. We already know how this issue ends and how to get there (document the false alarm and click the ignore button), and we have already done the important part: documenting the issue for posterity.

The process could be: you are a highly paid developer who takes security training and has access to highly sensitive systems, so we trust your judgment; when you and your peers agree that this isn't an issue, write that down in the correct place, click the ignore button, and move on with your work.

All of the faff of contacting different fiefdoms and submitting tickets contributes nothing to the core issue or its resolution, and it certainly doesn't enhance security. If anything, security theater like this leads to worse security, since people will look for shortcuts or ways of simply not handling issues.

the8472 4 days ago | parent | prev | next [-]

At least at $employer, a good portion of those systems is intended to stop attacks on management and the average office worker. The process is not geared towards securing dev (arbitrary code execution) or ops (infra creds) roles. They're not even handing out hardware security keys for admin accounts; I use my own, and some other devs just use TOTP authenticator apps on their private phones.

All their EDR crud runs on Windows. As a dev I'm allowed to run WSL, but the tools do not reach inside WSL, so if that got compromised they would be none the wiser.

There is some instrumentation for linux servers and cloud machines, but that too is full of blind spots.

And as a sibling comment says, a lot of the policies are executed without anyone being able to explain their purpose, grant "functionally equivalent security" exceptions, or even say whether they make sense in a given context. It feels like dealing with mindless automatons, even though humans are involved. For example, a thing that happened a while ago: we were using scrypt as a KDF, but their scanning flagged it as unknown password encryption and insisted that we use SHA2 as a modern, secure hashing function. Weeks of long email threads, escalation, and several managers suggesting "just change it to satisfy them" followed. That's a clear example of mindless rule-following making a system less secure.
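
To spell out why their demand was backwards: SHA2 is a fast general-purpose hash, while scrypt is a deliberately slow, memory-hard KDF built for passwords. A minimal sketch using Python's hashlib (function names and parameter values are illustrative, not our actual configuration):

    import hashlib
    import os

    def hash_password_scrypt(password: str) -> tuple[bytes, bytes]:
        salt = os.urandom(16)
        # scrypt is memory-hard with a tunable cost, so offline brute force
        # against a leaked hash is throttled by design. Parameters here are
        # illustrative (n=2**14, r=8, p=1 uses ~16 MiB per guess).
        digest = hashlib.scrypt(password.encode(), salt=salt,
                                n=2**14, r=8, p=1, dklen=32)
        return salt, digest

    def hash_password_sha256(password: str) -> bytes:
        # What the scanner demanded. SHA-256 is fine as a hash function,
        # but it is fast: commodity GPUs can test billions of password
        # guesses per second against a leaked digest.
        return hashlib.sha256(password.encode()).digest()

Swapping the first function for the second would have satisfied the tool while making real attacks dramatically cheaper.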

Blocking remote desktop forwarding of security keys is also a fun one.

balls187 4 days ago | parent | prev [-]

Funny, I read that quote and assumed it meant something unsavory, and not, say, root access to an AWS account.