jcgrillo 2 days ago

I don't think fines are enough of an incentive. They're too easy to evade and insufficiently consequential to the people who are actually shipping code. Moreover, making them enormous (as you aptly put it, "valuation-cratering") unfairly punishes people who are not directly responsible for the failure. Instead, as in other engineering disciplines, engineers need to be personally liable for the consequences of failure. Not necessarily every engineer, just as not every mechanical engineer needs to be a P.E., but someone directly responsible for the quality of the work needs to stake their reputation on it and suffer the consequences when it fails.

adrianN 2 days ago | parent | next [-]

In practice this would mean that you need to show conformance to some kind of security process. The actual outcome of that process is of secondary importance as long as you can show that you’re compliant. Very carefully written process documents _can_ improve things, but my confidence in security processes is low for companies without intrinsic motivation.

I think one can reasonably argue that sufficiently large fines that don't have a "but we followed ISO-xyz" loophole could produce better outcomes. The difficult part is making the companies care about existential tail risks.

TheRealDunkirk 2 days ago | parent | next [-]

Companies are already following a bunch of standards like SOX, SOC 2, HIPAA, etc., and documenting that they check all of the boxes, but incidents still happen every week.

zingababba a day ago | parent [-]

I say this all the time: corporate security is 100% a game. Unless you are part of the small group of people literally working on exploit dev, you are feeding the security-delusion-as-a-service apparatus. Also, contrary to how things started (phrack, L0pht, zines, etc.), your average corporate security drone is almost universally a dull-witted, uninspired specimen.

jcgrillo 2 days ago | parent | prev [-]

Yes, it'll generate a lot of super annoying paperwork. But, hopefully, it will also tighten up software engineering standards. It has worked well in other disciplines.

adrianN 2 days ago | parent [-]

There already are areas where such standards exist, e.g. safety-critical applications in aviation. Arguably the defect rate there _is_ lower, but I still think this method of achieving it is quite inefficient. And I think it's a lot easier to define a process for writing aviation software that doesn't crash than for writing software that is difficult to hack.

jcgrillo 2 days ago | parent [-]

The missing piece is the requirement for a certified Professional Engineer to sign off on the system. That decouples the incentives from the corporate objectives, and makes it personal. We need that kind of professional accountability in software, otherwise it'll continue to be bad.

adrianN 2 days ago | parent | next [-]

It is my understanding that personal responsibility already exists in safety critical software development.

jcgrillo 2 days ago | parent [-]

I hadn't actually heard of this; I've never worked on safety-critical systems. But doing some googling, I found a reference to a "Designated Engineering Representative" in this article about the FAA's DO-178B: https://en.wikipedia.org/wiki/DO-178B

I wasn't able to find much information about U.S. P.E. certification for SWEs, although there is at least one state that offers it. I wasn't able to find any indication anywhere that a compliance process requires a P.E. to sign off on software. That doesn't mean it doesn't exist, though!

One major problem is that now that software is "everywhere" it's escaping the boundaries of safety critical standards. Nobody will be killed directly by a bank getting hacked, but it could result in mortal harm to an individual who has their identity stolen. There are all kinds of systems that aren't labeled safety critical in the kinetic sense which are nonetheless very load-bearing. Software which runs on phones, for example. Surely people have died due to buggy phone software. Nobody is being held meaningfully accountable, so it will continue to happen.

To be clear, I'm not saying we should heap a whole lot more pressure onto security teams. Instead we need to find better ways to make security every engineer's professional ethical responsibility--either directly because they're signing off on the system or indirectly because their respected senior colleague is. I just don't see fines getting us there.

adrianN a day ago | parent [-]

I understand your argument, but I feel that good development practices are essentially a company culture question. Putting additional burden on the engineers does not sit well with me, as in my experience many engineers already care more about quality and security than their management is willing to staff for. If we make it easier for some engineers to become scapegoats for management failures, this might actually backfire.

lenerdenator 2 days ago | parent | prev [-]

> Moreover, making them enormous (as you put it well "valuation-cratering") unfairly punishes people who are not directly responsible for the failure.

For better or for worse, that's how we've set up our system. The entire point of incorporation is to separate the people working at a company from the company, legally speaking. The most they can really do is fire you.

With respect to valuation-cratering, that's supposed to be considered fair in our system. If a bunch of shareholders elect a board that lets a C-suite operate a company with lax security culture, they're ultimately responsible for the losses they incur. It's only fair; they're the ones getting the profits, too.

I put emphasis on "supposed" because we don't really do this anymore. Instead of expecting shareholders to take the bath, we shift the loss onto the customers, who have to put up with the consequences of identity theft out of their own pockets.

jcgrillo a day ago | parent [-]

In some professions incorporation doesn't shield you from malpractice claims. We have a software quality problem in tech, to the point where buggy, broken software has become completely normal. Security vulnerabilities are so commonplace and routine that companies hire entire departments to respond to security breaches.

We don't have bridges falling down left and right, airplane crashes aren't common, trains don't leave the rails every day. Software is all grown up now, and we need to grow up as a discipline whether we're ready or not. It'll get a lot worse before this happens, but I'm convinced it will happen eventually.

lenerdenator a day ago | parent [-]

That would require us to have the equivalent of a bridge falling down.

I'd say some of that depends on the domain that the software is developed for. I've spent most of my 12 years developing software in healthcare IT. Typically, you don't see too many critical (meaning life-threatening) bugs in EMR/EHR software, which is one of those domains where you'd think it'd be easy to run into that sort of thing. Most of the problems in the domain have to do more with data access being granted or obtained by someone who shouldn't have it. You won't die or get seriously injured as a result of the software, but some guy in a dank basement outside Moscow might know you need your knee replaced.

A lot of that comes from the fact that software for systems that could have a bridge-collapse level of impact is already certified as part of a larger regulatory scheme for the domain in which the software operates. Healthcare and avionics software instantly come to mind. A lot of people in the KC area make their living writing software for those domains, and while they aren't required to have an engineering license to do so, their wares have to be vetted thoroughly enough that they effectively work at the same level.

You'd need to convince lawmakers to set up a regulating body that tracks business and consumer software for security in the same way we do EHRs for patient safety risks.

jcgrillo a day ago | parent [-]

It's more of a slow burn than any big event you can point to, so it's possible nothing will get done. Hopefully the political pressure will develop as the political class becomes more tech savvy. They will, over time, as younger people who grew up with technology replace the older generation. If the AI bubble pops in a way that causes a financial crisis I think we'll probably see a lot more scrutiny of the tech industry in general, which could lead to progress in quality. Maybe it won't happen, but the current state of things getting shipped before they're done, constant outages, everything being kind of broken all the time... it just doesn't seem sustainable.