| ▲ | Denvercoder9 3 hours ago |
| Two things can be true simultaneously: the Linux kernel ecosystem should have done better at communicating this to their downstreams, and publicly sharing the exploit was irresponsible. It is not the responsibility of the initial reporter to communicate to distributions, but the fact that those responsible failed to do so doesn't give everybody else a free pass. |
|
| ▲ | da_chicken 2 hours ago | parent | next [-] |
No, this was already timed disclosure. This is very common and widely accepted. 90+30 is what Google Project Zero uses, for example. The security researcher has met their ethical requirements already. This is entirely on the kernel's security team for failure to communicate downstream. That is their responsibility. The thing is, malicious actors are already monitoring most major projects and doing either source analysis or binary analysis to figure out if changes were made to patch a vulnerability. So, as soon as you actually patch, you really need to disclose, because all you're doing by not disclosing the vulnerability is handing the bad actors a free go. The black hats already know. You need to tell the white hats, too, so they can patch. |
| ▲ | Denvercoder9 2 hours ago | parent [-] | | I'm not advocating for delaying the disclosure at all; my point is that if you see your initial disclosure to the kernel didn't go anywhere, being responsible means putting in a little extra effort to ensure the fix is picked up before you disclose. | | |
| ▲ | da_chicken an hour ago | parent [-] | | "Didn't go anywhere"? The kernel devs patched it! They patched it weeks ago! The kernel security team needs to communicate security problems in their own releases, because that is where the distros are already looking. Requiring the security researcher to do it is insane. Should a security researcher that identifies a vulnerability in electron.js need to identify every possible project using electron.js to communicate with them the vulnerability exists? No. That's absurd. | | |
| ▲ | opello 39 minutes ago | parent | next [-] | |
> Should a security researcher that identifies a vulnerability in electron.js need to identify _every_ possible project using electron.js to communicate with them the vulnerability exists? No. That's absurd.

But this is a false comparison, right? The scopes of "Linux distributions" and "electron apps" are orders of magnitude different. If the reporter had spot-checked one or two of the most popular distributions to see if fixes had been adopted, that seems like a nice extra level of diligence before publicizing the details. It doesn't seem "insane" as much as "not the most efficient path", as has already been well argued. But it also doesn't seem unreasonable to think that in a project of the scope of the Linux kernel, with the potential impact of fairly effective(?) privilege escalation, some extra consideration is reasonable--certainly not "insane" at the very least? | |
| ▲ | tptacek 35 minutes ago | parent [-] | | They embargoed their vulnerability for 30 days after Linux landed a kernel patch. They did their part. You will always be able to come up with other things they could do for you, and they will always at first blush sound reasonable because of how big and important Linux is, but none of those things will be responsibilities of the vulnerability researcher. Their job is to bring information to light, not to manage downstreams. About half the thread we're on reads as if the commenters believe Xint made this vulnerability. They did not: they alerted you to it. It was already there. | | |
| ▲ | opello 28 minutes ago | parent [-] | |
I realize you've been championing this idea in the thread, and I admire it because I also recognize the misdirected blame. Please understand I do not harbor "blame" for the researchers.

> Their job is to bring information to light, not to manage downstreams.

The researchers are also members of a community in which more harm than is necessary may be dealt by their actions. Nuance must exist in evaluating "reasonable" and "responsible" in the context of actions. | | |
| ▲ | tptacek 24 minutes ago | parent [-] | | I strongly disagree. I want the information. I don't want to wait longer to find out about critical vulnerabilities so that researchers can fully genuflect to whatever Linux distribution norms people on message boards have. Their "actions" were to disclose a vulnerability that already existed and was putting people at risk. It's an absolute good. If it helps you out any, even though my logic was absolutely the same and just as categorical in 2012 as it is today: there are now multiple automated projects that run every merged Linux commit through frontier models to scope them (the status quo ante of the patch) out for exploitability, and then add them to libraries of automatically-exploitable bugs. People here are just mad that they heard about the bug. Serious attackers had this the moment it hit the kernel. This whole debate is kind of farcical. It's about a "real time" response this week to a disaster that struck a month ago. |
|
|
| ▲ | tptacek an hour ago | parent | prev [-] | | In the airless void of a message board thread, of course they should. What does it cost a commenter to demand that? |
|
|
|
|
| ▲ | john_strinlai 3 hours ago | parent | prev | next [-] |
>publicly sharing the exploit was irresponsible

they did it in the established industry standard way that probably every single security researcher you can think of follows (for good reason, i would add). whoever did the marketing on "responsible disclosure" was a genius. tptacek says it much better than me: ""Responsible disclosure" is an Orwellian term cooked up between @Stake and Microsoft and other large vendors to coerce researchers into synchronizing with vendor release schedules." |
| ▲ | Denvercoder9 3 hours ago | parent [-] | | In my world, responsibility is not just checking a box of following industry practice. Responsibility, as Wikipedia puts it on their social responsibility page, is working together with others for the benefit of the community. And yes, sometimes that's a bit larger burden than would ideally be the case. It's an imperfect world, after all -- and let's not forget the disclosure as it happened also placed a larger burden than ideal on people scrambling to patch. And it's not as if I'm asking for a lot of effort. One mail to the security team of a popular distro "hey, we have found this LPE that we'll release with exploit next week, it's patched upstream already in this commit, but you don't seem to have picked it up" would likely have been enough. | | |
| ▲ | da_chicken 2 hours ago | parent [-] | | No. The problem is that vendors and developers have repeatedly shown that if you give them an inch, they take a mile. Look at exactly what happened with BlueHammer this month. The security researcher went full disclosure because Microsoft didn't listen to their reports. Disclosure is vital. It's essential. Because the truth is, if a security researcher has found it, it's extremely likely that it's already been found by either black hats or by state actors. Ignorance is not actually protection from exploitation. The security researcher also has a responsibility to the general public that is still actively using vulnerable software in ignorance. They need to be protected from vendor and developer negligence as well as from exploits. And the only way to protect yourself from an exploit that hasn't yet been patched is to know that it is there. | | |
| ▲ | Denvercoder9 2 hours ago | parent | next [-] | | The situation with e.g. BlueHammer is fundamentally different: there, the only party that could act on it (Microsoft) ignored them. In this case, the parties that could act on it weren't notified at all. I'm also not proposing delaying the disclosure to the general public at all. They already waited 30 days with that, that's fine. Just look a bit further than your checklist of only contacting upstream, and send a mail to the distributions a week or two before disclosure if they haven't picked up the fix. | |
| ▲ | tptacek an hour ago | parent [-] | | Downstream vulnerability disclosure is a negotiation between the downstreams and the upstreams. It is not the job of a vulnerability researcher to map this out perfectly (or at all). |
| ▲ | throw0101a an hour ago | parent | prev [-] | |
> The problem is that vendors and developers have repeatedly shown that if you give them an inch, they take a mile.

[citation needed] Is there any evidence that Linux distros (specifically) act in this way? Or a particular distro? | |
| ▲ | john_strinlai an hour ago | parent | next [-] | |
>[citation needed]

there is ~3 decades of citations you can look at, spread out over every security mailing list, security conference, etc. that you can think of. one decent start is https://projectzero.google/vulnerability-disclosure-faq.html... "Prior to Project Zero our researchers had tried a number of different disclosure policies, such as coordinated vulnerability disclosure. [...] We used this model of disclosure for over a decade, and the results weren't particularly compelling. Many fixes took over six months to be released, while some of our vulnerability reports went unfixed entirely! We were optimistic that vendors could do better, but we weren't seeing the improvements to internal triage, patch development, testing, and release processes that we knew would provide the most benefit to users. [...] While every vulnerability disclosure policy has certain pros and cons, Project Zero has concluded that a 90-day disclosure deadline policy is currently the best option available for user security. Based on our experiences with using this policy for multiple years across thousands of vulnerability reports, we can say that we're very satisfied with the results. [...] For example, we observed a 40% faster response time from one software vendor when comparing bugs reported against the same target over a 7-year period, while another software vendor doubled the regularity of their security updates in response to our policy."

>Linux distros (specifically) act in this way

carving out special exceptions based on nebulous criteria is a bad idea. 90+30 is what has been settled on, and mostly works. | |
| ▲ | da_chicken an hour ago | parent | prev [-] | | Really? Because a situation where the development team fails to appreciate the severity of a security vulnerability, and where the established procedure requires the researcher and not the kernel team to communicate with downstream users, is already a major failure of process. Security is not just patching the vulnerability, and it seems that the Linux kernel developers or the Linux kernel security team does not understand that. This is the result of that failure. If this were any other software, we'd be here with pitchforks and torches. The researcher gave the developers timed disclosure, and even waited until after the developers had patched the issue. And... it's still a problem. |
|
|
|
|
|
| ▲ | x4132 2 hours ago | parent | prev [-] |
| so what? we should never disclose anything? this will only result in companies suppressing disclosure and leaving vulnerabilities unpatched. |