john_strinlai 4 hours ago

i have no problem with disclosing a vulnerability 30 days after it's patched in the thing you reported it to. (in fact, for those unaware, this is the same policy that google's project zero uses: "90+30" https://projectzero.google/vulnerability-disclosure-policy.h...)

the real problem is:

>It's also worrying that it seems there's no communication between the kernel security team and distribution maintainers.

the reporter should not be the one responsible for reporting separately to every single downstream of the thing they found a vuln in.

what should be happening, as you allude to, is a communication channel between the kernel security team and distribution maintainers. they are in a much better position to coordinate and communicate with the maintainers than random reporters are.

the minute the patch landed in the kernel, a notification should have gone out from the kernel team to a curated list of distro security folk that communicated the importance of the patch, and that the public disclosure would be in 30 days.

fresh_broccoli an hour ago | parent | next [-]

>the reporter should not be the one responsible for reporting separately to every single downstream of the thing they found a vuln in.

Not "separately to every single downstream", there is the "linux-distros" mailing list for disclosures: https://oss-security.openwall.org/wiki/mailing-lists/distros

This random blogpost from 2022 serves as proof that disclosing kernel vulnerabilities to the distros list is a well-known practice: https://sam4k.com/a-dummys-guide-to-disclosing-linux-kernel-...

I agree it's a shame that the process isn't more streamlined and the kernel developers aren't forwarding the reports to the distros list.

tptacek an hour ago | parent | next [-]

It is literally not the vulnerability researcher's job to solve or address this.

troad 14 minutes ago | parent | prev [-]

Why is it the job of the kernel to notify the distros? Why isn't it the job of the distros to keep up on upstream security disclosures?

Expecting a FOSS project to go track down all of its (millions of?) users seems like a very unreasonable expectation, and is well outside of their scope of responsibility.

People have gotten so used to the GitHub flavour of free-labour, social-network-style FOSS that they've forgotten what all those LICENSE files actually say: they exist to make it explicitly clear that the devs are not responsible to you for your issues, up to and including the software setting your house on fire. If you don't like it, you don't have to use it.

qotgalaxy 6 minutes ago | parent [-]

[dead]

staticassertion 3 hours ago | parent | prev | next [-]

> they are in a much better position to coordinate and communicate with the maintainers than random reporters are.

They openly refuse to do this and have been given authority by MITRE to work against any such process.

john_strinlai 3 hours ago | parent [-]

right, which is why it is confusing that the animosity is aimed at the reporters rather than the kernel security team.

expedition32 37 minutes ago | parent [-]

Not really confusing. Linux is a sacred cow.

There would be a lot of people gloating if this happened to MS.

ori_b 4 hours ago | parent | prev | next [-]

If the maintainers were unresponsive, sure -- but it seems slightly hard to buy that a responsible reporter trying to make a big splash and a good impression wouldn't first check "did this make it out to the distros?" before making sysadmins' days real shitty, even if technically they could point fingers at other parties. At which point, if they're paying any attention at all to what they reported, they may have realized that a mistake was made.

john_strinlai 4 hours ago | parent [-]

it's an industry standard disclosure process. 90 days after reporting, or 30 days after the patch lands, the vuln is disclosed.

the linux kernel team is in a 10000% better position to communicate to and coordinate their downstreams. it seems completely backwards to me to suggest that the reporter should be responsible for figuring out every possible downstream and opening up separate reports to each of them.

the kernel team should have a process/channel to say "this is important! disclosure is in 30 days" that is received by distro security teams. because this is not the first or last time the kernel will have a local privilege escalation. hoping that every reporter, forever in the future, will take the onus on themselves is a recipe for disappointment.

ori_b 3 hours ago | parent | next [-]

Yes, it's just incompetence from everyone involved, not malice. The company making the disclosure doesn't actually care, and the kernel processes are ineffective.

tptacek 3 hours ago | parent [-]

No, it's incompetence from everyone involved except the company making the disclosure, which followed the existing norms, despite the fact that those norms are not actually binding (as people downthread seem to believe).

ori_b 3 hours ago | parent [-]

Really? It seems very odd to not check in on the status of the fixes, even if it's technically possible to pass the blame to other people.

Even if the only purpose of looking at the status is to make yourself look good in marketing materials, it's surprising that it didn't happen.

9question1 2 hours ago | parent | next [-]

`it's technically possible to pass the blame to other people` presupposes that the blame belongs to the reporter unless effort is taken to "shift" it. This is just an inaccurate worldview, as many people have pointed out clearly in this discussion. If there's a vulnerability in software, the blame lies with the people who wrote and maintain the software, not someone who finds and discloses a vulnerability. The person who should `check in on the status of the fixes` is the person who owns the thing being fixed, which is very much the kernel and distro maintainers and not the security researcher. It is you who are willfully shifting blame to an innocent party.

Joker_vD 2 hours ago | parent | prev [-]

One of the reasons this unavoidable deadline was invented is that the alternative is that one company (or all of them) can simply decide to ignore the vuln report, and then the vulnerability will stay forever undisclosed and forever out there in the wild. And the prisoner's dilemma suggests that most companies would choose "do nothing" in this scenario: they don't have to do anything, and if the vuln stays undisclosed, it probably won't be exploited anyhow. Win-win!

ori_b 2 hours ago | parent [-]

I'm confused. Can you explain how this applies to the current situation, where no vuln reports were submitted to the groups responsible for distributing patches?

john_strinlai 2 hours ago | parent | next [-]

>where no vuln reports were submitted to the groups responsible for distributing patches?

the vulnerability report was submitted to the kernel security team and appropriate kernel maintainers. those are the people responsible for patching the kernel, which they did 30 days ago.

ori_b 2 hours ago | parent [-]

I see, may the people who are responsible for the infrastructure you depend on be less concerned about shifting blame than you are.

john_strinlai 2 hours ago | parent [-]

imagine you use a dependency in your code. like left-pad. and some vulnerability is found in left-pad.

is the reporter of that vulnerability responsible for finding and submitting a vulnerability report to every single piece of software that uses left-pad? all ~millions of them?

or do they submit the report to left-pad, get them to fix it at the source, and trust that the people relying on left-pad will update their software like they should when they see a security-relevant update is available?

Joker_vD 24 minutes ago | parent | prev [-]

> the groups responsible for distributing patches?

Those groups don't exist, to my knowledge. And probably can't, realistically speaking.

bragr 3 hours ago | parent | prev [-]

The problem is that if you make too big of a deal about a particular patch, then someone just reverse engineers the vuln from the fix and your responsible disclosure period doesn't exist anymore.

Gentoo has to take some blame too for not keeping all the kernels they maintain patched in a timely way.

tremon 27 minutes ago | parent | next [-]

> Gentoo has to take some blame too for not keeping all the kernels they maintain patched in a timely way.

How do you figure that? From what I could tell from the earlier post, the fix has only been backported to 6.18 and later, and as TFA indicates the distros were not informed of the security implications of this fix. All distros shipping a major kernel version from more than a year ago -- and that includes all LTS kernels -- are vulnerable, regardless of how "timely" their patch schedules follow upstream.

john_strinlai 3 hours ago | parent | prev [-]

you minimize this with the curated contact list.

the baddies are looking at every patch anyways.

Denvercoder9 3 hours ago | parent | prev [-]

Two things can be true simultaneously: the Linux kernel ecosystem should have done better at communicating this to their downstreams, and publicly sharing the exploit was irresponsible.

It is not the responsibility of the initial reporter to communicate to distributions, but the fact that those responsible failed to do that, doesn't give everybody else a free pass.

da_chicken 3 hours ago | parent | next [-]

No, this was already timed disclosure. This is very common and widely accepted. 90+30 is what Google Project Zero uses, for example. The security researcher has met their ethical requirements already. This is entirely on the kernel's security team for failure to communicate downstream. That is their responsibility.

The thing is, malicious actors are already monitoring most major projects and doing either source analysis or binary analysis to figure out if changes were made to patch a vulnerability. So, as soon as you actually patch, you really need to disclose, because all you're doing by not disclosing the vulnerability is handing the bad actors a free go. The black hats already know. You need to tell the white hats, too, so they can patch.

Denvercoder9 2 hours ago | parent [-]

I'm not advocating for delaying the disclosure at all; my point is, if you see your initial disclosure to the kernel didn't go anywhere, to be responsible is to put in a little extra effort to ensure the fix is picked up before you disclose.

da_chicken 2 hours ago | parent [-]

"Didn't go anywhere"? The kernel devs patched it! They patched it weeks ago! The kernel security team needs to communicate security problems in their own releases, because that is where the distros are already looking.

Requiring the security researcher to do it is insane. Should a security researcher that identifies a vulnerability in electron.js need to identify every possible project using electron.js to communicate with them the vulnerability exists? No. That's absurd.

opello 43 minutes ago | parent | next [-]

> Should a security researcher that identifies a vulnerability in electron.js need to identify _every_ possible project using electron.js to communicate with them the vulnerability exists? No. That's absurd.

But this is a false comparison, right? The scope of "Linux distributions" and "electron apps" are orders of magnitude different. If the reporter spot checked one or two of the most popular distributions to see if fixes had been adopted, that seems like an extra level of nice diligence before publicizing the details.

It doesn't seem "insane" as much as "not the most efficient path" as has already been well argued. But it also doesn't seem unreasonable to think in a project of the scope of the Linux kernel, with the potential impact of fairly effective(?) privilege escalation, some extra consideration is reasonable--certainly not "insane" at the very least?

tptacek 38 minutes ago | parent [-]

They embargoed their vulnerability for 30 days after Linux landed a kernel patch. They did their part. You will always be able to come up with other things they could do for you, and they will always at first blush sound reasonable because of how big and important Linux is, but none of those things will be responsibilities of the vulnerability researcher. Their job is to bring information to light, not to manage downstreams.

About half the thread we're on reads as if the commenters believe Xint made this vulnerability. They did not: they alerted you to it. It was already there.

opello 31 minutes ago | parent [-]

I realize you've been championing this idea in the thread, and I admire it because I also recognize the misdirected blame. Please understand I do not harbor "blame" for the researchers.

> Their job is to bring information to light, not to manage downstreams.

The researchers are also members of a community in which more harm than is necessary may be dealt by their actions. Nuance must exist in evaluating "reasonable" and "responsible" in the context of actions.

tptacek 27 minutes ago | parent [-]

I strongly disagree. I want the information. I don't want to wait longer to find out about critical vulnerabilities so that researchers can fully genuflect to whatever Linux distribution norms people on message boards have. Their "actions" were to disclose a vulnerability that already existed and was putting people at risk. It's an absolute good.

If it helps you out any, even though my logic was absolutely the same and just as categorical in 2012 as it is today: there are now multiple automated projects that run every merged Linux commit through frontier models to scope them (the status quo ante of the patch) out for exploitability, and then add them to libraries of automatically-exploitable bugs.

People here are just mad that they heard about the bug. Serious attackers had this the moment it hit the kernel. This whole debate is kind of farcical. It's about a "real time" response this week to a disaster that struck a month ago.

opello 3 minutes ago | parent [-]

I do get that, this era of automation is too responsive to not go public to provoke action. I think I might just be wistful of an era in which the alternate path might have made a difference. Sorry to pile on.

tptacek an hour ago | parent | prev [-]

In the airless void of a message board thread, of course they should. What does it cost a commenter to demand that?

john_strinlai 3 hours ago | parent | prev | next [-]

>publicly sharing the exploit was irresponsible

they did it in the established industry standard way that probably every single security researcher you can think of follows (for good reason, i would add).

whoever did the marketing on "responsible disclosure" was a genius.

tptacek says it much better than me: ""Responsible disclosure" is an Orwellian term cooked up between @Stake and Microsoft and other large vendors to coerce researchers into synchronizing with vendor release schedules."

Denvercoder9 3 hours ago | parent [-]

In my world, responsibility is not just checking a box of following industry practice. Responsibility, as Wikipedia puts it on their social responsibility page, is working together with others for the benefit of the community. And yes, sometimes that's a bit larger burden than would ideally be the case. It's an imperfect world, after all -- and let's not forget the disclosure as it happened also placed a larger burden than ideal on people scrambling to patch.

And it's not as if I'm asking for a lot of effort. One mail to the security team of a popular distro "hey, we have found this LPE that we'll release with exploit next week, it's patched upstream already in this commit, but you don't seem to have picked it up" would likely have been enough.

da_chicken 2 hours ago | parent [-]

No.

The problem is that vendors and developers have repeatedly shown that if you give them an inch, they take a mile. Look at exactly what happened with BlueHammer this month. The security researcher went full disclosure because Microsoft didn't listen to their reports.

Disclosure is vital. It's essential. Because the truth is, if a security researcher has found it, it's extremely likely that it's already been found by either black hats or by state actors. Ignorance is not actually protection from exploitation.

The security researcher also has a responsibility to the general public that is still actively using vulnerable software in ignorance. They need to be protected from vendor and developer negligence as well as from exploits. And the only way to protect yourself from an exploit that hasn't yet been patched is to know that it is there.

Denvercoder9 2 hours ago | parent | next [-]

The situation with e.g. BlueHammer is fundamentally different: there, the only party that could act on it (Microsoft) ignored them. In this case, the parties that could act on it weren't notified at all.

I'm also not proposing delaying the disclosure to the general public at all. They already waited 30 days with that, that's fine. Just look a bit further than your checklist of only contacting upstream, and if the distributions haven't picked up the fix, send them a mail a week or two before disclosure.

tptacek an hour ago | parent [-]

Downstream vulnerability disclosure is a negotiation between the downstreams and the upstreams. It is not the job of a vulnerability researcher to map this out perfectly (or at all).

throw0101a 2 hours ago | parent | prev [-]

> The problem is that vendors and developers have repeatedly shown that if you give them an inch, they take a mile.

[citation needed]

Is there any evidence that Linux distros (specifically) act in this way? Or a particular distro?

john_strinlai an hour ago | parent | next [-]

>[citation needed]

there is ~3 decades of citations you can look at, spread out over every security mailing list, security conference, etc. that you can think of.

one decent start is https://projectzero.google/vulnerability-disclosure-faq.html...

"Prior to Project Zero our researchers had tried a number of different disclosure policies, such as coordinated vulnerability disclosure. [...] We used this model of disclosure for over a decade, and the results weren’t particularly compelling. Many fixes took over six months to be released, while some of our vulnerability reports went unfixed entirely! We were optimistic that vendors could do better, but we weren’t seeing the improvements to internal triage, patch development, testing, and release processes that we knew would provide the most benefit to users.

[...]

While every vulnerability disclosure policy has certain pros and cons, Project Zero has concluded that a 90-day disclosure deadline policy is currently the best option available for user security. Based on our experiences with using this policy for multiple years across thousands of vulnerability reports, we can say that we’re very satisfied with the results.

[...]

For example, we observed a 40% faster response time from one software vendor when comparing bugs reported against the same target over a 7-year period, while another software vendor doubled the regularity of their security updates in response to our policy."

>Linux distros (specifically) act in this way

carving out special exceptions based on nebulous criteria is a bad idea. 90+30 is what has been settled on, and mostly works.

da_chicken an hour ago | parent | prev [-]

Really?

Because a situation where the development team fails to appreciate the severity of a security vulnerability, and where the established procedure requires the researcher rather than the kernel team to communicate with downstream users, is already a major failure of process. Security is not just patching the vulnerability, and it seems that the Linux kernel developers or the Linux kernel security team do not understand that.

This is the result of that failure.

If this were any other software, we'd be here with pitchforks and torches. The researcher gave the developers timed disclosure, and even waited until after the developers had patched the issue. And... it's still a problem.

x4132 2 hours ago | parent | prev [-]

so what? we should never disclose anything? this will only result in companies suppressing disclosure and leaving vulnerabilities unpatched.