| ▲ | gopiandcode 4 hours ago |
| Repeating myself: when we speak of bugs in a verified software system, I think it's fair to consider the entire binary a fair target. If a buffer overflow causes the system to be exploited and all your bitcoins to be stolen, the fact that the bug was in the language runtime isn't going to be much consolation, especially if the software you were running was advertised as formally verified to be free of bugs. Second, there was a bug in the code. Maybe not a functional correctness bug, but I, along with most end users, would consider a crashing program buggy. Maybe we just have different tastes or different standards on what counts as an acceptable level of software quality. W.r.t. people running Lean in production, you'd be surprised... |
|
| ▲ | danparsonson 2 hours ago | parent | prev | next [-] |
| We're not speaking about bugs in a verified system so much as writing articles making specific claims about that. Surely if we're at the level of precision of formal verification, it's incumbent upon us to be precise about the nature of a problem with it, no? "Lean proved this program correct and then I found a bug" heavily implies a flaw in the proof, not a flaw in the runtime (which to my mind would also be a compelling statement, for the reasons you describe). |
| |
| ▲ | gopiandcode 2 hours ago | parent [-] | | Sorry, I'm not sure I follow. We are talking about bugs in a verified system; in this case, a verified implementation of a zlib-based compression tool. Did it have bugs? Yes, several in fact. I'd recommend reading the article for a detailed listing of the bugs in the tool. | | |
| ▲ | danparsonson an hour ago | parent | next [-] | | Further to my earlier reply, a more succinct way to look at it might be: - When they fix the runtime, bug A goes away, so the proof still holds and the zlib code is still correct. - When they add a system of proofs for the parser and modify that, bug B goes away, so the proof still holds and the zlib code is still correct; and now more of the library is proven correct. The formulation of the title is "I was told X but that's not true"... but that's not true. You were told X, and X is true, but you found Y and Z, which are also important. | |
| ▲ | danparsonson an hour ago | parent | prev | next [-] | | You give the answers to this in the article: > The most substantial finding was a heap buffer overflow! Not in lean-zip's code, but in the Lean runtime itself. (emphasis mine) > The OOM denial-of-service is straightforward: the archive parser was never verified. (me again) "Lean proved this program correct" vs "I found bugs in the parts that Lean didn't prove correct". The failure was not in Lean's proof (which, as I said, is heavily implied by the title), but in ancillary or unverified code. Do I misunderstand how Lean works? I am by no means an expert (or even an amateur) on Lean or formal systems in general. Surely the first class of bug could be found in any code that uses the framework, and the second class of bug could happen to any system that isn't proven? Does that make sense? Otherwise, where's the boundary? If you find a bug in the OS, does that mean Lean failed somehow? How about the hardware? If your definition of a 'formally verified system' goes beyond the code being verified, and the most important thing is whether or not you can make it crash, then the OS and the hardware are also part of the system. Of course bugs are important to users regardless of the cause, but the audience for your article seems to be software engineers, and as a software engineer I found your article interesting and you found something useful; the title was misleading, though, and that's all I'm saying. | |
| ▲ | vntok an hour ago | parent | prev [-] | | Say you study some piece of software. And it happens that it has an automated suite of tests. And it happens that some files aren't covered by the test suite. And you happen to find a bug in one of those files that were not covered. Would you publish a blog post titled "the XXX test suite proved there was no bug. And then I found one"? It would be a bit silly, right? | | |
| ▲ | necovek 30 minutes ago | parent [-] | | A test suite never proves anything except that the code works for the test cases in it. But yes, it would be equivalent to stating "No tests failed but I found a bug" and one would correctly deduce that test coverage is insufficient. |
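
The contrast necovek draws can be made concrete with a small Lean sketch (hypothetical function, not from the article or lean-zip): example-based tests only check the inputs they mention, while a theorem must hold for every input, and here it can't.

```lean
-- Hypothetical function: meant to be |a - b|, but Nat subtraction
-- truncates at zero, so absDiff 3 5 = 0 rather than 2.
def absDiff (a b : Nat) : Nat := a - b

-- These "tests" both pass, because neither hits the broken case:
example : absDiff 5 3 = 2 := rfl
example : absDiff 7 7 = 0 := rfl

-- A proof would have to cover all inputs, and this one is unprovable:
-- the statement is false at a = 3, b = 5.
-- theorem absDiff_comm (a b : Nat) : absDiff a b = absDiff b a := by ...
```

In other words, a green test suite says "no bug among these cases"; only a proof says "no bug among all cases", and only for the property it states.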
|
|
|
|
| ▲ | saithound 2 hours ago | parent | prev | next [-] |
> Repeating myself, when we speak of bugs in a verified software system, I think it's fair to consider the entire binary a fair target. Yes, and that would be relevant if this were a verified software system. But it wasn't: the system consisted of a verified X and an unverified Y, and the issues were in the unverified Y. The article explicitly acknowledges this: "The two bugs that were found both sat outside the boundary of what the proofs cover." |
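
One way to picture that boundary is a Lean sketch (all names hypothetical, with `sorry` standing in for real implementations and proofs; this shows the shape of the split, not lean-zip's actual API): the verified core carries a theorem, while the parser in front of it carries none.

```lean
-- Verified X: the compression core, covered by a round-trip theorem.
def deflate (bs : ByteArray) : ByteArray := sorry
def inflate (bs : ByteArray) : ByteArray := sorry

theorem inflate_deflate (bs : ByteArray) :
    inflate (deflate bs) = bs := sorry
-- ^ the proof boundary ends here

-- Unverified Y: the archive parser. No theorem constrains it, so a
-- malicious header can still drive it to unbounded allocation (the
-- OOM bug), without contradicting anything proved above.
structure Entry where
  name : String
  data : ByteArray

def parseArchive (bs : ByteArray) : List Entry := sorry
```

Nothing in `inflate_deflate` says anything about `parseArchive`, which is exactly why bugs there don't invalidate the proof.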
|
| ▲ | monocasa 2 hours ago | parent | prev | next [-] |
Totally agreed. For instance, that's why seL4 just throws the whole binary at the proof engine. That takes any runtime (minimal in seL4's case) and compiler (not nearly as minimal) out of the TCB. |
|
| ▲ | dyauspitr 2 hours ago | parent | prev [-] |
| Well then formally verify the language system. I’m not sure what the confusion is. They didn’t say the whole system is formally verified. |