littlestymaar 3 days ago
> especially in such a complex codebase

You accidentally put your finger on the key point, emphasis mine. When you have a memory-unsafe language, the complexity of the whole codebase impacts your ability to uphold memory-related invariants. But unsafe blocks are, by definition, limited in scope, and assuming you design your codebase properly, they shouldn't interact with other unsafe blocks in a different module. So the complexity related to one unsafe block is in fact contained to its own module and doesn't spread outside. And that makes everything much more tractable, since you never have to reason about the whole codebase, only about a limited scope each time.
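To make that concrete, here's a minimal sketch of the usual pattern (the module and names are made up, not from the report): the unsafe block sits right next to the check that justifies it, and every other module only ever calls the safe wrapper.

    mod fixed_buffer {
        pub struct FixedBuffer {
            data: Vec<u8>,
        }

        impl FixedBuffer {
            pub fn new(len: usize) -> Self {
                FixedBuffer { data: vec![0; len] }
            }

            /// Safe wrapper: the bounds check here is the invariant
            /// that the unsafe block below relies on.
            pub fn get(&self, idx: usize) -> Option<u8> {
                if idx < self.data.len() {
                    // SAFETY: idx was checked against len just above.
                    Some(unsafe { *self.data.get_unchecked(idx) })
                } else {
                    None
                }
            }
        }
    }

    fn main() {
        let buf = fixed_buffer::FixedBuffer::new(4);
        assert_eq!(buf.get(2), Some(0));
        assert_eq!(buf.get(9), None); // out of range handled safely
    }

Whether `get` is sound depends only on the few lines inside this module; code elsewhere in the codebase can't invalidate that reasoning.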
menaerus 3 days ago
No, this is just an example of confirmation bias. You're given a totally unrealistic figure of 1 vuln per 200K/5M LoC and now you're hypothesizing why that could be so. Google, for anyone unbiased, lost credibility when they put this figure into the report. I wonder what their incentive for doing so was.

> But unsafe blocks are, by definition, limited in scope, and assuming you design your codebase properly, they shouldn't interact with other unsafe blocks in a different module. So the complexity related to one unsafe block is in fact contained to its own module and doesn't spread outside. And that makes everything much more tractable, since you never have to reason about the whole codebase, only about a limited scope each time.

Anyone who has written low-level code of substantial complexity knows that this is just wishful thinking. In such code, abstractions fall apart, and "the complexity related to one unsafe block is in fact contained to its own module and doesn't spread outside" is simply wrong, as I explained in my other comment here: UB taking place in an unsafe section will propagate into the rest of the "safe" code. UB is not "caught" or quarantined by some imaginary safety net at the boundary between the safe and unsafe sections.
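A trivial sketch of what I mean by UB not being quarantined (the function is hypothetical, not taken from any real codebase):

    // The SAFETY reasoning here is wrong: nothing guarantees v is non-empty.
    fn broken_first(v: &[i32]) -> i32 {
        unsafe { *v.get_unchecked(0) } // UB when v is empty
    }

    fn main() {
        let v: Vec<i32> = Vec::new();
        // This call site is 100% "safe" code, yet once the unsafe block's
        // assumption is violated the behaviour of the whole program is
        // undefined -- nothing stops the damage at the unsafe boundary.
        let x = broken_first(&v);
        println!("{x}");
    }

The unsafe block is a handful of tokens wide, but the consequences of getting it wrong are not confined to it.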
Xylakant 3 days ago
That's the other interesting observation you can draw from that report. The numbers in the first parts, about review times, rollback rates, etc., are broken down by change size, and the gap widens for larger changes. This indicates that Rust's language features support reasoning about complex changesets. It's not obvious to me which features are the relevant ones, but my general observation is that lifetimes, unsafe blocks, and the borrow checker allow people to reason about code in smaller chunks. For example, knowing that there's only one place where a variable may currently be mutated means you also know that no other code location can change it at the same time.
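Something like this small sketch (made-up variable names) is the aliasing rule I have in mind:

    fn main() {
        let mut counter = 0;

        let exclusive = &mut counter; // the only place that may mutate right now
        *exclusive += 1;

        // Uncommenting the next line is rejected by the borrow checker,
        // because `exclusive` is still live and used below:
        // let reader = &counter;

        *exclusive += 1;
        println!("{counter}"); // prints 2; the mutable borrow ended above
    }

When reviewing a change, that guarantee means you only have to look at the code that holds the mutable borrow, not at every other place the value is visible.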