| ▲ | achierius 3 days ago |
| The other 15/16 attempts would crash, though, and a bug that unstable is not practically usable in production: the crashes would be obvious to the user and would send diagnostics upstream, and once you stack a few of those 15/16 detection chances together, it's going to take quite a while to get lucky. |
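| To put a number on "quite a while": each stage that has to survive an independent tag check multiplies the odds, so a chain needing k lucky accesses succeeds roughly once in 16^k tries. A back-of-the-envelope sketch in C (illustrative numbers only, not anyone's real exploit statistics): |

    #include <math.h>
    #include <stdio.h>

    /* Illustrative only: if each corrupted access survives a uniform
     * 4-bit tag check with probability 1/16, then chaining k such
     * accesses survives with probability (1/16)^k. */
    int main(void) {
        for (int k = 1; k <= 4; k++) {
            double survive = pow(1.0 / 16.0, k);
            printf("%d stacked checks: %.4f%% survival, ~1 in %.0f tries\n",
                   k, 100.0 * survive, 1.0 / survive);
        }
        return 0;
    }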
|
| ▲ | strcat 3 days ago | parent | next [-] |
| Typically 14/15, since a tag is normally reserved for metadata, free data, etc. The Linux kernel reserves multiple tags for internal kernel usage, since the feature was introduced upstream as more of a hardware-accelerated debugging feature, even though it's very useful for hardening. |
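| For concreteness, a simplified sketch (my own illustration, assuming an MTE-style 4-bit tag stored in pointer bits [59:56] with tag 0 treated as the reserved value; not the kernel's actual code): |

    #include <stdint.h>

    #define TAG_SHIFT    56
    #define TAG_MASK     (0xfULL << TAG_SHIFT)
    #define TAG_RESERVED 0ULL  /* assumed reserved for untagged/metadata use */

    /* Place a 4-bit allocation tag in the pointer's top nibble. */
    static inline uint64_t set_tag(uint64_t ptr, uint64_t tag) {
        return (ptr & ~TAG_MASK) | ((tag & 0xf) << TAG_SHIFT);
    }

    static inline uint64_t get_tag(uint64_t ptr) {
        return (ptr >> TAG_SHIFT) & 0xf;
    }

    /* With one value reserved, allocations draw from 15 tags, so a stale
     * pointer matches a re-tagged allocation about 1 time in 15. */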
| ▲ | achierius 3 days ago | parent | next [-] |
| It's more complicated than that, so I just use 15/16 to gesture at the general idea. E.g. some strategies for ensuring adjacent tags don't collide include splitting the tag range in half and tagging from one half or the other based on the parity of an object's position within its slab allocation region. But even 1/7 is still solid. |
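| A sketch of that parity idea (hypothetical allocator code, not any real kernel's): |

    #include <stdint.h>

    /* Hypothetical: split the usable tags into even and odd pools and
     * pick by the object's index parity within its slab, so adjacent
     * objects can never draw the same tag. With tag 0 reserved, the
     * even pool holds only 7 candidates, which is roughly where a 1/7
     * collision figure comes from. */
    static const uint8_t even_pool[] = {2, 4, 6, 8, 10, 12, 14};
    static const uint8_t odd_pool[]  = {1, 3, 5, 7, 9, 11, 13, 15};

    uint8_t pick_tag(unsigned obj_index, uint64_t rnd) {
        if (obj_index & 1)
            return odd_pool[rnd % (sizeof odd_pool / sizeof odd_pool[0])];
        return even_pool[rnd % (sizeof even_pool / sizeof even_pool[0])];
    }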
| ▲ | loeg 3 days ago | parent | prev [-] |
| 93%, 94%, it's not a huge difference. |
|
|
| ▲ | pizlonator 3 days ago | parent | prev [-] |
| I get that. That's why I'm adding the caveat that this doesn't protect you against attackers who are in a position to try multiple times. |
| ▲ | zarzavat 3 days ago | parent [-] |
| Detection is 14/15ths of the battle. Forcing attackers to produce a brand-new exploit chain every few weeks massively increases attack cost, which could make exploitation uneconomical for anything but national-security targets. |
| ▲ | pizlonator 2 days ago | parent [-] |
| It will be really interesting to see how well that part of the story works out! What we're essentially saying is that evading detection is now 14/15 of the battle, from the attacker's perspective. Those people are very clever. |
|
|