| ▲ | pixl97 2 days ago |
| > thought static content should just be HTTP
| Yep, I've seen that argument so many times, and it should never make sense to anyone who understands MITM. The only way it could possibly work is if the static content were signed somehow, but then you need another protocol in the browser and a way to exchange keys securely, something like signed RPMs. It would be cheaper, since the signing happens once instead of encrypting every connection, but is it worth having yet another implementation? |
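A minimal sketch of what signed static content could look like, using Go's standard crypto/ed25519 package (the content and key handling here are made up, not any existing protocol): the publisher signs the bytes once, and any client holding the public key can verify them no matter which network path or cache delivered them.

    package main

    import (
        "crypto/ed25519"
        "fmt"
    )

    func main() {
        // Publisher side: generate a keypair once and sign the static content.
        // In a real deployment the public key would be distributed out of band.
        pub, priv, err := ed25519.GenerateKey(nil) // nil reader = crypto/rand
        if err != nil {
            panic(err)
        }
        content := []byte("<html><body>a static blog post</body></html>")
        sig := ed25519.Sign(priv, content)

        // Client side: verify the received bytes against the publisher's key,
        // regardless of which proxy or cache actually served them.
        if ed25519.Verify(pub, content, sig) {
            fmt.Println("content is authentic")
        } else {
            fmt.Println("content was altered in transit")
        }
    }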
|
| ▲ | drob518 2 days ago | parent | next [-] |
The argument doesn't even make sense for static content, even ignoring MITM attacks.
| |
| ▲ | pixl97 2 days ago | parent [-] | | There is no such thing as static content. There is only content. Bits are sent to your browser, which then builds the DOM from them. If you want to ensure that the bits your browser received are the bits the server sent, they must be signed in some manner. |
|
|
| ▲ | RiverCrochet 2 days ago | parent | prev | next [-] |
Well, there is the integrity attribute. https://www.w3schools.com/Tags/att_script_integrity.asp |
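For what it's worth, the integrity value is just a base64-encoded digest (sha256, sha384, or sha512) of the resource as served; a quick sketch of computing one in Go (the file name is hypothetical):

    package main

    import (
        "crypto/sha512"
        "encoding/base64"
        "fmt"
        "os"
    )

    func main() {
        // Read the script exactly as it will be served to browsers.
        data, err := os.ReadFile("app.js") // hypothetical resource
        if err != nil {
            panic(err)
        }
        // SRI is the base64 of the raw SHA-384 digest, prefixed with the hash name,
        // e.g. <script src="app.js" integrity="sha384-..." crossorigin="anonymous">
        sum := sha512.Sum384(data)
        fmt.Printf("sha384-%s\n", base64.StdEncoding.EncodeToString(sum[:]))
    }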
| |
| ▲ | pixl97 2 days ago | parent [-] | | Pretty useless in this case if I control the stream going to you: the main page defining the integrity hashes would itself have to be encrypted. Maybe you could have a mixed-use page in the browser, where you had your secure context plus a sub-context of unencrypted, integrity-protected objects; that could possibly improve caching. With that said, it looks like another fun hole browser makers would be chasing every year or so. |
|
|
| ▲ | bigstrat2003 2 days ago | parent | prev | next [-] |
| > Yep, I've seen that argument so many times and it should never make sense to anyone that understands MITM.
| Rather, it's that most people simply don't need to care about MITM. It's not a relevant attack for most content that can be reasonably served over HTTP. The goal isn't to eliminate every security threat possible, it's to eliminate the ones that are actually a problem for your use case. |
| |
| ▲ | kbolino 2 days ago | parent | next [-] | | MITM is a very real threat in any remotely public place. Coffee shop, airport, hotel, municipal WAN, library, etc. I honestly wouldn't put that much trust in a lot of residential/commercial broadband setups or hosting/colocation providers either. It does not matter what is intended to be served, because it can be replaced with anything else. Innocuous blog? Transparently replaced with a phishing site. Harmless image? Rewritten to appear the same but with a zero-day exploit injected. There's no such thing as "not worth the effort to secure" because neither the site itself nor its content matters, only the network path from the site to the user, which is not under the full control of either party. These need not be, and usually aren't, targeted attacks; they'll hit anything that can be intercepted and modified, without a care for what it's meant to be, where it's coming from, or who it's going to. Viewing it as an A-to-B interaction, where A is a good-natured blogger and B is a tech-savvy reader and that's all there is to it, is archaic and naive to the point of being dangerous. It is really an A-to-Z interaction where even if A is a good-natured blogger and Z is a tech-savvy user, parties B through Y all get to have a crack at changing the content. Plain HTTP is a protocol for a high-trust environment and the Internet has not been such a place for a very long time. It is unfortunate that party A (the site) must bear the brunt of the security burden, but that's the state of things today. There were other ways to solve this problem but they didn't get widespread adoption. |
| ▲ | pixl97 2 days ago | parent | prev [-] | | MITM is a risk to everyone. End of story. The browser's content model has no idea whether the data it's receiving is static or not. ISPs have already shown time and again that they'll inject content into HTTP streams for their own profit. BGP attacks have routed traffic off to random places. Simply put, the modern web should be zero trust, full stop. |
|
|
| ▲ | XorNot 2 days ago | parent | prev [-] |
| For caching purposes in content distribution, an unencrypted but signed protocol would have helped a lot. Every Linux packaging format having to bake one in via GPG is a huge pain. |
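A rough sketch of what the consumer side of such a protocol could look like, assuming a publisher key pinned in the client (like a distro keyring) and a detached signature published next to the artifact; the URL and the ".sig" convention are illustrative. The payload travels over plain HTTP, so any intermediate cache can store it, and authenticity comes from verifying the signature before the bytes are used.

    package main

    import (
        "crypto/ed25519"
        "errors"
        "fmt"
        "io"
        "net/http"
    )

    // fetch downloads a URL over plain HTTP; intermediaries are free to cache it.
    func fetch(url string) ([]byte, error) {
        resp, err := http.Get(url)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        return io.ReadAll(resp.Body)
    }

    // fetchVerified only hands back the payload if the detached signature
    // checks out against the pinned publisher key.
    func fetchVerified(pub ed25519.PublicKey, url string) ([]byte, error) {
        if len(pub) != ed25519.PublicKeySize {
            return nil, errors.New("no valid publisher key pinned")
        }
        payload, err := fetch(url)
        if err != nil {
            return nil, err
        }
        sig, err := fetch(url + ".sig") // hypothetical detached-signature convention
        if err != nil {
            return nil, err
        }
        if !ed25519.Verify(pub, payload, sig) {
            return nil, errors.New("signature mismatch: content altered in transit")
        }
        return payload, nil
    }

    func main() {
        var pinnedKey ed25519.PublicKey // in practice, shipped with the client
        pkg, err := fetchVerified(pinnedKey, "http://mirror.example/pool/foo_1.0.tar.gz")
        if err != nil {
            fmt.Println("rejecting package:", err)
            return
        }
        fmt.Printf("verified %d bytes\n", len(pkg))
    }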
| |