tptacek 4 days ago

There's minimal security in an unauthenticated encrypted connection, because an attacker can just MITM it.

SoftTalker 3 days ago | parent | next [-]

Trust On First Use is the normal thing for these situations.

asmor 3 days ago | parent [-]

TOFU equates to "might as well never ask" for most users. Just like Windows UAC prompts.

superkuh 3 days ago | parent [-]

You're right most of the time. But there are two webs. And it's only in the latter (far more common) case that things like that matter.

There is the web as it has always been on http/1.1: a hyperlinked set of HTML documents hosted on a mishmash of random commercial and personal servers. Then there is the modern http/2 and http/3, CA-TLS-only web, hosted as a service on some other website or cloud, mostly there to do serious business and make money. The modern web's CA-TLS-only ID scheme is required because of the complexity and risk of automatic JavaScript execution in browsers.

I wish we could have browsers that could support both use cases. But we can't, because there's too much money and private information bouncing around now. Can't be whimsical, can't 'vibe code' the web ID system (i.e., self-signed isn't feasible in HTTP/3). It's all gotta be super serious. For everyone. And that means bringing in a lot of complexity, overhead, and centralization (well hidden by ACME v2 clients; everyone uses the benevolent, US-based Let's Encrypt). This progressive lowering of cert lifetimes is making the older, simpler web even more fragile and hard to create lasting sites on. And that's sad.

TOFU works just great for the old web. It's completely incompatible with the modern web because major browsers will only ever compile their HTTP/* libraries with flags that prevent TOFU and self-signed certs. You could host an http/1.1 site with a self-signed, TOFU cert, but everyone (except geeks) would be scared away or incapable of loading it.

So, TOFU works if you just want to do something like the Gemini protocol, but instead of a new protocol you stick to original http and have a demographic of retro-enthusiasts and poor people. It's just about as accessible as Gemini for most people (i.e., not very), except for two differences: 1. Bots still love http/1.1 and don't care if it's plain text. 2. There's still a giant web of http/1.1 websites out there.

TheJoeMan 3 days ago | parent [-]

Not to mention the use of web browsers for configuring non-internet devices! For example, managing a router from its LAN-side built-in webserver: look at how many warnings you have to click through in Firefox nowadays. Hook an iPhone up to an IoT device and the iPhone hates that there's no "internet" and constantly tries to drop the WiFi.

steventhedev 4 days ago | parent | prev | next [-]

There is a security model where MITM is not viable - and separating that specific threat from that of passive eavesdropping is incredibly useful.

tptacek 4 days ago | parent [-]

MITM scenarios are more common on the 2025 Internet than passive attacks are.

steventhedev 4 days ago | parent | next [-]

MITM attacks are common, but noisy - BGP hijacks are literally public to the internet by their nature. I believe that insisting on coupling confidentiality to authenticity is counterproductive and prevents the development of more sophisticated security models and network design.

orev 4 days ago | parent [-]

You don’t need to BGP hijack to perform a MITM attack. An HTTPS proxy can be easily and transparently installed at the Internet gateway. Many ISPs were doing this with HTTP to inject their own ads, and only the move to HTTPS put an end to it.

steventhedev 4 days ago | parent [-]

Yes. MITM attacks do happen in reality. But by their nature they require active participation, which for practical purposes means leaving some sort of trail. More importantly, by decoupling confidentiality from authenticity, you can easily prevent eavesdropping attacks at scale.

Which for some threat models is sufficiently good.

tptacek 4 days ago | parent [-]

This thread is dignifying a debate that was decisively resolved over 15 years ago. MITM is a superset of the eavesdropper adversary and is the threat model TLS is designed to resist.

It's worth pointing out that MITM is also the dominant practical threat on the Internet: you're far more likely to face a MITM attacker, even from a state-sponsored adversary, than you are a fiber tap. Obviously, TLS deals with both adversaries. But altering the security affordances of TLS to get a configuration of the protocol that only deals with the fiber tap is pretty silly.

pyuser583 4 days ago | parent | next [-]

As someone who had to set up monitoring software for my kids, I can tell you MITM attacks are very real.

It’s how I know what my kids are up to.

It’s possible because I installed a trusted cert in their browsers, and added it to the listening program in their router.

Identity really is security.

steventhedev 4 days ago | parent | prev [-]

TLS chose the threat model that includes MITM - there's no good reason that should ever change. All I'm arguing is that having a middle ground between http and https would prevent eavesdropping, and that investment elsewhere could have been used to mitigate MITM attacks (to the benefit of all protocols, even those that don't offer confidentiality). Instead we got OpenSSL and the CA model with all its warts.

More importantly - this debate gets raised in every single HN post related to TLS or CAs. Answering with "my threat model is better than yours", or claiming my threat model is simply incorrect, is even more silly than offering a configuration of TLS without authenticity. Maybe if we had invested more effort in 802.1X and IPsec then we would get the same guarantees that TLS offers, but for all traffic, for free, everywhere, with no need for CA shenanigans or shortening lifetimes. Maybe in that alternative world we would be arguing about whether nonrepudiation is a valuable property.

simiones 3 days ago | parent [-]

It is literally impossible to securely talk to another party over an insecure channel unless you share a key beforehand or use a trusted third party. And since the physical medium is always inherently insecure, you will always need to trust a third party like a CA to have secure communications over the internet. This is not a limitation of some protocol; it's a fundamental law of nature/mathematics (though maybe we could imagine some secure physical transport based on entanglement effects in some future world?).

So no, IPSec couldn't have fixed the MITM issue without requiring a CA or some equivalent.

YetAnotherNick 3 days ago | parent [-]

The key could be shared in DNS records, or could even literally be embedded in the domain name, as Tor does with onion addresses. Each approach has its pros and cons.
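
As a rough sketch of the key-in-the-name idea (this is just the shape of it, not the exact Tor v3 onion encoding; the ".example" suffix and placeholder key are made up):

    import base64
    import hashlib

    def name_from_pubkey(pubkey: bytes) -> str:
        # Hash the raw public key and base32-encode a truncated digest;
        # the resulting name is self-authenticating.
        digest = hashlib.sha256(pubkey).digest()[:20]
        return base64.b32encode(digest).decode().lower() + ".example"

    def name_matches(name: str, presented_pubkey: bytes) -> bool:
        # A MITM would need a key that hashes to the same label.
        return name == name_from_pubkey(presented_pubkey)

    pubkey = bytes(32)  # stand-in 32-byte key, for illustration only
    name = name_from_pubkey(pubkey)
    print(name, name_matches(name, pubkey))

The trade-off is that names stop being human-memorable, which is exactly the con Tor accepts.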

tptacek 3 days ago | parent [-]

On this arm of the thread we're litigating whether authentication is needed at all, not all the different ways authentication can be provided. I'm sure there's another part of the thread somewhere else where people are litigating CAs vs Tor.

BobbyJo 4 days ago | parent | prev [-]

What does their commonality have to do with the use cases where they aren't viable?

jchw 4 days ago | parent | prev | next [-]

I mean, we do TOFU for SSH server keys* and nobody really seems to bat an eye at that. Today, if you want "insecure but encrypted" on the web, the main way to go is self-signed certificates, which are both more annoying and less secure than TOFU for the same kind of use case. Admittedly, this is a little less concerning of an issue thanks to ACME providers. (But it's still annoying, especially for local development and intranets.)

*I mistakenly wrote "certificate" here initially. Sorry.

tptacek 4 days ago | parent | next [-]

SSH TOFU is also deeply problematic, which is why cattle fleet operators tend to use certificates and not piecewise SSH keys.

jchw 4 days ago | parent [-]

I've made some critical mistakes in my argument here. I am definitely not referring to using SSH TOFU in a fleet. I'm talking about using SSH TOFU with long-lived machines, like your own personal computers, or individual long-running servers.

Undoubtedly it is not best practice to lean on TOFU, for good reason, but there are simply some lower-stakes situations where engaging the CA system is overkill. These are systems with few nodes (maybe just one) that have few users (maybe just one). I have some services that I deploy that really only warrant a single node, as HA is not a concern and they can easily run off a single box (modern cheap VPSes really don't sweat handling ~10-100 RPS of traffic). For those, I pre-generate SSH server keys before deployment. I can easily verify the fingerprint on the excessively rare occasion it isn't already trusted. I am not a security expert, but I think this is sufficient at small scales.
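
Concretely, "verify the fingerprint" just means recomputing what ssh prints on first connect; a quick sketch (the key filename is a hypothetical local copy of the host's public key):

    import base64
    import hashlib

    def ssh_fingerprint(pubkey_line: str) -> str:
        # Public key lines look like "<type> <base64 blob> [comment]"; the
        # fingerprint is the SHA-256 of the decoded blob, base64-encoded
        # without padding, matching what ssh-keygen -lf prints.
        blob = base64.b64decode(pubkey_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    with open("ssh_host_ed25519_key.pub") as f:  # hypothetical path
        print(ssh_fingerprint(f.read()))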

To be clear, there are a lot of obvious security problems with this:

- It relies on me actually checking the fingerprint.

- SSH keys are valid and trusted indefinitely, so they have to be rotated manually.

- The bootstrap process inevitably involves the key being transmitted over the wire, which isn't as good as never having the key go over the wire, like you could do with CSRs.

This is clearly not good enough for a service that needs high assurance against attackers, but I honestly think it's largely fine for a small to medium web server that serves some small community. Spinning up a CA setup for that feels like overkill.

As for what I personally would do instead for a fleet of servers, personally I think I wouldn't use SSH at all. In professional environments it's been a long time since I've administered something that wasn't "cloud" and in most of those cloud environments SSH was simply not enabled or used, or if it was we were using an external authorization system that handled ephemeral keys itself.

That said, here I'm just suggesting that I think there is a gap between insecure HTTP and secure HTTPS that is currently filled by self-signed certificates. I'm not suggesting we should replace HTTPS usage today with TOFU, but I am suggesting I see the value in a middle road between HTTP and HTTPS where you get encryption without a strong proof of what you're connecting to. In practice this is sometimes the best you can really get anyway: consider the somewhat common use case of a home router configuration page. I personally see the value in still encrypting this connection even if there is no way to actually ensure it is secure. Same for some other small scale local networking and intranet use cases.
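
For what it's worth, the middle road I have in mind is roughly "encrypted, pin on first use". A sketch of a client doing that (the hostname and pin file are made up, and it deliberately skips all CA validation):

    import hashlib, json, pathlib, socket, ssl

    PINS = pathlib.Path("pins.json")  # hypothetical local pin store

    def tofu_connect(host: str, port: int = 443) -> ssl.SSLSocket:
        ctx = ssl.create_default_context()
        ctx.check_hostname = False       # nothing to authenticate against
        ctx.verify_mode = ssl.CERT_NONE  # skip CA validation entirely
        sock = ctx.wrap_socket(socket.create_connection((host, port)),
                               server_hostname=host)
        fp = hashlib.sha256(sock.getpeercert(binary_form=True)).hexdigest()
        pins = json.loads(PINS.read_text()) if PINS.exists() else {}
        if host not in pins:
            pins[host] = fp              # trust on first use
            PINS.write_text(json.dumps(pins))
        elif pins[host] != fp:
            sock.close()
            raise ssl.SSLError("pinned certificate for %s changed" % host)
        return sock

    conn = tofu_connect("router.lan")    # made-up LAN hostname

You always get protection against passive eavesdropping, and MITM protection for every connection after the first.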

tptacek 4 days ago | parent [-]

I don't understand any of this. If you want TOFU for TLS, just use self-signed certificates. That makes sense for your own internal stuff. For good reason, the browser vendors aren't going to let you do it for public resources, but that doesn't matter for your use case.

jchw 4 days ago | parent [-]

Self-signed certificates have a terrible UX and worse security; browsers won't remember the trusted certificate so you'd have to verify it each time if you wanted to verify it.

In practice, this means that it's way easier to just use unencrypted HTTP, which is strictly worse in every way. I think that is suboptimal.

tptacek 4 days ago | parent [-]

Just add the self-signed certificate. It's literally a TOFU system.

jchw 4 days ago | parent | next [-]

But again, you then get (much) worse UX than plaintext HTTP; it won't even remember the certificate. The thing that makes TOFU work is that you at least only have to verify the certificate once. If you use a self-signed certificate, you have to allow it every session.

A self-signed certificate has the benefit of being treated as a secure origin, but that's it. Sometimes you don't even care about that and just want the encryption. That's pretty much where this argument all comes from.

tptacek 4 days ago | parent [-]

Yes, it will.

jchw 3 days ago | parent [-]

I checked and you seem to be correct, at least for Firefox and Chromium. I tried using:

https://self-signed.badssl.com/

and when I clicked "Accept the risk and continue", the certificate was added to Certificate Manager. I closed the browser, re-opened it, and it did not prompt again.

I did the same thing in Chromium and it also worked, though I'm not sure whether Chromium's exceptions are permanent or have a lifespan of some kind.

I am absolutely 100% certain that it did not always work that way. I remember a time when Firefox had an option to permanently add an exception, but it was not the default.

Either way, apologies for the misunderstanding. I genuinely did not realize that it worked this way, and it runs contrary to my previous experience dealing with self-signed certificates.

To be honest, this mostly resolves the issues I've had with self-signed certificates for use cases where getting a valid certificate might be a pain. (I have instead been using ACME with a DNS challenge for some cases, but I don't like broadcasting all of my internal domains to the CT logs, nor do I really want to manage a CA. In some cases it might be nice to not have a valid internet domain at all. So this might just be a better alternative in some cases...)
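
For the record, the self-signed route is a one-off script. A minimal sketch using the pyca/cryptography package (the "nas.internal" name and output paths are made up):

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec

    key = ec.generate_private_key(ec.SECP256R1())
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "nas.internal")])
    now = datetime.datetime.utcnow()
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)                       # self-signed: issuer == subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=3650))
        .add_extension(x509.SubjectAlternativeName([x509.DNSName("nas.internal")]),
                       critical=False)
        .sign(key, hashes.SHA256())
    )
    with open("nas.crt", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))
    with open("nas.key", "wb") as f:
        f.write(key.private_bytes(serialization.Encoding.PEM,
                                  serialization.PrivateFormat.PKCS8,
                                  serialization.NoEncryption()))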

tptacek 3 days ago | parent [-]

Every pentester that has ever used Burp (or, for the newcomers, mitmproxy) has solved this problem for themselves. My feeling is that this is not a new thing.

PhilipRoman 3 days ago | parent | prev [-]

Not a TLS expert, but last time I checked, support for limiting which domains a certificate is allowed to sign for (name constraints) was questionable. I wouldn't want my router to be able to MITM any https connection just so I can connect to its web interface securely.

arccy 4 days ago | parent | prev | next [-]

SSH server certificates should not be TOFU; the point of SSH certs is that you can trust the signing key.

TOFU on SSH server keys... it's still bad, but fewer people are interested in intercepting SSH than TLS.

tptacek 4 days ago | parent | next [-]

Intercepting and exploiting first-contact SSH sessions is a security conference sport. People definitely do it.

jchw 4 days ago | parent | prev [-]

I just typed the wrong thing, full stop. I meant to say server keys; fixed now.

Also, I agree that TOFU on its own is certainly worse than robust verification via the CA system. OTOH, SSH-style TOFU has some advantages over the CA system too, at least absent additional measures like HSTS and certificate pinning. If you are administering machines that you yourself set up, there is little reason to bother with anything more than TOFU, because you'll cache the key shortly after the machine is set up and then get warned if a MITM is attempted. That, IMO, is exactly the sort of argument in favor of having an "insecure but encrypted" option for the web: small-scale cases where you can just verify the key manually if you need to.

pabs3 4 days ago | parent | prev | next [-]

You don't have to TOFU SSH server keys, there is a DNSSEC option, or you can transfer the keys via a secure path, or you can sign the keys with a CA.

gruez 4 days ago | parent | prev | next [-]

>I mean, we do TOFU for SSH server certificates and nobody really seems to bat an eye at that.

Mostly because SSH isn't something most people (e.g. your aunt) use, and unlike with https certificates, you're not connecting to a bunch of random servers on a regular basis.

jchw 4 days ago | parent [-]

I'm not arguing for replacing existing uses of HTTPS here, just cases where you would today use self-signed certificates or plaintext.

hedora 4 days ago | parent | prev [-]

TOFU is not less secure than using a certificate authority.

Both defend against attackers the other cannot. In particular, the number of machines, companies and government agencies you have to trust in order to use a CA is much higher.

tptacek 4 days ago | parent [-]

TOFU is less secure than using a trust anchor.

hedora 4 days ago | parent [-]

That’s only true if you operate the trust anchor (possible) and it’s not an attack vector (impossible).

For example, TOFU where “first use” is a loopback ethernet cable between the two machines is stronger than a trust anchor.

Alternatively, you could manually verify + pin certs after first use.

tptacek 4 days ago | parent [-]

There are a couple of these concepts --- TOFU (key continuity) is one, PAKEs are another, pinning a third --- that sort of float around and captivate people because they seem easy to reason about, but are (with the exception of Magic Wormhole) not all that useful in the real world. It'd be interesting to flesh out the complete list of them.

The thing to think about in comparing SSH to TLS is how frequent counterparty introductions are. New counterparties in SSH are relatively rare. Key continuity still needlessly exposes you to a grave attack in SSH, but really all cryptographic protocol attacks are rare compared to the simpler, more effective stuff like phishing, so it doesn't matter. New counterparties in TLS happen all the time; continuity doesn't make any sense there.

hedora 3 days ago | parent [-]

There are ~200 entries in my password manager. Maybe 25 are important. Pinning their certs would meaningfully reduce the transport-layer attack surface for those accounts.
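
As a sketch of what that could look like outside the browser - HPKP-style SPKI pins recorded out of band (the host and pin value here are placeholders):

    import base64, hashlib, socket, ssl
    from cryptography import x509
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    # Placeholder pin; in reality you'd record this out of band.
    PINS = {"mail.example.com": "REPLACE_WITH_BASE64_SHA256_OF_SPKI"}

    def spki_pin(der_cert: bytes) -> str:
        # Hash the SubjectPublicKeyInfo rather than the whole cert, so routine
        # reissuance with the same key doesn't break the pin.
        spki = x509.load_der_x509_certificate(der_cert).public_key().public_bytes(
            Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
        return base64.b64encode(hashlib.sha256(spki).digest()).decode()

    def check(host: str) -> None:
        ctx = ssl.create_default_context()   # normal CA validation still applies
        with ctx.wrap_socket(socket.create_connection((host, 443)),
                             server_hostname=host) as s:
            if spki_pin(s.getpeercert(binary_form=True)) != PINS[host]:
                raise ssl.SSLError("SPKI pin mismatch for " + host)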

tptacek 3 days ago | parent [-]

Yes, these ideas bubble around because they all seem reasonable on their face. I was a major fan of pinning!

panki27 4 days ago | parent | prev | next [-]

How is an attacker going to MITM an encrypted connection they don't have the keys for, without having rogue DNS or something similar, i.e. faking the actual target?

Ajedi32 4 days ago | parent | next [-]

It's an unauthenticated encrypted connection, so there's no way for you to know whose keys you're using. The attacker can just tell you "Hi, I'm the server you're looking for. Here's my key." and your client will establish a nice secure, encrypted connection to the malicious attacker's computer. ;)

notTooFarGone 3 days ago | parent [-]

There are enough examples where this is just a bogus scenario. There are a lot of IoT cases that fall apart anyway if the attacker is able to do a MITM attack.

For example, if the MITM requires physical access to the machine, you'd also have to cover physical security first; as long as that isn't covered, who cares about some connection hijack? And if the data you're actually communicating isn't worth encrypting, but has to be encrypted because of regulation, you're just doing the dance without it being worth it.

oconnor663 4 days ago | parent | prev | next [-]

They MITM the key exchange step at the beginning, and now they do have the keys. The thing that prevents this in TLS is the chain of signatures asserting identity.

2mlWQbCK 3 days ago | parent | next [-]

You can have TLS with TOFU, like in the Gemini protocol. At least then, in theory, the MITM has to happen the first time you connect to a site. There is also the possibility of out-of-band confirmation of a certificate's fingerprint if you want to be really sure that some Gemini server is the one you hope it is.

panki27 4 days ago | parent | prev [-]

You can not MITM a key that is being exchanged through Diffie-Hellman, or have I missed something big?

Ajedi32 4 days ago | parent | next [-]

Yes, Mallory just pretends to be Alice to Bob and pretends to be Bob to Alice, and they both establish an encrypted connection to Mallory using Diffie-Hellman keys derived from his secrets instead of each other's. Mallory has keys for both of their separate connections at this point and can do whatever he wants. That's why TLS only uses Diffie-Hellman for perfect forward secrecy after Alice has already authenticated Bob. Even if the authentication key gets cracked later Mallory can't reach back into the past and MITM the connection retroactively, so the DH-derived session key remains protected.
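
The arithmetic is easy to demo with toy numbers (illustrative only; real DH uses 2048-bit-plus groups and, in TLS, the key shares are signed):

    import secrets

    p, g = 2**61 - 1, 5          # toy parameters; far too small for real use

    def keypair():
        x = secrets.randbelow(p - 2) + 2
        return x, pow(g, x, p)

    a_priv, a_pub = keypair()    # Alice
    b_priv, b_pub = keypair()    # Bob
    m_priv, m_pub = keypair()    # Mallory, sitting on the wire

    # Mallory substitutes his own public value in both directions.
    alice_key   = pow(m_pub, a_priv, p)   # Alice thinks she shares this with Bob
    bob_key     = pow(m_pub, b_priv, p)   # Bob thinks he shares this with Alice
    mal_w_alice = pow(a_pub, m_priv, p)
    mal_w_bob   = pow(b_pub, m_priv, p)

    assert alice_key == mal_w_alice and bob_key == mal_w_bob
    # Mallory decrypts with one key, re-encrypts with the other, and relays.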

oconnor663 3 days ago | parent | prev [-]

If we know each other's DH public key in advance, then you're totally right, DH is secure over an untrusted network. But if we don't know each other's public keys, we have to get them over that same network, and DH can't protect us if the network lies about our public keys. Solving this requires some notion of "identity", i.e. some way to verify that when I say "my public key is abc123" it's actually me who's saying that. That's why it's hard to have privacy without identity.

simiones 4 days ago | parent | prev [-]

Connections never start as encrypted, they always start as plain text. There are multiple ways of impersonating an IP even if you don't control DNS, especially if you are in the same local network.

Gigachad 3 days ago | parent | next [-]

Doubly so if it's the ISP or a government involved. They can just automatically MITM and re-encrypt every connection if there are no identity checks.

gruez 4 days ago | parent | prev | next [-]

>Connections never start as encrypted, they always start as plain text

Not "never", because of HSTS preload, and browsers slowly adding scary warnings to plaintext connections.

https://preview.redd.it/1l4h9e72vp981.jpg?width=640&crop=sma...

simiones 4 days ago | parent | next [-]

TCP SYN is not encrypted, and neither is Client Hello. Even with TCP cookies and TLS session resumption, the initial packet is still unencrypted, and can be intercepted.

haiku2077 4 days ago | parent [-]

Client Hello can be encrypted: https://support.mozilla.org/en-US/kb/understand-encrypted-cl...

simiones 3 days ago | parent | next [-]

Oh, right, thanks for the correction!

However, ECH relies on a trusted third party to provide the key of the server you intend to talk to. So it won't work if you have no way of authenticating the server beforehand, in the way the GP was thinking about.

EE84M3i 4 days ago | parent | prev [-]

Yes but this still depends on identity. It's not unauthenticated.

ekr____ 3 days ago | parent [-]

The situation is actually somewhat more complicated than this.

ECH gets the key from the DNS, and there's no real authentication for this data (DNSSEC is rare and is not checked by the browser). See S 10.2 [0] for why this is reasonable.

[0] https://tlswg.org/draft-ietf-tls-esni/draft-ietf-tls-esni.ht...

Ajedi32 4 days ago | parent | prev [-]

GP means unencrypted at the wire level. ClientHelloOuter is still unencrypted even with HSTS.

jiveturkey 4 days ago | parent | prev [-]

Chrome has been doing HTTPS-first since April 2021 (v90).

Safari took some half measures starting in Safari 15 (I don't know the year) and now fully defaults to HTTPS-first.

Firefox 136 (2025) now does HTTPS-first as well.

simiones 3 days ago | parent [-]

That is irrelevant. All TCP connections start with a TCP SYN, which can be trivially intercepted and MITMed by anyone. So, if you don't have an out-of-band reason to trust the server certificate (such as trust in the CAs that the web PKI defines, or knowing the fingerprint of the server certificate), you can never be sure your TLS session is secure, regardless of the level of encryption you're using.

gruturo 3 days ago | parent | next [-]

After the TCP handshake, the very first payload will be the TLS negotiation - and even if you don't use Encrypted Client Hello / encrypted SNI, you still can't spoof it, because the certificate chain of trust will not be intact - unless you somehow control a CA trusted by the browser.

With an intact trust chain, there is NO scenario where a third party can see or modify what the client requests and receives, beyond seeing the hostname being requested (and not even that if using ECH/ESNI).

Your "if you don't have an out-of-band reason to trust the server cert" is a fitting description of the global PKI infrastructure - can you explain why you see that as a problem? Apart from the fact that our OSes and browsers ship out of the box with a scarily long list of trusted CAs, some from fairly dodgy places?

Let's not forget that BEFORE that TCP handshake there's probably a DNS lookup where the FQDN of the request is leaked, if you don't have DoH.

jiveturkey 3 days ago | parent | prev [-]

Well yes! That is the entire point/methodology of TLS. Because you have a trust anchor, you can be sure that at the app layer the connection is "secure".

Of course the L3/L4 connection can be (non-)trivially intercepted by anyone, but that is exactly what TLS protects you against.

If simple L4 interception were all that is required, enterprises wouldn't have to install a trust root on end devices in order to MITM all TLS connections.

The comment you were replying to is:

> How is an attacker going to MITM an encrypted connection they don't have the keys for

Of course they can intercept the connection, but they can't MITM it in the sense that MITM means -- read the communications. The kind of "MITM"/interception that you are talking about is simply what routers do anyway!

IshKebab 3 days ago | parent | prev [-]

I disagree. Think about every time you use a service (website, email, etc.) you've used before via a network you don't trust (e.g. free WiFi).

On the other hand, providing the option may give a false sense of security. I think the main reason SSH isn't MITMed all over the place is that it's a pretty niche service, and very often you do have a separate authentication path because you sent your public key over HTTPS.

saurik 3 days ago | parent | next [-]

When I use a service over TLS on a network I don't trust, the premise is that I only will trust the connection if it has a certificate from a handful of companies trusted by the people who wrote the software I'm using (my browser/client and/or my operating system) to only issue said certificates to people who are supposed to have them (which these days is increasingly defined to be "who are in control of the DNS for the domain name at a global level", for better or worse, not that everyone wants to admit that).

But like, no: the free Wi-Fi I'm using can't, in fact, MITM the encryption used by my connection... it CAN do a bunch of other shitty things to me that undermine not only my privacy but even undermine many of the things people expect to be covered by privacy (using traffic analysis on the size, timing, or destination of the packets that I'm sending), but the encryption itself isn't subject to the failure mode of SSH.

ongy 2 days ago | parent [-]

The encryption itself may not be.

Establishing the initial exchange of crypto key material can be.

That's where certificates are important because they add identity and prevent spoofing.

With TOFU, if the first use is on an insecure network, this exchange is jeopardized. And in this case, the encryption is not with the intended partner and thus does not need to be attacked.

woodruffw 3 days ago | parent | prev | next [-]

> I disagree. Think about every time you use a service (website, email, etc.) you've used before via a network you don't trust (e.g. free WiFi).

Hm? The reason I do use those services over a network I don't trust is because they're wrapped in authenticated, encrypted channels. The authenticated encryption happens at a layer above the network because I don't trust the network.

tikkabhuna 3 days ago | parent | prev [-]

But isn't that exactly the previous poster's point? On free WiFi someone can just MITM your connection; you would never know, and you'd think it's encrypted. That's the worst possible outcome. At least when there's no encryption, browsers can tell the user to be careful.

IshKebab 3 days ago | parent [-]

They could still tell the user to be careful without authentication.

He wasn't proposing that encryption without authentication gets the full padlock and green text treatment.