1a527dd5 2 days ago

I don't understand the tone of aggression against ACME and their plethora of clients.

I know it isn't a skill issue because of who the author is. So I can only imagine it is some sort of personal opinion that they dislike ACME as a concept or the tooling around ACME in general.

We've been using LE for a while (since 2019 I think) for handful of sites, and the best nonsense client _for us_ was https://github.com/do-know/Crypt-LE/releases.

Then this year we did another piece of work, this time against the Sectigo ACME server, and le64 wasn't quite good enough.

So we ended up trying:-

- https://github.com/certbot/certbot on GitHub Actions; it was fine but didn't quite like the locked-down environment

- https://github.com/go-acme/lego huge binary, the CLI was interestingly designed, and the maintainer was quite rude when we raised an issue

- https://github.com/rmbolger/Posh-ACME our favourite, but we ended up going with certbot on GHA once we fixed the weird issues around permissions

Edit: Re-read it. The tone isn't aimed at ACME or the clients; it's aimed at the spec itself. ACME idea good, ACME implementation bad.

lucideer 2 days ago | parent | next [-]

> I don't understand the tone of aggression against ACME and their plethora of clients.

> ACME idea good, ACME implementation bad.

Maybe I'm misreading but it sounds like you're on a similar page to the author.

As they said at the top of the article:

> Many of the existing clients are also scary code, and I was not about to run any of them on my machines. They haven't earned the right to run with privileges for my private keys and/or ability to frob the web server (as root!) with their careless ways.

This might seem harsh, but I think it's a pretty fair perspective to have when running security-sensitive processes.

thayne a day ago | parent | next [-]

No, the author seems opposed to the ACME specification itself, not just the implementations of the clients.

And a lot of the complaints ultimately boil down to not liking JWS. And I'm not really sure what she would have preferred there. ASN.1, which is even more complicated? Some bespoke format where implementations can't make use of existing libraries?

imtringued a day ago | parent [-]

This is exactly the impression I got here.

I would have had sympathy for the disdain for certbot, but certbot wasn't called out and that isn't what the blog post is about at all.

dwedge 2 days ago | parent | prev | next [-]

This is the same author that threw everyone into a panic about atop and turned out to not really have found anything.

ezekiel68 a day ago | parent [-]

Agreed and -- in particular -- I don't recall seeing any kind of "everybody get back into the pool" follow-up after the developers of atop quickly addressed the issue with an update. At least not any kind of follow-up that got the same kind of press as the initial alarm.

giancarlostoro 2 days ago | parent | prev | next [-]

I'm not a container guru by any means (at least not yet?) but would docker not suffice to address these concerns?

fpoling 2 days ago | parent | next [-]

The issue is that the client needs to access the private key, tell the web server where various temporary files are during certificate generation (unless the client uses DNS mode), and tell the web server to reload when there is a new certificate.

To implement that, many clients run as root. Even if that root is in a docker container, these are needlessly elevated privileges, especially given the (again, needless) complexity of many clients.

The sad part is that it is trivial to run most of the clients under an unprivileged account that can access very few files, using a unix socket to tell the web server to reload the certificate. But this is not done.

And then, ideally, at this point web servers should, if not implement, then at least facilitate ACME protocol implementations - for example, redirect requests from ACME servers to another port with a one-liner in the config. But this is not the case.
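
For illustration, the low-privilege setup I mean can be as small as this (sudo instead of a socket here, but the same idea; the user name and paths are just placeholders):

    # dedicated unprivileged user that owns only the ACME account and certs
    useradd -r -m -d /var/lib/acme -s /usr/sbin/nologin acme

    # /etc/sudoers.d/acme - the single root action the client may perform
    acme ALL=(root) NOPASSWD: /usr/bin/systemctl reload nginx

    # renew/deploy hook run by the client after it writes the new certificate
    sudo systemctl reload nginx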

ptx 2 days ago | parent | next [-]

Apache comes with built-in ACME support. Just enable the mod_md module: https://httpd.apache.org/docs/2.4/mod/mod_md.html
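
A minimal sketch of the config (domain and email are placeholders; exact directives vary a bit by Apache version):

    LoadModule watchdog_module modules/mod_watchdog.so
    LoadModule md_module       modules/mod_md.so
    LoadModule ssl_module      modules/mod_ssl.so

    MDomain example.com www.example.com
    MDContactEmail admin@example.com
    MDCertificateAgreement accepted

    <VirtualHost *:443>
        ServerName example.com
        SSLEngine on
        # no SSLCertificateFile/KeyFile lines: mod_md supplies the managed cert
    </VirtualHost>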

tialaramex 2 days ago | parent | prev | next [-]

But the requirements you listed aren't actually requirements of ACME; they're lazy choices you could make, but they aren't necessary. Some clients do better.

For example, the client needs a Certificate Signing Request. One way to achieve that is to have the client choose the private keys, or give it access to a chosen key, but the whole point of a CSR is that you don't need the private key: the CSR can be made on another system, even manually by a human, and it can be re-used repeatedly so that you don't need a new one until you decide to replace your keys.
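
A sketch of that with openssl (names are placeholders): the private key never leaves the box that generated it, and the same CSR can be handed to whatever speaks ACME for every renewal:

    # on the machine that keeps the private key
    openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out www.key
    openssl req -new -key www.key -subj "/CN=www.example.com" \
        -addext "subjectAltName=DNS:www.example.com" -out www.csr

    # on whatever runs the ACME client - it only ever sees the CSR
    # (certbot shown as one example; many clients accept an external CSR)
    certbot certonly --csr www.csr --standalone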

Yes, if we look back at my hopes when Let's Encrypt launched, we can be very disappointed that, although this effort was a huge success, almost all the server vendors continued to ship garbage designed for a long-past era where HTTPS is a niche technology they barely support.

toast0 a day ago | parent [-]

I don't know that it's accurate, but at the beginning, it felt like using certbot was the only supported way to use ACME/LE, and it really wanted to do stuff as root and restart your webserver whenever.

Or you could run Caddy which had a built in ACME client, but then you're running an extra daemon.

Apache's mod_md eventually came along, which works for me, but it's also got some lazy things (it mostly just manages requesting certs; you've got to have a frequent enough reload to pick them up. I guess that's OK because I don't think public Apache ever learned to periodically check whether it needs to reopen access logs when they're rotated, so you probably reload Apache from time to time anyway).
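
Concretely, the "frequent enough reload" can be as dumb as a scheduled graceful restart; the schedule here is just an example:

    # crontab entry: graceful reload so renewed certs (and rotated logs) get picked up
    17 4 * * 1  apachectl graceful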

Before that was workable, I did need some certs and used acme.sh by hand, and it was nicer than trusting a big thing running in a cron and restarting things, but it was also inconvenient because I had to remember to go do it.

tialaramex a day ago | parent [-]

> I don't know that it's accurate, but at the beginning, it felt like using certbot was the only supported way to use ACME/LE, and it really wanted to do stuff as root and restart your webserver whenever.

It's fair to say that on day one the only launch client was Certbot, although on that day it wasn't called "Certbot" yet, so if that's the name you remember, it wasn't the only one. Given that it wasn't guaranteed this would be a success (like the American Revolution or the Harry Potter books, it seems obvious in hindsight, but that's too late), it's understandable that they didn't spend lots of money developing the variety of clients and libraries you might want.

GoblinSlayer 2 days ago | parent | prev [-]

It's cheap. If the client was done today, it would be based on AI.

rsync 2 days ago | parent | prev | next [-]

Yes, it does.

I run acme in a non privileged jail whose file system I can access from outside the jail.

So acme sees and accesses nothing and I can pluck results out with Unix primitives from the outside.

Yes, I use dns mode. Yes, my dns server is also a (different) jail.

TheNewsIsHere 2 days ago | parent | prev | next [-]

My reading of the article suggested to me that the author took exception to the code that touched the keying material. Docker is immaterial to that problem. I won't presume to speak for Rachel By The Bay (mother didn't raise a fool, after all), but I expect Docker would be met with a similar regard.

Which I do understand. Although I use Docker, I mainly use it personally for things I don’t want to spend much time on. I don’t really like it over other alternatives, but it makes standing up a lab service stupidly easy.

lucideer 2 days ago | parent | prev | next [-]

I use docker for the same reasons as the author's reservations - I combine a docker exec with some of my own loose automation around moving & chmod-ing files & directories to obviate the need for the acme client to have unfettered root access to my system.

Whether it's a local binary or a dockerised one, that access still needs to be marshalled either way & it can get complex facilitating that with a docker container. I haven't found it too bad but I'd really rather not need docker for on-demand automations.

I give plenty* of services root access to my system, most of which I haven't written myself & I certainly haven't audited their code line-by-line, but I agree with the author that you do get a sense from experience of the overall hygiene of a project & an ACME client has yet to give me good vibes.

* within reason

paul_h a day ago | parent [-]

Copilot suggests:

    docker run --rm \
      -v /srv/mywebsite/certs:/acme.sh/certs \
      -v /srv/mywebsite/public/.well-known/acme-challenge:/acme-challenge \
      neilpang/acme.sh --issue \
      --webroot /acme-challenge \
      -d yourdomain.com \
      --cert-file /acme.sh/certs/cert.pem \
      --key-file /acme.sh/certs/key.pem \
      --fullchain-file /acme.sh/certs/fullchain.pem

I don't know why it's suggesting `neilpang` though, as he no longer has a fork.
lucideer a day ago | parent [-]

Yeah, I'm not running anything LLMs spit at me in a security-sensitive context.

That example is not so bad - you've already pointed out the main obvious supply-chain attack vector in referencing a random ephemeral fork, but otherwise it's certonly (presumably neil's default) so it's the simplest case. Many clients have more... intrusive defaults that prioritise first-run cert onboarding, which opens up more surface area for error.

dangus 2 days ago | parent | prev [-]

I disagree, the author is overcomplicating and overthinking things.

She doesn't "trust" tooling that basically the entire Internet, including major security-conscious organizations, is using, essentially letting perfect get in the way of good.

I think if she were a less capable engineer she would just set that shit up using the easiest way possible and forget about it like everyone else, and nothing bad would happen. Download nginx proxy manager, click click click, boom, I have a wildcard cert, who cares?

I mean, this is her https site, which seems to just be a blog? What type of risk is she mitigating here?

Essentially the author is so skilled that she's letting perfect get in the way of good.

I haven't thought about certificates for years because it's not worth my time. I don't really care about the tooling, it's not my problem, and it's never caused a security issue. Put your shit behind a load balancer and you don't even need to run any ACME software on your own server.

nothrabannosir a day ago | parent [-]

Sometimes I wonder how y’all became programmers. I learned basically everything by SRE-larping on my shitty nobody-cares-home-server for years and suddenly got paid to do it for real.

Who do you think they hire to manage those LBs for you? People who never ran any ACME software, or people who have a blog post turning over every byte of JSON in the protocol in excruciating detail?

dangus 11 hours ago | parent [-]

Our backgrounds sound similar. I just don’t sweat all those details when I set things up.

I’m not advocating for the use of cloud services necessarily, not saying we all need to allow someone else to abstract away everything. And I realize that someone on an ops team has to actually set that up at a low level at some point.

What I am saying is that there’s a lot of open source software that has already invented the wheel for you. You can run it easily and be reasonably assured that it’s safe enough to be exposed to the internet.

I gave the example of nginx proxy manager. It may be basic software, but for a personal blog it'll get the job done, and you can set it up almost entirely in a GUI following a simple YouTube tutorial. It'll get you a wildcard certificate automatically, and it'll be secure enough.

diggan 2 days ago | parent | prev | next [-]

> I don't understand the tone of aggression against ACME and their plethora of clients.

The older posts on the same website provided a bit more context for me to understand today's post better:

- "Why I still have an old-school cert on my https site" - January 3, 2023 - https://rachelbythebay.com/w/2023/01/03/ssl/

- "Another look at the steps for issuing a cert" - January 4, 2023 - https://rachelbythebay.com/w/2023/01/04/cert/

immibis 2 days ago | parent | prev [-]

Some people don't want to be forced to run a bunch of stuff they don't understand on the server, and I agree with them.

Sadly, security is a cat and mouse game, which means it's always evolving and you're forced to keep up - and it's inherent by the nature of the field, so we can't really blame anyone (unlike, say, being forced to integrate with the latest Google services to be allowed on the Play Store). At least you get to write your own ACME client if you want to. You don't have to use certbot, and there's no TPM-like behaviour locking you out of your own stuff.

g-b-r 2 days ago | parent | next [-]

> Some people don't want to be forced to run a bunch of stuff they don't understand on the server

It's not just about not understanding, it's that more complex stuff is inherently more prone to security vulnerabilities, however well you think you reviewed its code.

Avamander 2 days ago | parent [-]

> It's that more complex stuff is inherently more prone to security vulnerabilities

That's overly simplifying it and ignores the part where the simple stuff is not secure to begin with.

In the current context you could take a HTTP client with a formally verified TLS stack, would you really say it's inherently more vulnerable than a barebones HTTP client talking to a server over an unencrypted connection? I'd say there's a lot more exposed in that barebones client.

g-b-r 2 days ago | parent [-]

The comparison in the article was ACME vs other ways of getting TLS certificates, not HTTPS vs HTTP.

Of course plain HTTP would generally be much more dangerous than an encrypted connection, however complex.

tptacek 2 days ago | parent | prev | next [-]

Non-ACME certs are basically over. The writing has been on the wall for a long time. I understand people being squeamish about it; we fear change. But I think it's a hopeful thing: the Web PKI is evolving. This is what that looks like: you can't evolve and retain everyone's prior workflows, and that has been a pathology across basically all Internet security standards work for decades.

ipdashc 2 days ago | parent [-]

ACME is cool (compared to what came before it), but I'm kind of sad that EV certs never seemed to pan out at all. I feel like they're a neat concept, and had the potential to mitigate a lot of scams or phishing websites in an ideal world. (That said, discriminating between "big companies" and "everyone else who can't afford it" would definitely have some obvious downsides.) Does anyone know why they never took off?

johannes1234321 2 days ago | parent | next [-]

> Does anyone know why they never took off?

Browser vendors at some point claimed it confused users and removed the highlight (I think the same browser vendors who try to remove the "confusing" URL bar ...)

Aside from that, EV certificates are slow to issue, and phishers got similar-enough EV certs, making the whole thing moot.

amiga386 a day ago | parent | prev | next [-]

Because they actively thwarted security.

https://arstechnica.com/information-technology/2017/12/nope-...

https://web.archive.org/web/20191220215533/https://stripe.ia...

> this site uses an EV certificate for "Stripe, Inc", that was legitimately issued by Comodo. However, when you hear "Stripe, Inc", you are probably thinking of the payment processor incorporated in Delaware. Here, though, you are talking to the "Stripe, Inc" incorporated in Kentucky.

There's a lot of validation that's difficult to get around in DV (Domain Validation) and in DNS generally. Unless you go to every legal jurisdiction in the world, open businesses, and file for and are granted trademarks, you _cannot_ guarantee that no other person will have the same visible EV identity as you.

It's up to visitors to know that apple.com is where you buy Apple stuff, while apple.net, applecart.com, 4ppl3.com, аррlе.сом, example.com/https://apple.com are not. But if they can manage that, they can trust apple.com more than they could any URL with an "Apple, Inc." EV certificate. Browsers that show the URL bar tend to highlight the top-level domain prominently, and they reject DNS names with mixed scripts, to avoid scammers fooling you. That's working better than EV.

tialaramex 2 days ago | parent | prev | next [-]

EV can't actually work. It was always about branding for the for-profit CAs so that they have a premium product which helps the line go up. Let me give you a brief history - you did ask.

In about 2005, the browser vendors and the Certificate Authorities began meeting to see if they could reach some agreement as neither had what they wanted and both might benefit from changes. This is the creation of the CA/Browser Forum aka CA/B Forum which still exists today.

From the dawn of SSL the CAs had been on a race to the bottom on quality and price.

Initially maybe somebody from a huge global audit firm that owns a CA turns up on a plane and talks to your VP of New Technology about this exciting new "World Wide Web" product, maybe somebody signs a $1M deal over ten years, and they issue a certificate for "Huge Corporation, Inc" with all the HQ address details, etc. and oh yeah, "www.hugecorp.example" should be on there because of that whole web thing, whatever that's about. Nerd stuff!

By 2005 your web designer clicks a web page owned by some bozo in a country you've never heard of, types in the company credit card details, and the company gets charged $15 because it's "on sale": a 3-year cert for www.mycompany.example, with mail.mycompany.example thrown in for free, so that's nice. Is it secure? Maybe? I dunno, I think it checked my email address? Whatever. The "real world address" field in this certificate now says "Not verified / Not verified / None" which is weird, but maybe that's normal?

The CAs can see that if this keeps up in another decade they'll be charging $1 each for 10 year certificates, they need a better product and the browser vendors can make that happen.

On the other hand the browser vendors have noticed that whereas auditors arriving by aeroplane was a bit much, "Our software checked their email address matched in the From line" is kinda crap as an "assurance" of "identity".

So, the CA/B Baseline Requirements aka BRs are one result. Every CA agreed they'd do at least what the "baseline" required, and in practice that's basically all they do, because it'd cost extra to do more, so why bother. The BRs started out pretty modest - but it's amazing what you find people were doing when you begin writing down the basics of what they obviously shouldn't do.

For example, how about "No issuing certificates for names which don't exist" ? Sounds easy enough right? When "something.example" first comes into existence there shouldn't already be certificates for "something.example" because it didn't exist... right? Oops, lots of CAs had been issuing those certificates, reasoning that it's probably fine and hey, free money.

Gradually the BRs got stricter, improving the quality of this baseline product in terms of both the technology and the business processes. This has been an enormous boon: because it's an agreement for the whole industry, it ratchets things up for everybody, so there's no race to the bottom on quality, since your competitors aren't allowed to do worse than the baseline. On price, the same can't be said; zero-cost certificates are what Let's Encrypt is most famous for, after all.

The other side of the deal is what the CAs wanted: UI for their new premium product. That's EV. Unlike many of the baseline requirements, this is very product-focused (although, to avoid being an illegal cartel, the CA/B Forum is forbidden from discussing products, pricing, etc.) and so it doesn't make much technical sense.

The EV documents basically say you get all of the Baseline, plus we're going to check the name of your business - here's how we'll check - and then the web browser is going to display that name. It's implied that these extra checks cost more money (they do, and so this product is much more expensive). So improvements to that baseline do help still, but they also help everybody who didn't buy the premium EV product.

Now, why doesn't this work in practice? The DNS name or IP address in an "ordinary" certificate can be compared to reality automatically by the web browser. This site says it is news.ycombinator.com, it has a certificate for news.ycombinator.com, that's the same OK. Your browser performs this check, automatically and seamlessly, for every single HTTP transaction. Here on HN that's per page load, but on many sites you're doing transactions as you click UI or scroll the page, each is checked.

With EV the checks must be done by a human: is this site really "Bob's Burgers"? Actually wait, is it really "Bob's Burgers of Ohio, US"? Worse, although you probably know them as Bob's Burgers, legally - as you'd see on their papers of incorporation - they are "Smith Restaurant Holdings Inc." and they're registered in Delaware, because of course they are.

So now you're staring at the formal company name of a business and trying to guess whether that's legitimate or a scam. But remember you can't just do this check once: scammers might find a way to redirect some traffic, so you need to check every individual transaction, like your web browser does. Of course it's a tireless machine and you are not.

So in practice this isn't effective.

ipdashc 20 hours ago | parent | next [-]

This was a fun read. Thanks for the explanation!

immibis a day ago | parent | prev [-]

Still sounds better than nothing. And gives companies an incentive to register under their actual names.

tialaramex a day ago | parent [-]

I'm not convinced on either count. The mindless automation is always effective, so you just don't need to think about it, whereas for EV you need to intimately understand exactly which transactions you verified and what that means - the login HTML was authentic but you didn't check the JavaScript? The entire login page was checked but the HTTP POST of your password was not? The redirect to payment.mybank.example wasn't checked? Only the images were checked?

Imagine explaining to my mother how to properly check this, then imagine explaining why the check she just made is wrong now because the bank changed how their login procedure works.

We could have attempted something with better security, although nowhere close to foolproof, but the CAs were focused on a profitable product, not on improving security, and I do not expect anyone to have another bite of that cherry.

As to the incentive to register, this is a cart v horse problem. Most businesses do not begin with a single unwavering vision of their eventual product and branding; they iterate, and that means the famous branding would need an expensive corporate change just to make the EV name line up properly. That's just not going to happen much of the time, so people get used to seeing the "wrong" name, and once that happens this is worthless.

Meanwhile crooks can spend a few bucks to register a similar-sounding name and the registration authorities don't care. While the machine sees at a glance the differences between bobs-burgers.example, robs-burgers.example and bobsburgers.example, the analogous business registrations look similar enough that humans would click right past them.

bandrami a day ago | parent | prev [-]

Phishers also got EV certs.

The big problem with PKI is that there are known bad (or at least sketchy) actors on the big CA lists that realistically can't be taken off that list.

solatic a day ago | parent | next [-]

How big of a problem is it really, with CAA records and FIDO2 or passkeys?

CAA makes sure only the CA you authorize signs certs for the real domain. FIDO2 prevents phishing on a similar-looking domain. EV would force a phisher to get a similar-looking corporate name, but that's beside the point next to the main FIDO2 protection.
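
The CAA side is just a few DNS records, roughly like this (domain, CA and mailbox are placeholders):

    ; only the listed CA may issue for this domain; no wildcards; report violations
    example.com.  IN  CAA  0 issue "letsencrypt.org"
    example.com.  IN  CAA  0 issuewild ";"
    example.com.  IN  CAA  0 iodef "mailto:security@example.com"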

akerl_ a day ago | parent | prev [-]

What's an example?

We're in an era where browsers have forced certificate transparency and removed major vendor CAs when they've issued certificates in violation of the browsers' requirements.

The concern about bad/sketchy CAs in the list feels dated.

bandrami a day ago | parent [-]

Look at the list of state actors in your certificate bundle, for a start.

spockz 2 days ago | parent | prev | next [-]

Given that keys probably need to be shared between multiple gateways/ingresses, how common is it to just use an HSM or some other mechanism to exchange the keys with all the instances? The ACME client doesn't have to run on the servers themselves.

tialaramex 2 days ago | parent | next [-]

> The ACME client doesn't have to run on the servers themselves.

This is really important to understand if you care about either actually engineering security at some scale, or knowing what's actually going on in order to model it properly in your head.

If you just want to make a web site so you can put up a blog about your new kitten, any of the tools is fine, you don't care, click click click, done.

For somebody like Rachel or many HN readers, knowing enough of the technology to understand that the ACME client needn't run on your web servers is crucial. It also means you know that when some particular client you're evaluating needs to run on the web server, that's a limitation of that client, not of the protocol - not all birds can fly, but flying is totally one of the options for birds; we should try an eagle, not an emu, if we want flying.
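
For example, a DNS-01 run from a separate utility box never touches the web servers at all (lego shown only because it came up elsewhere in the thread; provider, token and names are placeholders):

    # API credentials for the DNS provider, read from the environment
    export CLOUDFLARE_DNS_API_TOKEN='<api-token>'

    # solve the challenge purely in DNS, then push the issued cert to the web
    # tier with whatever deployment mechanism you already trust
    lego --accept-tos --email ops@example.com --dns cloudflare \
         --domains www.example.com run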

immibis a day ago | parent | prev [-]

You could if your domain was that valuable. Most aren't.

throw0101b 2 days ago | parent | prev | next [-]

> Some people don't want to be forced to run a bunch of stuff they don't understand on the server, and I agree with them.

There are a number of shell-based ACME clients whose prerequisites are: OpenSSL and cURL. You're probably already relying on OpenSSL and cURL for a bunch of things already.

If you can read shell code you can step through the logic and understand what they're doing. Some of them (e.g., acme.sh) often run as a service user (e.g., default install from FreeBSD ports) so the code runs unprivileged: just add a sudo (or doas) config to allow it to restart Apache/nginx.
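
The doas rule for that is a one-liner (user, path and service name are placeholders; FreeBSD paths shown since that's where the ports default applies):

    # /usr/local/etc/doas.conf
    # the acme service user may do exactly one thing as root: reload nginx
    permit nopass acme as root cmd /usr/sbin/service args nginx reload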

hannob 2 days ago | parent | prev [-]

> Some people don't want to be forced to run a bunch of stuff they don't understand on the server

Honest question:

* Do you understand OS syscalls in detail?

* Do you understand how your BIOS initializes your hardware?

* Do you understand how modern filesystems work?

* Do you understand the finer details of HTTP or TCP?

Because... I don't. But I know enough about them that I'm quite convinced each of them is a lot more difficult to understand than ACME. And all of them and a lot more stuff are required if you want to run a web server.

sussmannbaka 2 days ago | parent | next [-]

This point is so tired. I don’t understand how a thought forms in my neurons, eventually matures into a decision and how the wires in my head translate this into electrical pulses to my finger muscles to type this post so I guess I can’t have opinions about complexity.

snowwrestler 2 days ago | parent | prev | next [-]

I get where you’re going with this, but in this particular case it might not be relevant because there’s a decent chance that Rachel By The Bay does actually understand all those things.

frogsRnice 2 days ago | parent | prev | next [-]

Sure - but people are still free to decide where they draw the line.

Each extra bit of software is an additional attack surface after all

fc417fc802 2 days ago | parent | prev | next [-]

An OS is (at least generally) a prerequisite. If minimalism is your goal then you'd want to eliminate tangentially related things that aren't part of the underlying requirements.

If you're a fan of left-pad I won't judge but don't expect me to partake without bitter complaints.

kjs3 2 days ago | parent | prev [-]

I hear some variation of this line of 'reasoning' about once a week, and it's always followed by some variation of "...and that's why we shouldn't have to do all this security stuff you want us to do".