| ▲ | muppetman a day ago |
This idea we seem to have moved towards, where every application ALSO includes its own ACME support, really annoys me. I much prefer the idea that there are well-written clients whose job it is to do the ACME handling.
Is my Postfix mailserver soon going to have an ACME client shoehorned in? I've already seen GitHub issues asking for AdGuardHome (a DNS server that supports blocklists) to have an ACME client built in, thankfully thus far ignored.
Proxmox (a VM hypervisor!) has an ACME client built in. I realise, of course, that the inclusion of an ACME client in a product doesn't mean I need to use their implementation; I'm free to keep using my own independent client. But it seems to me that adding ACME clients to everything is going to cause those projects more PRs, more baggage to drag forward, etc., and confusion for users, as now there are multiple places they could/should be generating certificates. Anyway, grumpy old man rant over. It just seems Zawinski's Law ("Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can.") can be replaced these days with MuppetMan's law: "Every program attempts to expand until it can issue ACME certificates." |
|
| ▲ | nottorp a day ago | parent | next [-] |
| That's okay, next step is to fold both nginx and the acme client into systemd. |
| |
| ▲ | devttyeu a day ago | parent | next [-] | | Careful posting systemd satire here; there is a high likelihood that your comment becomes the reason this feature gets built and PRed by someone bored enough to also read the HN comment section. | | |
| ▲ | devttyeu a day ago | parent [-] | | [Unit]
Description=Whatever
[Service]
ExecStart=/usr/local/bin/cantDoHttpSvc -bind 0.0.0.0:1234
[HTTP]
Domain=https://whatever.net
Endpoint=127.1:1234
Yeah this could happen one day | | |
| ▲ | 9dev a day ago | parent | next [-] | | You know, Tailscale serve basically does this right now, but if I could skip this step and let systemd expose a local socket via HTTPS, automatically attempting to request a certificate for the hostname, with optional configuration in the socket unit file… I would kinda like that actually | | | |
| ▲ | pta2002 a day ago | parent | prev | next [-] | | You can basically implement this right now with a systemd generator. It's not even a particularly bad idea; I kinda want to try hooking it up to nginx or something. It would make adding a reverse proxy route as simple as adding a unit file, and you could depend on it from other units. | |
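For context, a systemd generator is just an executable dropped into /etc/systemd/system-generators/; systemd runs it very early and passes it directories to write unit files into. A minimal sketch of the core loop for the idea above, where the /etc/https-routes config layout and the register-route helper are hypothetical placeholders (only the generator mechanism itself is real):

```shell
# Hypothetical generator core: one .conf file per reverse-proxy route,
# each turned into a oneshot service unit. systemd passes the output
# directory for generated units as the generator's first argument.

generate_route_units() {
  outdir=$1     # directory for generated units ($1 when run by systemd)
  routedir=$2   # hypothetical directory of per-route .conf files
  for conf in "$routedir"/*.conf; do
    [ -e "$conf" ] || continue
    name=$(basename "$conf" .conf)
    cat >"$outdir/https-route-$name.service" <<EOF
[Unit]
Description=HTTPS route for $name (generated)

[Service]
Type=oneshot
ExecStart=/usr/local/bin/register-route $conf
EOF
  done
}
```

Installed as, say, /etc/systemd/system-generators/https-route-generator calling `generate_route_units "$1" /etc/https-routes`, other units could then pull the generated services in via Wants=/After=.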
| ▲ | akagusu a day ago | parent | prev [-] | | I'm sure this will become a dependency of GNOME |
|
| |
| ▲ | arianvanp a day ago | parent | prev | next [-] | | Okay but hear me out If we teach systemd socket activation to do TLS handshakes we can completely offload TLS encryption to the kernel (and network devices) and you get all of this for free. It's actually not a crazy idea in the world of kTLS to centralize TLS handshaking into systems | | |
| ▲ | johannes1234321 a day ago | parent [-] | | Oh, I remember Solaris fanboys praising kernel-level TLS, as it reduced context switching by a lot. I believe they even had a patched OpenSSL making this transparent to OpenSSL-based applications. Linux seems to offer such facilities too, but I've never knowingly used it (it might be that some app used it in the background):
https://lwn.net/Articles/892216/ | | |
| ▲ | reactordev a day ago | parent [-] | | Why stop there? Why not sign and verify off the mother of all root CAs, your TPM 2.0 module's EEPROM? (Fun to walk down through the trees and the silicon desert of despair, to the land of the ROM, where things can never change.) |
|
| |
| ▲ | throw_a_grenade a day ago | parent | prev [-] | | Unironically, I think having a systemd-something util that would provide TLS certs for .services upon encountering a specific config knob in the [Service] section would be much better than having a multitude of uncoordinated ACME clients that will quickly burn through the allowed rate limits. Even just as a courtesy to LE/ISRG's computational resources. | | |
| ▲ | jcgl a day ago | parent | next [-] | | It wouldn't specifically have to be a systemd project or anything; you could make a systemd generator[0] so that you could list out certs as units in the Requires= of a unit. That'd be really neat, actually. [0] https://www.freedesktop.org/software/systemd/man/latest/syst... | | |
| ▲ | throw_a_grenade a day ago | parent [-] | | I found this: https://github.com/woju/systemd-dehydrated/ It essentially creates per-domain units. However, those are timers, not services, because the underlying tool doesn't have a long-running daemon; it's designed to run off cron. So I can't depend on them directly, and I also need to add a multitude of drop-ins that will restart or reload services that use certificates (https://github.com/woju/systemd-dehydrated/blob/master/contr...). Couldn't figure out any way that would automate this better. | | |
| ▲ | jcgl 11 hours ago | parent [-] | | Well, every timer needs a service to activate. And at a cursory glance, this project has oneshot services, which is what I would expect for something like this. So your units (e.g. a webserver) would take After= and Wants=/Requires= on the given oneshot services. This project looks neat! I might give it a try. I had never heard of dehydrated, but I don't particularly love certbot, and would certainly be willing to try. |
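Concretely, the dependency wiring described above could live in a drop-in on the consuming service. The unit names here are made up for illustration; the real per-domain unit names come from whatever renewal tool is in use:

```ini
# /etc/systemd/system/nginx.service.d/cert.conf (hypothetical drop-in)
[Unit]
# Pull in the oneshot cert-renewal service and order nginx after it,
# so the cert exists before the webserver starts.
Wants=cert-renew@example.com.service
After=cert-renew@example.com.service
```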
|
| |
| ▲ | 0x69420 a day ago | parent | prev | next [-] | | multiple services depending on different outputs of a single acme client can be expressed, right now, in 2025, within systemd unit definitions, without deeply integrating a systemd-certd-or-whatever-as-such. which is basically ideal, no? for all the buy-in that the systemd stapling-svchost.exe-onto-cgroups approach asks of us, at the very least we have a sufficiently expressive system to do that sort of thing. where something on the machine has a notion of what wants what from what, and you can issue a command to see whether that dependency is satisfied. like. we are there. good. nice. hopefully ops guys are content to let sleeping dogs lie, right? ...right? | |
| ▲ | Spivak a day ago | parent | prev [-] | | A systemd-certd would actually kinda slap. One cert store to rule them all for clients, a way to define certs and specify where they're supposed to be placed with automatic reload using the systemd dependency solver, a way to mount certs into services privately, a unified interface for interacting with the cert store. | | |
| ▲ | nottorp a day ago | parent [-] | | So ... not only would your system take ages to boot without the internets(tm) because that's how systemd works, it will be extended in the same spirit to not boot at all if letsencrypt is down. Sounds enterprise. Also, you people forgot that my proposal is to also fold the http server in, and ideally all the scripting languages and all of npm just in case. | | |
| ▲ | Spivak 10 hours ago | parent | next [-] | | Well I mean if you configured your system in a manner that requires one of the wait-online services that's kinda on you. It's not required for anything by default. It would be the same for certd. If you configure your system to hold up booting waiting for a cert then that's your choice but there's plenty of ways to have it not. | |
| ▲ | throw_a_grenade a day ago | parent | prev [-] | | ExecStart=/usr/bin/python3 -m http.server
WorkingDirectory=/srv/www
?
|
|
|
|
|
|
| ▲ | EvanAnderson a day ago | parent | prev | next [-] |
I'm with you on this. I run my ACME clients as least-privileged standalone applications. On a machine where you're only running a webserver, I suppose having Nginx do the ACME renewal makes sense. On many of the machines I support, I also need certificates for other services. In many cases I also have to distribute the certificate to multiple machines. I find it easy to manage and troubleshoot a single application handling the ACME process; I can't imagine having multiple logs to review and monitor would be easier. |
|
| ▲ | oliwarner a day ago | parent | prev | next [-] |
The idea that the thing that needs the certificate gets the certificate doesn't seem that perverse to me. The interface/port-bound httpd needs to know what domains it's serving and what certificates it's using. Automating this is pure benefit to those who want it, and a non-issue to those who don't — just don't use it. |
|
| ▲ | atomicnumber3 a day ago | parent | prev | next [-] |
| I personally think nginx is the kind of project I'd allow to have its own acme client. It's extremely extremely widely used software and I would be surprised if less than 50% of the certs LE issues are not exclusively served via nginx. Now if Jenkins adds acme support then yes I'll say maybe that one is too far. |
| |
| ▲ | muppetman a day ago | parent | next [-] | | But it's a webserver. I'm sure it farms out sending emails from forms it serves, and I doubt it has a PHP library built in; surely it farms that out to php-fpm? It doesn't have a Redis library or Node.js built in. Why's ACME different? | | |
| ▲ | tuckerman a day ago | parent | next [-] | | I get what you are saying but surely obtaining a certificate is much closer to being considered a core part of a web server related to transport, especially in 2025 when browsers throw up "doesn’t support a secure connection with HTTPS" messages left and right, than those other examples. I think there is also clearly demand: caddy is very well liked and often recommended for hobbyists and I think a huge part of that is the built in certificate management. | |
| ▲ | andmarios a day ago | parent | prev | next [-] | | Nginx (and Apache, etc) is not just a web server; it is also a reverse proxy, a TLS termination proxy, a load balancer, etc. The key service here is "TLS termination proxy", so being able to issue certificates automatically was pretty high on the wish list. | |
| ▲ | banashark a day ago | parent | prev | next [-] | | Well you say that.... https://openresty.org/en/ "Real-world applications of OpenResty® range from dynamic web portals and web gateways, web application firewalls, web service platforms for mobile apps/advertising/distributed storage/data analytics, to full-fledged dynamic web applications and web sites. The hardware used to run OpenResty® also ranges from very big metals to embedded devices with very limited resources. It is not uncommon for our production users to serve billions of requests daily for millions of active users with just a handful of machines." | |
| ▲ | dividuum a day ago | parent | prev | next [-] | | Well, it already has, among a ton of other modules, a memcached and a JavaScript module (njs), so you’re actually not that far off. An optional ACME module sounds fitting. | |
| ▲ | firesteelrain a day ago | parent | prev [-] | | To your point, we use Venafi and it has clients that act as orchestrators to deploy the new cert and restart the web service. Webservice itself doesn’t need to be ACME aware. Venafi supports ACME protocol so it can be the ACME server like Let’s Encrypt I am speaking purely on prem non internet connect scenario |
| |
| ▲ | chrisweekly a day ago | parent | prev [-] | | "surprised if less than 50% of the certs LE issues are not..." triple-negative, too hard to parse |
|
|
| ▲ | mholt a day ago | parent | prev | next [-] |
| Integrated ACME clients have proven to be more robust, more resilient, more automatic, and easier to use than exposing multiple moving parts: https://github.com/https-dev/docs/blob/master/acme-ops.md#ce... To avoid a splintered/disjoint ecosystem, library code can be reused across many applications. |
|
| ▲ | Ajedi32 a day ago | parent | prev | next [-] |
| It makes sense to me. If an application needs a signed certificate to function properly, why shouldn't it include code to obtain that certificate automatically when possible? Maybe if there were OS level features for doing the same thing you could argue the applications should call out to those instead, but at least on Linux that's not really the case. Why should admins need to install and configure a separate application just to get basic functionality working? |
|
| ▲ | jdboyd a day ago | parent | prev | next [-] |
Proxmox is not a hypervisor; it is a Linux distribution. As such it has a web server, KVM, ZFS, and many other pieces. Maybe the ACME client is built into the web server. Maybe the ACME client is built into their custom management software. Maybe they're just scripting around certbot. I do tend to find that I need multiple services with TLS on the same machine, such as a web server and RabbitMQ, or Postfix and Dovecot. I don't know how having every program include its own ACME client would end up working out; that seems like it could be a mess. On the other hand, I have been having trouble getting them all to pick up updated certificates correctly without manually restarting services after certbot's cron job does a renewal. |
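For the restart problem: certbot runs deploy hooks (executables in /etc/letsencrypt/renewal-hooks/deploy/, with $RENEWED_LINEAGE pointing at the renewed lineage directory) only after a successful renewal. A sketch of a hook that reloads dependent services only when the certificate file actually changed; the state path and service names are assumptions, and RELOAD_CMD is overridable so the logic can be exercised without systemd:

```shell
# Hypothetical deploy-hook helper: compare the cert's hash against the
# last one we saw, and reload the listed services only on a real change.

reload_if_changed() {
  cert=$1; state=$2; shift 2
  new=$(sha256sum "$cert" | cut -d' ' -f1)
  old=$(cat "$state" 2>/dev/null || true)
  if [ "$new" = "$old" ]; then
    return 0                      # cert unchanged: nothing to do
  fi
  mkdir -p "$(dirname "$state")"
  printf '%s\n' "$new" >"$state"  # remember the new cert hash
  for svc in "$@"; do
    # defaults to systemctl reload; override RELOAD_CMD for testing
    ${RELOAD_CMD:-systemctl reload} "$svc"
  done
}
```

As an actual hook, the script body would be something like `reload_if_changed "$RENEWED_LINEAGE/fullchain.pem" /var/lib/certwatch/state nginx postfix dovecot` (state path and services being this example's assumptions).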
|
| ▲ | 9dev a day ago | parent | prev | next [-] |
I’m of the opposite opinion, really: automatic TLS certificate requests are just an implementation detail of software that can advertise itself as accepting encrypted connections. Similarly, many applications include an OAuth client that automatically takes care of requesting and refreshing access tokens, all using a discovery URI and client credentials. Lots of apps should support this automatically, with no intervention necessary, and just communicate securely with each other. And ACME is the way to enable that. |
| |
| ▲ | imiric a day ago | parent [-] | | Why should every piece of software need to support encrypted connections? That is a rabbit hole of complexity which can easily be implemented incorrectly, and is a security risk of its own. Instead, it makes more sense for TLS to be handled centrally by a known and trusted implementation, which proxies the communication with each backend. This is a common architecture we've used for decades. It's flexible, more secure, keeps complexity compartmentalized, and is much easier to manage. | | |
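The centralized pattern described here is a few lines of nginx config; the domain, certificate paths, and backend port below are placeholders:

```nginx
server {
    listen 443 ssl;
    server_name app.example.net;                     # placeholder domain

    # Certs maintained by whichever single ACME client you trust
    ssl_certificate     /etc/ssl/app/fullchain.pem;  # placeholder paths
    ssl_certificate_key /etc/ssl/app/privkey.pem;

    location / {
        # TLS ends here; the backend speaks plain HTTP on loopback
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```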
| ▲ | tuckerman a day ago | parent [-] | | Isn't nginx one of the de facto choices (alongside HAProxy) for such a proxy and therefore it makes sense to include an ACME client? (This might be what you already had in mind but given the top level comment of the thread we are in I wasn't sure) | | |
| ▲ | imiric a day ago | parent [-] | | Yeah, I'm fine with web servers like nginx supporting TLS, ACME, or whatever protocol is required for encryption, since they can be used as proxies. I understood GP to have the opinion that most apps should have this support built-in, which is what I'm arguing against. |
|
|
|
|
| ▲ | dizhn a day ago | parent | prev | next [-] |
I believe Caddy was the first standalone software to include automated ACME. It's a web server (and a proxy), so it's a very good fit: one piece of software, many domains. Proxmox likewise is a hypervisor hosting many VMs (hence domains); another good fit, though as far as I know they don't provide the service for the VMs "yet". |
|
| ▲ | renewiltord a day ago | parent | prev [-] |
| You just don't load the module and use certbot and that will work which is what I'm doing. People get carried away with this stuff. The software is quite modular. It's fine for people to simplify it. For a bunch of tech-aware people the inability for you all here to modify your software to meet your needs is insane. As a 14 year old I was using the ck patch series to have a better (for me) scheduler in the kernel. Every other teenager could do this shit. In my 30s I have a low friction set up where each bit of software only does one thing and it's easy for me to replicate. Teenagers can do this too. Somehow you guys can't do either of these things. I don't get it. Are you stupid? Just don't load the module. Use stunnel. Use certbot. None of these things are disappearing. I much prefer. I much prefer. I much prefer. Christ. Never seen a userbase that moans as much about software (I moan about moaning - different thing) while being unable to do anything about it as HN. |
| |
| ▲ | mikestorrent a day ago | parent [-] | | The unix philosophy is still alive... and by that I mean complaining on newsgroups about things, not "do one thing well" |
|