Shank 3 days ago

> The current preview implementation supports HTTP-01 challenges to verify the client’s domain ownership.

DNS-01 is probably the most impactful for nginx users whose instances aren't public facing (e.g., via Nginx Proxy Manager). I really want to see DNS-01 land! I've always felt it's also one of the cleanest challenges, because it's just updating some records and doesn't need to be directly tethered to what you're hosting.

kijin 2 days ago | parent | next [-]

A practical problem with DNS-01 is that every DNS provider has a different API for creating the required TXT record. Certbot has more than a dozen plugins for different providers, and the list is growing. It shouldn't be nginx's job to keep track of all these third-party APIs.

It would also be unreasonable to tell everyone to move their domains to a handful of giants like AWS and Cloudflare who already control so much of the internet, just so they could get certificates with DNS-01. I like my DNS a bit more decentralized than that.

sureglymop 2 days ago | parent | next [-]

That is true and it is annoying. They should really just support RFC 2136 instead of building their own APIs. Lego also supports this and pretty much all DNS servers have it implemented. At least I can use it with my own DNS server...

https://datatracker.ietf.org/doc/html/rfc2136
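
For anyone curious, a minimal sketch of what an RFC 2136 update of the ACME TXT record looks like with dnspython; the server address, key name, and secret here are placeholders:

    import dns.query
    import dns.tsigkeyring
    import dns.update

    # TSIG key as configured on the authoritative server (placeholder values;
    # "c2VjcmV0" is just base64("secret"), replace with your real key).
    keyring = dns.tsigkeyring.from_text({"acme-key": "c2VjcmV0"})

    # Build a dynamic update for the example.com zone; record names below are
    # relative to the zone, so "_acme-challenge" means _acme-challenge.example.com.
    update = dns.update.Update("example.com", keyring=keyring, keyname="acme-key")
    update.add("_acme-challenge", 60, "TXT", "<token-from-acme-server>")

    # Send it straight to the primary name server.
    response = dns.query.tcp(update, "203.0.113.10", timeout=10)
    print(response.rcode())  # 0 (NOERROR) means the record was accepted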

cpach 2 days ago | parent | prev [-]

This is a very good point.

I wonder what a good solution to this would be? In theory, Nginx could call another application that handles the communication with the DNS provider, so that the user can tailor it to their needs. (The user could write it in Python or Go or whatever.) Not sure how robust that would be though.
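
Certbot's manual auth hook works roughly that way already, so something similar seems plausible: nginx would shell out to a user-supplied program with the domain and TXT value, and that program talks to whatever DNS API (or RFC 2136 server) the user actually has. A rough sketch of such a hook; the environment variable names and the provider endpoint are made up for illustration, not anything nginx defines:

    #!/usr/bin/env python3
    """Hypothetical DNS hook: publish the ACME TXT record via the user's own provider API."""
    import os

    import requests

    def publish_txt(domain: str, value: str) -> None:
        # Replace with whatever your DNS provider (or your own RFC 2136 shim) expects.
        resp = requests.post(
            "https://dns.provider.example/api/records",   # placeholder endpoint
            json={"name": f"_acme-challenge.{domain}", "type": "TXT", "content": value},
            headers={"Authorization": "Bearer " + os.environ["DNS_API_TOKEN"]},
            timeout=30,
        )
        resp.raise_for_status()

    if __name__ == "__main__":
        # Made-up variable names; the real contract would be defined by nginx.
        publish_txt(os.environ["ACME_DOMAIN"], os.environ["ACME_TXT_VALUE"])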

uncleJoe 2 days ago | parent | prev | next [-]

no need to wait: https://en.angie.software/angie/docs/configuration/modules/h...

(Angie is the nginx fork led by original nginx developers who left F5)

tmcdos 2 days ago | parent [-]

What are the main differences between Angie and freenginX.org ?

rfmoz 2 days ago | parent | prev | next [-]

The problem with DNS-01 is that you can only use one delegation at a time. I mean, if you configure a wildcard cert with _acme-challenge.example.com delegated to Google, you can't also use it with Cloudflare, because validation relies on a single DNS authorization label (subdomain).

The solution has been evolving over the years, and currently the latest IETF draft is https://datatracker.ietf.org/doc/draft-ietf-acme-dns-account...

The new proposal brings the dns-account-01 challenge, incorporating the ACME account URL into the DNS validation record name.
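
As I read the draft, the record name is derived from the account URL roughly like this (label = underscore plus the lowercased base32 of the first 10 bytes of the SHA-256 of the account URL; check the draft for the authoritative construction):

    import base64
    import hashlib

    account_url = "https://example.com/acme/acct/ExampleAccount"  # placeholder account URL

    digest = hashlib.sha256(account_url.encode("ascii")).digest()
    label = "_" + base64.b32encode(digest[:10]).decode("ascii").lower()

    # Something like _xxxxxxxxxxxxxxxx._acme-challenge.example.com, so two
    # different ACME accounts get two different validation record names.
    print(f"{label}._acme-challenge.example.com")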

clvx 2 days ago | parent | prev | next [-]

But you have to have your DNS API key loaded, and many DNS providers don't allow API keys scoped per zone. I do like it, but a compromise could be awful.

qwertox 2 days ago | parent | next [-]

You can make the NS record for _acme-challenge.domain.tld point to another server under your control; that way you don't have to update the zone through your DNS hoster. That server then only needs to be able to answer the challenge queries it receives.

jacooper 2 days ago | parent [-]

How?

andreashaerter 2 days ago | parent | next [-]

CNAMEs. I do this for everything. Example:

1. Your main domain is important.example.com with provider A. No DNS API token for security.

2. Your throwaway domain is example.net, in a dedicated account with provider B; its DNS API token lives in your ACME client.

3. You create _acme-challenge.important.example.com not as a TXT record via API but as a permanent CNAME to _acme-challenge.example.net or _acme-challenge.important.example.com.example.net.

4. Your ACME client writes the challenge responses for important.example.com into a TXT record at the unimportant _acme-challenge.example.net and only has API access to provider B. If that gets hacked and example.net is lost, you change the CNAMEs and use a new domain whatever.tld as the CNAME target. (A quick way to verify the delegation is sketched after the config example below.)

acme.sh supports this (see https://github.com/acmesh-official/acme.sh/wiki/DNS-alias-mo...; it also works for wildcards, as described there), and most ACME clients do.

I also wrote an acme.sh Ansible role supporting this: https://github.com/foundata/ansible-collection-acmesh/tree/m.... Example values:

  [...]
  # certificate: "foo.example.com" with an additional "bar.example.com" SAN
  - domains:
    - name: "foo.example.com"
      challenge:  # parameters depend on type
        type: "dns"
        dns_provider: "dns_hetzner"
        # CNAME _acme-challenge.foo.example.com => _acme-challenge.foo.example.com.example.net
        challenge_alias: "foo.example.com.example.net"
    - name: "bar.example.com"
      challenge:
        type: "dns"
        dns_provider: "dns_inwx"
        # CNAME _acme-challenge.bar.example.com => _acme-challenge.example.net
        challenge_alias: "example.net"
  [...]
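
As mentioned in step 4 above, a quick way to verify the delegation is to resolve the challenge name and confirm it follows the CNAME into the throwaway domain; a sketch with dnspython, using the example names from the steps above:

    import dns.resolver

    # The permanent CNAME created in step 3.
    name = "_acme-challenge.important.example.com"

    # Resolving TXT at the original name should transparently follow the CNAME
    # into example.net once the ACME client has published a challenge there.
    answer = dns.resolver.resolve(name, "TXT")
    print("canonical name:", answer.canonical_name)  # should end up under example.net
    for rdata in answer:
        print("challenge:", rdata.to_text())
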
theschmed 2 days ago | parent | next [-]

Thank you for this clear explanation.

teruakohatu 2 days ago | parent | prev [-]

This has blown my mind. It's been a constant source of frustration, since Cloudflare stubbornly refuses to allow non-enterprise accounts to have a separate key per zone. The thread requesting it is a masterclass in passive-aggressiveness:

https://community.cloudflare.com/t/restrict-scope-api-tokens...

Jnr 2 days ago | parent | next [-]

When setting up the API key, use the "Select zones to include or exclude." section. Works fine on the free account.

teruakohatu 2 days ago | parent [-]

I should have clarified: you can't do it for subdomains on a non-enterprise account.

Kovah 2 days ago | parent | prev [-]

Could you elaborate on the separate key per zone issue? It's possible to create different API keys which have only access to a specific zone, and I'm a non-enterprise user.

johnmaguire 2 days ago | parent [-]

This allows you to restrict it to a domain (e.g. example.com) but not a sub-domain of that domain.

Kovah 2 days ago | parent [-]

Ah I see, thanks for the clarification!

bwann 2 days ago | parent | prev | next [-]

I used the acme-dns server (https://github.com/joohoi/acme-dns) for this. It's basically a mini DNS server with a very basic API, backed by SQLite. All of my acme.sh instances talk to it to publish TXT records, and it accepts queries from the internet for those TXT records.

There's an NS record so that *.acme-dns.example.com delegates queries to it, and each of my hosts that needs a cert has a public CNAME, like _acme-challenge.www.example.com CNAME asdfasf.acme-dns.example.com, which points back to the acme-dns server.

When setting up a new hostname/certificate, a REST request is sent to acme-dns to register a new username/password/subdomain which is fed to acme.sh. Then every time acme.sh needs to issue/renew the certificate it sends the TXT info to the internal acme-dns server, which in turn makes it available to the world.
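
For anyone who hasn't seen acme-dns, the API really is tiny; roughly like this (endpoint and field names from memory of its README, so double-check against the repo):

    import requests

    ACME_DNS = "https://acme-dns.example.com"  # your acme-dns instance

    # One-time registration: acme-dns hands back credentials plus a random subdomain.
    reg = requests.post(f"{ACME_DNS}/register", timeout=30).json()
    # reg contains "username", "password", "subdomain" and "fulldomain";
    # "fulldomain" (e.g. asdfasf.acme-dns.example.com) is the CNAME target.

    # At issue/renew time, the ACME client publishes the challenge value:
    requests.post(
        f"{ACME_DNS}/update",
        headers={"X-Api-User": reg["username"], "X-Api-Key": reg["password"]},
        json={"subdomain": reg["subdomain"], "txt": "<43-character challenge value>"},
        timeout=30,
    ).raise_for_status()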

dwood_dev 2 days ago | parent | prev | next [-]

Usually you just CNAME it.

You can cname _acme-challenge.foo.com to foo.bar.com.

Now, when you do the DNS challenge, you make a TXT record at foo.bar.com with the challenge response; through CNAME redirection, it is picked up as if it were directly at _acme-challenge.foo.com. You can now issue wildcard certs for anything under foo.com.

I have it on my backlog to build an automated solution later this year to handle this for hundreds of individual domains and then put the resulting certificates in AWS Secrets Manager.

I'm also going to see if I can make some sort of ACME proxy, so internal clients authenticate to me and, since they can't control DNS, I make the requests on their behalf. We need to get prepared for ACME everywhere. In May 2026 it's 200-day certs, and it only goes down from there.

qwertox 2 days ago | parent | prev | next [-]

In my case I have a very small nameserver at ns.example.com. So I set the NS record for _acme-challenge.example.com to ns.example.com.

An A-record lookup for ns.example.com resolves to the IP of my server.

This server listens on port 53. It is a small custom Python server using `dnslib`, which also listens on, say, port 8053 for incoming HTTPS connections.

In certbot I have a custom hook which, when it is passed the challenge for the domain verification, sends the challenge information via HTTPS to ns.example.com:8053/certbot/cache. The small DNS server stores it and waits for a DNS query for that challenge to come in on port 53; when one does, it serves that challenge's TXT record.

  # Inside the DNS request handler: `a` is the dnslib reply being built and
  # storage['domains'] maps each domain to its pending challenge values.
  elif qtype == 'TXT':
    if qname.lower().startswith('_acme-challenge.'):
      domain = qname[len('_acme-challenge.'):].strip('.').lower()
      if domain in storage['domains']:
        for verification_code in storage['domains'][domain]:
          a.add_answer(*dnslib.RR.fromZone(qname + " 30 IN TXT " + verification_code))
The certbot hook looks like this

   #!/usr/bin/env python3

   import os
   import urllib.parse
   import requests

   # certbot passes the domain and validation string via environment variables.
   r = requests.get('https://ns.example.com:8053/certbot/cache'
                    '?domain=' + urllib.parse.quote(os.environ['CERTBOT_DOMAIN'])
                    + '&validation-code=' + urllib.parse.quote(os.environ['CERTBOT_VALIDATION']))
That one nameserver instance and hook can be used for any domain and certificate, so it is not limited to the example.com domain; it can also deal with challenges for, say, a *.testing.other-example.com wildcard certificate.

And since it already is a nameserver, it might as well serve the A records for dev1.testing.other-example.com, if you've set the NS record for testing.other-example.com to ns.example.com.

cherry_tree 2 days ago | parent | prev [-]

https://cert-manager.io/docs/configuration/acme/dns01/#deleg...

yupyupyups 2 days ago | parent | prev | next [-]

It's time for DNS providers to start supporting TSIG + key management. This is a standardized way to manipulate DNS records, and has a very granular ACL.

We don't need 100s of custom APIs.

https://en.m.wikipedia.org/wiki/TSIG

reactordev 2 days ago | parent [-]

The whole point is to abstract that from the users so they don’t know it’s a giant flat file. Selling a line at a time for $29.99. (I joke, obviously)

withinboredom 2 days ago | parent [-]

Digital Ocean DNS is free (it’s the only reason I have an account there)

immibis 2 days ago | parent | prev | next [-]

General note: your DNS provider can be different from your registrar, even though most registrars are also providers, and you can be your own DNS provider. The registrar is who gets the domain name under your control, and the provider is who hosts the nameserver with your DNS records on it.

qwertox 2 days ago | parent [-]

Yes, and you can be your own DNS provider for just the challenges; everything else can stay at your original DNS provider.

bananapub 2 days ago | parent | prev | next [-]

No you don't; you can just run https://github.com/joohoi/acme-dns anywhere and then CNAME _acme-challenge.realdomain.com to aklsfdsdl239072109387219038712.acme-dns.anywhere.com. Then your ACME client just talks to the acme-dns API, which lets it do nothing at all aside from dealing with challenges for that one long random domain.

Arnavion 2 days ago | parent | next [-]

You can do it with an NS record, i.e. _acme-challenge.realdomain.com pointing to a DNS server that you can program to serve the challenge response. No need to make a CNAME and involve an additional domain in the middle.

aflukasz 2 days ago | parent [-]

Yeah, but then you can just as well use http-01 with about the same effort.

gruez 2 days ago | parent [-]

No, because dns-01 supports wildcard certificates, unlike http-01.

cpach 2 days ago | parent | next [-]

dns-01 is also good for services on a private network.

aflukasz 2 days ago | parent | prev [-]

Ah, good point.

8organicbits 2 days ago | parent | prev | next [-]

There's a SaaS version as well, if you don't want to self-host.

https://docs.certifytheweb.com/docs/dns/providers/certifydns...

rglullis 2 days ago | parent | prev [-]

I've been hoping to get ACME challenge delegation on traefik working for years already. The documentation says it supports it, but it simply fails every time.

If you have any idea how this tool would work on a docker swarm cluster, I'm all ears.

grim_io 2 days ago | parent | prev | next [-]

Sounds like a DNS provider problem. Why would Nginx feel the need to compromise because of some 3rd party implementation detail?

toomuchtodo 2 days ago | parent [-]

Because users would pick an alternative solution that meets their needs when they don't have leverage or ability to change DNS provider. Have to meet users where they are when they have options.

UltraSane 2 days ago | parent | prev | next [-]

This concerned me greatly, so I use AWS Route 53 for DNS with an IAM policy that only allows the key to work from specific IP addresses and limits it to creating and deleting TXT records for a specific record set. I love it when I can create exactly the permissions I want.

AWS IAM can be a huge pain but it can also solve a lot of problems.

https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_p...

https://repost.aws/questions/QU-HJgT3V0TzSlizZ7rVT4mQ/how-do...

https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/sp...
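
For reference, the policy described above looks roughly like this; treat it as a sketch and verify the Route 53 condition key names against the docs linked above, since I'm writing them from memory:

    import json

    # Least-privilege sketch for an ACME DNS-01 client using Route 53.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["route53:ChangeResourceRecordSets"],
            "Resource": "arn:aws:route53:::hostedzone/EXAMPLEZONEID",
            "Condition": {
                # Only TXT records, only the ACME challenge name (key names from memory).
                "ForAllValues:StringEquals": {
                    "route53:ChangeResourceRecordSetsRecordTypes": ["TXT"],
                    "route53:ChangeResourceRecordSetsNormalizedRecordNames": [
                        "_acme-challenge.example.com"
                    ],
                },
                # Only from the renewal host's IP.
                "IpAddress": {"aws:SourceIp": ["203.0.113.10/32"]},
            },
        }],
    }
    print(json.dumps(policy, indent=2))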

ddtaylor 2 days ago | parent | prev | next [-]

It's a bit of a pain in the ass, but you can actually just publish the DNS records yourself. It's clear they are on the way out though as I believe it's only a 30 day valid certificate or something.

I use this for my Jellyfin server at home so that anyone can just type in blah.foo regardless of whether their device supports anything like mDNS, as half the devices claim to support it but don't do it correctly.

fmajid 2 days ago | parent | prev | next [-]

My company's DNS provider doesn't even have an API so I delegated to a subdomain, hosted it on PowerDNS, and used Lego to automate the ACME.

quicksilver03 2 days ago | parent | prev | next [-]

Is having one key per zone worth paying money for? It's on the list of features I'd like to implement for PTRDNS because it makes sense for my own use case, but I don't know if there's enough interest to make it jump to the top of this list.

hashworks 2 days ago | parent | prev | next [-]

If you host a hidden primary yourself you get that easily.

Sesse__ 2 days ago | parent [-]

Many DNS providers also don't support having an external primary.

alanpearce 2 days ago | parent | next [-]

Hurricane Electric support a hidden primary as part of their free DNS nameserver service (do you actually want to expose your primary when someone else can handle the traffic?)

https://dns.he.net

Sesse__ 2 days ago | parent [-]

Yup, but it's a bit of a dance to bootstrap: they require you to already have delegated to them, while some TLDs require all NSes to be in sync and answering for the domain before delegating…

nulbyte 2 days ago | parent | prev | next [-]

Do most of them let you add an NS record?

qwertox 2 days ago | parent [-]

And if they don't, you might consider switching to Cloudflare for DNS hosting.

rfmoz 2 days ago | parent | prev [-]

Give DNSMadeEasy or RcodeZero a try.

xiconfjs 2 days ago | parent | prev [-]

If even PowerDNS doesn't support it :(

tok1 2 days ago | parent [-]

True for the API, but you can do dynamic DNS updates (RFC 2136), TSIG-authenticated on a per-zone basis. [1]

It can even be controlled quite granularly with a Lua-based update policy, e.g. restricting updates to only the ACME TXT records. [2]

[1] https://doc.powerdns.com/authoritative/dnsupdate.html

[2] https://github.com/PowerDNS/pdns/wiki/Lua-Examples-(Authorit...

chaz6 2 days ago | parent | prev | next [-]

One of Traefik's shortcomings with ACME is that you can only use one API key per DNS provider. This is problematic if you want to restrict API keys to a domain, or use domains belonging to two different accounts. I hope nginx will not have the same constraint.

mholt 2 days ago | parent | next [-]

This is one of the main reasons Caddy stopped using lego for ACME and I wrote our own ACME stack.

navigate8310 2 days ago | parent | prev [-]

You can use CNAME to handle multiple DNS challenge providers. https://doc.traefik.io/traefik/reference/install-configurati...

samgranieri 2 days ago | parent | prev | next [-]

I use dns-01 in my homelab with step-ca and Caddy. It's a joy to use.

reactordev 2 days ago | parent [-]

+1 for caddy. nginx is so 2007.

darkwater 2 days ago | parent | next [-]

Caddy is just for developers who want to publish/test the thing they write. For power users or infra admins, nginx is still much more valuable. And yes, I use Caddy in my home lab and it's nice and all, but it's not as flexible as nginx.

reactordev 2 days ago | parent | next [-]

Caddy is in use here in production. 14M requests an hour.

mholt 2 days ago | parent [-]

Where's that if I may ask?

reactordev 2 days ago | parent [-]

Trust me, you don’t want to know. Just know - it’s working great and thank you. GovCloud be dragons.

j-krieger 2 days ago | parent | prev [-]

We use Caddy across hundreds of apps with 10s of millions of requests per day in production.

mholt 2 days ago | parent [-]

Oooh. Can you tell me more about this?

reactordev 2 days ago | parent | next [-]

In case people are wondering, this is the author of Caddy.

He’s curious where it’s being used outside of home labs and in small shops. Matt, it’s fantastic software and will only get better as go improves.

I used it in a proxy setup for ingress to Kubernetes that's overlaid across multiple clouds, for the government (prior admin; this admin killed it). I can't tell you more than that, other than that it goes WWW -> ALB -> Caddy Cluster * Other Cloud -> K8s Router -> K8s pod -> Fiber Golang service. :chefs kiss:

When a pod is registered to the K8s router, we fire off a request to the caddy cluster to register the route. Bam, we got traffic, we got TLS, we got magic. No downtime.

reactordev 2 days ago | parent [-]

I almost forgot. Matt. We added a little sugar to Caddy for our cluster. Hashicorp's memberlist. So we can sync the records. It worked great. Sadly, I can't share it but it's rather trivial to implement.

mholt a day ago | parent [-]

Wonderful info, and feedback -- thank you so much. Happy that it works for you!

j-krieger 2 days ago | parent | prev [-]

Sure. University/government sector. I know quite a few unis/projects in that field that switched to Caddy, since gigantic IP ranges and deep subdomains with stakeholders of many different classes have certain PKI requirements, and Caddy makes using ACME easy. We deploy a self-service tool where people can generate EAB IDs and HMAC keys for a subdomain they own.

Complex root-domain routing and complex dynamic rewrite logic remain behind Apache/nginx/HAProxy; a lot of apps are then served in a container architecture with Caddy for easy cert renewal, without relying on hacky certbot setups. So we don't really serve that much traffic with just one instance. Also, a lot of our traffic is bots. More than one would think.

The basic configuration being tiny makes it the perfect fit for people with varying capabilities and know-how when it comes to DevOps. As a DevOps engineer, I enjoy the easy integration with Tailscale.

mholt a day ago | parent [-]

Thank you, this is amazing feedback/info. Yeah, we think the Tailscale integration is pretty neat too!

RadiozRadioz 2 days ago | parent | prev | next [-]

So a tool's value should be judged as inversely proportional to its age?

reactordev 2 days ago | parent | next [-]

A tool's value is in the eye of the beholder. nginx ceased being valuable to me when they decided to change licenses, go private equity, not adapt to orchestration needs, ignore HTTP standards, and not release meaningful updates in a decade.

yjftsjthsd-h 2 days ago | parent | next [-]

> when they decided to change licenses,

https://github.com/nginx/nginx/blob/master/LICENSE looks like a nice normal permissive license. I don't care that there's a premium version if all the features I want are in the OSS version.

jcgl 2 days ago | parent | prev [-]

Private equity? Either there’s a story I’m missing, or you’re mischaracterizing F5 as PE.

reactordev a day ago | parent [-]

Look up Angie, freenginx, and the whole Rambler/F5 fiasco. Moscow feds were involved and forced exploitation for profit.

mholt 2 days ago | parent | prev [-]

Maybe inversely proportional to how much the ecosystem moves around it.

supriyo-biswas 2 days ago | parent | prev [-]

Only if they'd get the K8s ingress out of the WIP phase; I can't wait to possibly get rid of the cert-manager and ingress shenanigans you get with others.

reactordev 2 days ago | parent | next [-]

Yup. I can’t wait for the day I can kill my caddy8s service.

The best thing about Caddy is that you can reload config and add sites and routes without ever having to shut down. Writing a service to keep your orchestration platform and your ingress in sync is meh. K8s has the events, the DNS service has the src mesh records; you just need a way to tell Caddy to send it to your backend.

The feature should be done soon but they need to ensure it works across K8s flavors.

01HNNWZ0MV43FF 2 days ago | parent | next [-]

I think you can do that with nginx too, but the SWAG wrapper discourages it for some reason.

pushrax 2 days ago | parent | prev [-]

Just send SIGHUP to nginx and it will reload all the config; there are very few settings that require a restart.

reactordev 2 days ago | parent [-]

Sure, but how? From the container? The host it's on? Caddy exposes this as an API.

ilogik 2 days ago | parent | prev [-]

Traefik seems to be ok for us

attentive 2 days ago | parent | prev | next [-]

Yes, ACME-DNS please - https://github.com/joohoi/acme-dns

Lego supports it.

Spivak 2 days ago | parent | prev | next [-]

I don't even know why anyone wouldn't use the DNS challenge unless they had no other option. I've found it to be annoying and brittle, maybe less so now with native web server support. And you can't get wildcards.

cortesoft 2 days ago | parent | next [-]

My work is mostly running internal services that aren’t reachable from the external internet. DNS is the only option.

You can get wildcards with DNS. If you want *.foo.com, you just need to be able to set _acme-challenge.foo.com and you can get the wildcard.

filleokus 2 days ago | parent | next [-]

Spivak is saying that the DNS method is superior (i.e. you are agreeing, and I do too).

One reason I can think of for HTTP-01 / TLS-ALPN-01 is on-demand issuance: issuing the certificate when you get the request. Which might seem insane (and kinda is), but can be useful for e.g. crazy web-migration projects. If you have an enormous, deeply nested domain sprawl that is almost never used but that you need up for some reason, it can be quite handy.

(Another reason, soon, is that HTTP-01 will be able to issue certs for IP addresses: https://letsencrypt.org/2025/07/01/issuing-our-first-ip-addr...)

cortesoft 2 days ago | parent [-]

Oh I totally misread the comment.

Nevermind, I agree!

Sharparam 2 days ago | parent [-]

The comment is strangely worded; I too had to read it a couple of times to understand what they meant.

bryanlarsen 2 days ago | parent | prev | next [-]

> DNS is the only option

DNS and wildcards aren't the only options. I've done annoying hacks to give internal services an HTTPS cert without using either.

But they're the only sane options.

cyberax 2 days ago | parent | prev | next [-]

One problem with wildcards is that any service with *.foo.com can pretend to be any other service. This is an issue if you're using mutual TLS authentication and want to trust the server's certificate.

It'd be nice if LE could issue intermediate certificates constrained to a specific domain ( https://datatracker.ietf.org/doc/html/rfc5280#section-4.2.1.... ).

2 days ago | parent | prev [-]
[deleted]
bityard 2 days ago | parent | prev | next [-]

The advantage to HTTP validation is that it's simple. No messing with DNS or API keys. Just fire up your server software and tell it what your hostname is and everything else happens in the background automagically.

abcdefg12 2 days ago | parent [-]

And if you have two or more servers serving this domain, you're out of luck.

lmz 2 days ago | parent | next [-]

And this is different from DNS how, exactly? The key and resulting cert still need to be distributed among your servers no matter which method is used.

cpach 2 days ago | parent [-]

With dns-01, multiple servers could, independently of each other, fetch a certificate for the same set of hostnames. Not sure if it’s a good idea though.

lmz a day ago | parent [-]

Multiple keys and certs for the same hostname? Will the CA even issue that?

cpach a day ago | parent [-]

I guess it depends on the CA, but some do. Let’s Encrypt does, for example. I guess it’s useful for HA deployments, where load balancers might be spread out across multiple datacenters and stuff like that.

NB that rate limits apply https://letsencrypt.org/docs/rate-limits/

account42 2 days ago | parent | prev [-]

Not really, just forward .well-known/acme-challenge/* requests to a single server or otherwise make sure that the challenge responses are served from all instances.

jeroenhd 2 days ago | parent | prev | next [-]

If you buy your domain from a bottom-of-the-barrel domain reseller and then don't pay for decent DNS, you don't have the option.

Plus, it takes setting up an API key and most of the time you don't need a wildcard anyway.

account42 2 days ago | parent [-]

You don't need API access to your DNS; the ability to delegate the ACME challenge records to your own server is also enough.

Dylan16807 2 days ago | parent | prev | next [-]

I don't know how to make my server log into my DNS, and I don't particularly want to learn how. Mapping .well-known is one line of config.

Wildcards are the only temptation.

account42 2 days ago | parent [-]

Just like you can point .well-known/acme-challenge/ to a writable directory, you can also delegate the relevant DNS names to a name server that you can more easily update.

Dylan16807 2 days ago | parent [-]

Now you want me to rent or install at least two name servers, and then configure them, and then teach my web server how to send them rules?

That's so much more work than either of the options in my first comment. Aliasing a directory takes about one minute.

account42 2 days ago | parent | prev [-]

> I've found it to be annoying and brittle

How so? It's just serving static files.

aoe6721 2 days ago | parent | prev | next [-]

Switch to Angie then. It supports DNS-01 very well.

klysm 2 days ago | parent | prev | next [-]

How does NGINX fit into that though?

geek_at a day ago | parent [-]

I am using a bash script on my VPS to get a wildcard certificate and just scp the cert to my other reverse proxies, some using nginx but some Caddy or Traefik.

Wrote an article on how to set it up: https://blog.haschek.at/2023/letsencrypt-wildcard-cert.html

altairprime 2 days ago | parent | prev | next [-]

Does DNS-01 support DNS-over-HTTPS to the registered domain name servers? If so, then it should be extremely simple to extend nginx to support DNS claims; if not, perhaps DNS-01 needs improvements.

cpach 2 days ago | parent [-]

When placing the order, you get a funny text string (a token) from the ACME provider. You need to create a TXT record whose value is derived from it. How you create the TXT record is up to you and your DNS server; the ACME provider doesn't care.

I don’t believe DNS-over-HTTPS is relevant in this context. AFAIK, it’s used by clients who want to query a DNS server, and not for an operator who wants to create a DNS record. (Please correct me if I’m wrong.)
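
For the record, the derivation is just a hash of the key authorization (RFC 8555, section 8.4); roughly:

    import base64
    import hashlib

    def b64url(data: bytes) -> str:
        return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

    # Key authorization = challenge token + "." + account key thumbprint (RFC 7638).
    token = "<token-from-the-acme-server>"
    thumbprint = b64url(hashlib.sha256(b"<canonical account JWK>").digest())
    key_authorization = f"{token}.{thumbprint}"

    # This is what goes into the TXT record at _acme-challenge.<domain>.
    txt_value = b64url(hashlib.sha256(key_authorization.encode("ascii")).digest())
    print(txt_value)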

0x0000000 2 days ago | parent [-]

The ACME provider makes a query to the DNS server to validate the record exists and contains the right "funny string". Parent's question was whether that query is/can be made via DoH.

cpach 2 days ago | parent [-]

Perhaps I have poor imagination, but I fail to see why it would matter?

0x0000000 2 days ago | parent [-]

Because nginx, as an HTTP server, could answer the query?

Arrowmaster 2 days ago | parent | next [-]

You want to build a DNS server into nginx so you can respond to DoH queries for the domain you are hosting on that nginx server?

Let's ignore that DoH is a client-oriented protocol and that there's no sane way to run only a DoH server without an underlying DNS server. How do you plan to get the first certificate, so that the query to the DoH server doesn't get rejected for an invalid certificate?

xg15 2 days ago | parent | prev [-]

At that point you might as well use the HTTP-01 challenge. I think the whole utility of DNS-01 is that you can use it if you don't want to expose the HTTP server to the internet.

jcgl 2 days ago | parent [-]

No, that’s just one of the use-cases. Also:

- Wildcard certs. DNS-01 is a strict requirement here.

- Certs for a service whose TLS is terminated by multiple servers (e.g. load balancers). DNS-01 is a practical requirement here because only one of the terminating servers would be able to respond during an HTTP or ALPN challenge.

account42 2 days ago | parent | next [-]

> DNS-01 is a practical requirement here because only one of the terminating servers would be able to respond during an HTTP or ALPN challenge.

Reverse-proxying or otherwise forwarding requests for .well-known/acme-challenge/ to a single server should be just as easy to set up as DNS-01.

jcgl 2 days ago | parent [-]

But then you have to redistribute the cert from that single server to all the others. Which, yes, can be done. But then you've gotta write that glue yourself. What's more, you've now chosen a special snowflake server on which renewals depend.

In other words, no, it's not just as easy as setting up DNS-01. Different operational characteristics, and a need for bespoke glue code.

xg15 2 days ago | parent [-]

> But then you have to redistribute the cert from that single server to all the others.

Wouldn't you have to do that anyway? Or is the idea that each server requests and renews a separate cert for itself? That sounds as if you'd have to watch out for multiple servers stepping on each other's toes during the DNS-01 challenge, if there is ever a situation where two or more servers want to renew their cert at the same time.

cpach 2 days ago | parent [-]

Yup. There’s an RFC draft that addresses this dilemma.

https://datatracker.ietf.org/doc/draft-ietf-acme-dns-account...

jcgl 2 days ago | parent [-]

Afaiu, that's only a problem for trying to _delegate_ to multiple clients. But routine operation with multiple clients works just fine in my experience (doing multi-region load balancing). Multiple TXT records are created, I think (speaking off the top of my head).

cpach a day ago | parent [-]

Ah! I stand corrected.

jcgl a day ago | parent [-]

I wanted to quickly double-check my (albeit limited) experience against docs. The RFC[0] implies the possibility of what I described (provided a well-behaved ACME client that doesn't clobber other TXT records):

   2.  Query for TXT records for the validation domain name
   
   3.  Verify that the contents of one of the TXT records match the
       digest value
And then the Let's Encrypt docs[1] show how a well-behaved client wouldn't clobber TXT records from concurrent instances:

> You can have multiple TXT records in place for the same name. For instance, this might happen if you are validating a challenge for a wildcard and a non-wildcard certificate at the same time. However, you should make sure to clean up old TXT records, because if the response size gets too big Let's Encrypt will start rejecting it.

> [...]

> It works well even if you have multiple web servers.

That bit about "multiple webservers" is a little ambiguous, but I think the preceding line indicates clearly enough how everything is supposed to work.

[0] https://datatracker.ietf.org/doc/html/rfc8555#section-8.4

[1] https://letsencrypt.org/docs/challenge-types/#dns-01-challen...

xg15 2 days ago | parent | prev [-]

Ah, that makes sense. Thanks!

creatonez 2 days ago | parent | prev [-]

Why would nginx ever need support for the DNS-01 challenge type? It always has access to `.well-known` because nginx is running an HTTP server for the entire lifecycle of the process, so you'd never need to use a lower level way of doing DV. And that seems to violate the principle of least privilege, since you now need a sensitive API token on the server.

0x457 2 days ago | parent | next [-]

Because while nginx always has access to .well-known, the thing that validates on the issuer's side might not be able to reach it. I use the DNS challenge to issue certificates for domains that resolve to IPs in my overlay network.

The issue is that supporting dns-01 isn't just supporting dns-01: it means providing a common interface to interact with the different DNS providers involved.

petee 2 days ago | parent [-]

dns-01 is just a challenge; which API or DNS update system should nginx support, then? Some provider API, AXFR, or UPDATE?

I think this is kinda the OP's point: nginx is an HTTP server, why should it be messing with DNS? There are plenty of other ACME clients that do this with ease.

2 days ago | parent | next [-]
[deleted]
0x457 a day ago | parent | prev [-]

I mean, you just repeated my explanation of why supporting dns-01 in nginx isn't as straightforward as http-01. I've explained why the dns-01 challenge is still useful and might be required for some users.

petee a day ago | parent [-]

I misread your first paragraph and was more responding to the second, which I took as supporting adding the dns implementation, in reply to the OP.

It may still be required by some users, but I don't think that it makes sense for nginx

0x457 13 hours ago | parent [-]

> I took as supporting adding the dns implementation

Well, I am supporting it, but I pointed out why it's not as straightforward as supporting http-01.

> I don't think that it makes sense for nginx

It makes sense for nginx because ultimately I don't get certificates just for the fun of it; I do it to give them to some HTTP server. So it makes sense.

However, this isn't a feature that would be used by paid users, and F5 seems opposed to making life better for users of the OSS version.

justusthane 2 days ago | parent | prev | next [-]

You can’t use HTTP-01 if the server running nginx isn’t accessible from the internet. DNS-01 works for that.

chrismorgan 2 days ago | parent | prev | next [-]

Wildcard certificates are probably the most important answer: they’re not available via HTTP challenge.

abcdefg12 2 days ago | parent | prev | next [-]

Because you might have more than one server serving this domain

lukeschlather 2 days ago | parent | prev [-]

Issuing a new certificate with the HTTP challenge pretty much requires you allow for 15 minutes of downtime. It's really not suitable for any customer-facing endpoint with SLAs.

kijin 2 days ago | parent | next [-]

Only if you let certbot take down your normal nginx and occupy port 80 in standalone mode. Which it doesn't need to, if normal nginx can do the job by itself.

When I need to use the HTTP challenge, I always configure the web server in advance to serve /.well-known/ from a certain directory and point certbot at it with `certbot certonly --webroot-path`. No need to take down the normal web server. Graceful reload. Zero downtime. Works with any web server.
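
For anyone unfamiliar with the webroot method, all the client has to do at validation time is roughly this (sketch; the directory is whatever you've configured the web server to serve /.well-known/acme-challenge/ from):

    from pathlib import Path

    # Directory that nginx already serves as /.well-known/acme-challenge/.
    challenge_dir = Path("/var/www/letsencrypt/.well-known/acme-challenge")
    challenge_dir.mkdir(parents=True, exist_ok=True)

    # One small file per challenge: the file name is the token, the content is
    # the key authorization (token + "." + account key thumbprint).
    token = "<token-from-acme-server>"
    (challenge_dir / token).write_text(token + ".<account-key-thumbprint>")

    # The CA then fetches http://<domain>/.well-known/acme-challenge/<token>;
    # the web server keeps serving everything else, hence zero downtime.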

chrismorgan 2 days ago | parent | prev | next [-]

Sounds like you’re doing it wrong. I don’t know about this native support, but I’d be very surprised if it was worse than the old way, which could just have Certbot put files in a path NGINX was already serving (webroot method), and then when new certificates are done send a signal for NGINX to reload its config. There should never be any downtime.

kijin 2 days ago | parent [-]

Certbot has a "standalone" mode that occupies port 80 and serves /.well-known/ by itself.

Whoever first recommended using that mode in anything other than some sort of emergency situation needs to be given a firm kick in the butt.

Certbot also has a mode that mangles your apache or nginx config files in an attempt to wire up certificates to your virtual hosts. Whoever wrote the nginx integration also needs a butt kick; it's terrible. I've helped a number of people fix their broken servers after certbot mangled their config files. Just because you're on a crusade to encrypt the web doesn't give you the right to mess with other programs' config files; that's not how Unix works!

jofla_net 2 days ago | parent | next [-]

Also, whoever decided that service providers were no longer autonomous to determine the expiration times of their own infrastructure's certificates should get that boot-to-the-head as well.

It is not as if they couldn't already choose (to buy) such short lifetimes.

Authoritarianism at its finest.

jeltz 2 days ago | parent | prev | next [-]

Certbot also fights automation and provisioning with e.g. Ansible, by modifying config files to remember command-line options if you ever need to do anything manually in an emergency.

It is a terrible piece of software. I use dehydrated, which is much friendlier to automation.

tomku 2 days ago | parent | prev [-]

Those choices, and Certbot strongly encouraging snap installation, were enough to get me to switch to https://go-acme.github.io/lego/, which I've been very happy with since. It's very stable and feels like it was built by people who actually operate servers.

Kwpolska 2 days ago | parent | prev | next [-]

Where would this downtime come from? Your setup is really badly configured if you need downtime to serve a new static file.

2 days ago | parent | prev [-]
[deleted]