pixl97 4 days ago

Heh, working with a number of large companies, I've seen most of them moving to internally signed certs on everything because of ever-shortening expiration times. They'll have public certs on edge devices/load balancers, but internal services will have internal-CA-signed certs with long expiry times because of the number of crappy apps that make using certs a pain in the ass.

plorkyeran 4 days ago | parent | next [-]

This is a desired outcome. The WebPKI ecosystem would really like it if everyone stopped depending on them for internal things because it's actually a pretty different set of requirements. Long-lived certs with an internal CA makes a lot of sense and is often more secure than using a public CA.

tetha 3 days ago | parent | next [-]

Our internally provided certs of various CAs have a TTL of 72 hours and should be renewed every 48 hours.

It's been a huge pain, as we have encountered a ton of bugs and missing features in libraries and applications when reloading certs like this. And we have some really ugly workarounds in place, because some applications treat "reload a Consul client" the same as "reload all config, including opening new sockets, adjusting socket parameters, doing TCP connection handover" - all to rebuild a stateless client that throws a few parameters at a standard HTTP client. But oh well.

But I refuse to back down. Reload your certs and your secrets. If we encounter a situation in which we have to mass-revoke and mass-reissue internal certs, it'll be easy for those who do. I don't have time for everyone else.

donnachangstein 3 days ago | parent | next [-]

> Our internally provided certs of various CAs have a TTL of 72 hours and should be renewed every 48 hours.

Do you promise to come back and tell us the story about when someone went on vacation and the certs issued on a Thursday didn't renew over the weekend and come Monday everything broke and no one could authenticate or get into the building?

kam 3 days ago | parent | next [-]

At least that sounds like it would be a more interesting story than the one where the person who quit a year ago didn't document all the places they manually installed the 2-year certificate.

tetha 3 days ago | parent | prev | next [-]

I will. We've been betting Postgres connectivity for a few hundred applications on this over the past three years. If this fucks up, it'll be known without me.

donnachangstein 3 days ago | parent [-]

I'm curious what requirement drove you to such arbitrarily small TTL, other than "because we can" dick-measuring geekery.

I applaud you for sticking to your guns though.

tetha 3 days ago | parent [-]

At the end of the day, we were worried about exactly these issues - if an application has to reload certs once every 2 years, it will always end up a mess.

And the conventional wisdom for application management and deployments is: if it's painful, do it more often. This way, applications in the container infrastructure are forced to get certificate deployment and reloading right on day 1.

And yes, some older applications that were migrated to the infrastructure went ahead and loaded their credentials and certificates for other dependencies into their database or something like that, and then ended up confused when this didn't work at all. Now it's fixed.

wbl 3 days ago | parent | prev [-]

Why would the cert renewal be manual?

alexchamberlain 3 days ago | parent [-]

That's how it used to be done. Buy a certificate with a 2 year expiry and manually install it on your server (you only had 1; it was fine).

progmetaldev 3 days ago | parent [-]

I can tell you that there are still quite a few of us out here doing the once-a-year manual renewal. I have suggested a plan to use Let's Encrypt with automated renewal, but some companies are using old technology and/or old processes that "seniors" are comfortable with because they understand them, and suggesting a better process isn't always looked upon favorably (especially if your job relies on the manual renewal process as one of those cryptic things only IT can do).

tptacek 3 days ago | parent | prev | next [-]

Some of this rhymes with Colm MacCárthaigh's case against mTLS.

https://news.ycombinator.com/item?id=25380301

OptionOfT 3 days ago | parent | prev [-]

This has been our issue too. We've had mandates for rotating OAuth secrets (client ID & client secret).

Except there are no APIs to rotate those. The infrastructure doesn't exist yet.

And refreshing those automatically does not validate ownership, unlike certificates where you can do a DNS check or an HTTP check.

Microsoft has some technology where, alongside these tokens, there is also a per-machine certificate that is used to sign requests, and those certificates can't leave the machine.

parliament32 3 days ago | parent [-]

We've also felt the pain for OAuth secrets. Current mandates for us are 6 months.

Because we run on Azure / AKS, switching to federated credentials ("workload identities") with the app registrations made most of the pain go away because MS manages all the rotations (3 months) etc. If you're on managed AKS the OIDC issuer side is also automagic. And it's free. I think GCP offers something similar.

https://learn.microsoft.com/en-us/entra/workload-id/workload...

rlpb 3 days ago | parent | prev | next [-]

Browsers don't design for internal use though. They insist on HTTPS for various things that are intranet-only, such as some browser APIs, PWAs, etc.

akerl_ 3 days ago | parent [-]

As is already described by the comment thread we're replying in, "internal use" and "HTTPS" are very compatible. Corporations can run an internal CA, sign whatever internal certs they want, and trust that CA on their devices.

franga2000 3 days ago | parent | next [-]

You use the terms "internal use" and "corporations" like they're interchangeable, but that's definitely not the case. Lots of small businesses, other organizations, or even individuals want to have some internal services, and having to "set up" a CA and add the certs to all client devices just to access some app on the local network is absurd!

akerl_ 3 days ago | parent | next [-]

The average small business in 2025 is not running custom on-premise infrastructure to solve their problems. Small businesses are paying vendors to provide services, sometimes in the form of on-premise appliances but more often in the form of SaaS offerings. And I'm happy to have the CAB push those vendors to improve their TLS support via efforts like this.

Individuals are in the same boat: if you're running your own custom services at your house, you've self-identified as being in the amazingly small fraction of the population with both the technical literacy and desire to do so. Either set up LetsEncrypt or run your own ACME service; the CAB is making clear here and in prior changes that they're not letting the 1% hold back the security bar for everybody else.

JimBlackwood 3 days ago | parent | prev | next [-]

I don't think it's absurd, and personally it feels easier to set up an internal CA than some of the alternatives.

In the hackiest of setups, it's a few commands to generate a CA and issue a wildcard certificate for everything. Then a single line in the bootstrap script or documentation for new devices to trust the CA and you're done.
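For illustration, the hacky "few commands" version might look like this (all names, domains, and lifetimes are placeholders, not a recommendation):

```shell
# Mint a throwaway internal CA (10-year root, example subject).
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -keyout ca.key -out ca.crt -subj "/CN=Example Internal CA"

# Create a key and CSR for a wildcard covering internal hosts.
openssl req -newkey rsa:2048 -nodes \
  -keyout wild.key -out wild.csr -subj "/CN=*.internal.example"

# Sign the CSR with the CA, adding the SAN browsers actually check.
printf "subjectAltName=DNS:*.internal.example\n" > wild.ext
openssl x509 -req -in wild.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 825 -out wild.crt -extfile wild.ext

# Sanity-check the chain; ca.crt is what new devices need to trust.
openssl verify -CAfile ca.crt wild.crt
```

The bootstrap one-liner then just installs `ca.crt` into each device's trust store (the exact command differs per OS and browser).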

Going a few steps further, setting up something like HashiCorp Vault is not hard, and regardless of org size you need to do secret distribution somehow.

lucb1e 3 days ago | parent | next [-]

> it's a few commands to generate a CA

My dad still calls my terminals a "DOS window" and doesn't understand why I don't use GUIs like a normal person. He has his own business. He absolutely cannot just roll out a CA for secure comms with his local printer or whatever. He literally calls me to help with buying a PDF reader

Myself, I'm employed at a small business and we're all as tech savvy as it gets. It took me several days to set it up on secure hardware (smartcard, figuring out compatibility and broken documentation), making sure I understand what all the options do and that it's secure for years to come and whatnot, working out what the procedure for issuing should be, etc. Eventually got it done, handed it over to the higher-up who gets to issue certs, distribute the CA cert to everyone... it's never used. We have a wiki page with TLS and SSH fingerprints

JimBlackwood 3 days ago | parent | next [-]

> My dad still calls my terminals a "DOS window" and doesn't understand why I don't use GUIs like a normal person. He has his own business. He absolutely cannot just roll out a CA for secure comms with his local printer or whatever. He literally calls me to help with buying a PDF reader

This is fair. I assumed all small businesses would be tech startups, haha.

Retric 3 days ago | parent | prev [-]

The vast majority of companies operate just fine without understanding anything about building codes or vehicle repair etc.

Paying experts (Ed: to set up internal infrastructure) is a perfectly viable option, so the only real question is the amount of effort involved, not whether random people know how to do something.

lucb1e 3 days ago | parent | next [-]

Paying an expert to come set up a local CA seems rather silly when you'd normally outsource operating one to the people who professionally run a CA

Retric 3 days ago | parent [-]

You’d only need internal certificates if someone had set up internal infrastructure. Expecting that person to do a good job means having working certificates be they internal or external.

nilslindemann 3 days ago | parent | prev [-]

> Paying experts is a perfectly viable option

Congrats for securing your job by selling the free internet and your soul.

Retric 3 days ago | parent [-]

I’m not going to be doing this, but I care about knowledge being free not labor or infrastructure.

If someone doesn’t want to learn then nobody needs to help them for free.

3 days ago | parent [-]
[deleted]
disiplus 3 days ago | parent | prev | next [-]

We have this. It's not trivial for a small team, and you have to deal with stuff like conda envs coming with their own set of certs, so you have to take care of that. It's better than the alternative of fighting with browsers, but it's still not without extra complexity.

JimBlackwood 3 days ago | parent [-]

For sure, nothing is without extra complexity. But, to me, it feels like additional complexity for whoever does DevOps (where I think it should be) and takes away complexity from all other users.

3 days ago | parent | prev | next [-]
[deleted]
msie 3 days ago | parent | prev [-]

Wow, amazing how out of touch this is.

JimBlackwood 3 days ago | parent [-]

Can you explain? I don't see why

Henchman21 3 days ago | parent [-]

You seem to think every business is a tech startup and is staffed with competent engineers.

Perhaps spend some time outside your bubble? I’ve read many of your comments and you do seem to be caught in your own little world. “Out of touch” is apt and you should probably reflect on that at length.

JimBlackwood 3 days ago | parent [-]

> You seem to think every business is a tech startup and is staffed with competent engineers.

If we’re talking about businesses hosting services on some intranet and concerned about TLS, then yes, I assume it’s either a tech company or they have at least one competent engineer to host these things. Why else would the question be relevant?

> “Out of touch” is apt and you should probably reflect on that at length.

That’s a very weird personal comment based on a few comments on a website that’s inside a tech savvy bubble. Most people here work in IT, so I talk as if most people here work in IT. If you’re a mechanic at a garage or a lawyer at a law firm, I wouldn’t tell you rolling your own CA is easy and just a few commands.

Henchman21 2 days ago | parent [-]

You know, your perspective is valuable; I often operate as if the context is “all people everywhere”, which is rarely true and is definitely not true here. So I will take the error as mine and thank you for pointing it out :)

acedTrex 3 days ago | parent | prev [-]

Sounds like there is a market for a browser that is intranet-only and doesn't do various checks

jillyboel 3 days ago | parent | next [-]

Good luck getting that distributed everywhere including the iOS app store and random samsung TVs that stopped receiving updates a decade ago.

Not to mention the massive undertaking that even just maintaining a multi-platform chromium fork is.

JimBlackwood 3 days ago | parent | prev [-]

Why would you want this? Then on production, you'll run into issues you did not encounter on staging because you skipped various checks.

jillyboel 3 days ago | parent | prev | next [-]

Getting my parents to add a CA to their android, iphone, windows laptop and macbook just so they can use my self hosted nextcloud sounds like an absolute nightmare.

The nightmare only intensifies for small businesses that allow their users to bring their own devices (yes, yes, sacrilege but that is how small businesses operate).

Not everything is a massive enterprise with an army of IT support personnel.

crote 3 days ago | parent | next [-]

Rolling out LetsEncrypt for a self-hosted Nextcloud instance is absolutely trivial. There are many reasons corporations might want to roll their own internal CA, but simple homelab scenarios like these couldn't be further from them.

jillyboel 3 days ago | parent | next [-]

Sure, which is what I do. But the point is that this is very much internal use and rolling my own CA for it is a nightmare.

GabeIsko 3 days ago | parent | prev [-]

Would you suggest something? I do this, but I'm not sure I would call maintaining my setup trivial. I got in trouble recently because my domain registrar deprecated an API call, and that ended up being the straw that broke the camel's back for my automation setup. Or at least it did 90 days later.

andrewmackrodt 3 days ago | parent [-]

I'm not a Nextcloud user, but I have a homelab and use Traefik for my reverse proxy, which is configured to use Let's Encrypt DNS challenges to issue wildcard certificates. I use Cloudflare's free plan to manage DNS for my domains, although the registrar is different. This has been a set-it-and-forget-it solution for the last several years.
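As a sketch, the static-config side of that setup looks roughly like this (resolver name, email, and storage path are placeholders; the Cloudflare provider reads its API token from the environment):

```yaml
# traefik.yml (static configuration)
certificatesResolvers:
  letsencrypt:
    acme:
      email: admin@example.com
      storage: /letsencrypt/acme.json
      dnsChallenge:
        provider: cloudflare   # expects CF_DNS_API_TOKEN in the environment
```

Routers then opt in with `certResolver: letsencrypt`, and can request a wildcard via the router's `tls.domains` option (e.g. `main: example.com`, `sans: ["*.example.com"]`).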

GabeIsko 2 days ago | parent [-]

Let's Encrypt cert renewal comes out of the box on traefik? I haven't kept up with it. I'm on a similar set and forget schedule with configured nginx and some crowdsec stuff, but the API change ended up killing off an afternoon of my time.

andrewmackrodt 7 hours ago | parent [-]

Yep, it supports ACME (Let's Encrypt) out of the box and many DNS providers too. I mainly use Namecheap as my registrar but configure Cloudflare as my DNS provider; I find this easier from a configuration perspective, and CF APIs have been stable for me so far.

Traefik (by default) will attempt certificate renewal 30 days before expiry. Perhaps the defaults will change if the lifetime becomes 45 days. I don't think it's possible to override this value without adjusting the certificate expiry days, but I've never felt the need to adjust it.

mysteria 3 days ago | parent | prev | next [-]

I actually do this for my homelab setup. Everyone basically gets the local CA installed for internal services as well as a client cert for RADIUS EAP-TLS and VPN authentication. Different devices are automatically routed to the correct VLAN and the initial onboarding doesn't take that long if you're used to the setup. Guests are issued a MSCHAP username and password for simplicity's sake.

For internal web services I could use just Let's Encrypt but I need to deploy the client certs anyways for network access and I might as well just use my internal cert for everything.

jillyboel 3 days ago | parent [-]

Personally I'd absolutely refuse to install your CA as your guest. That would give you far too much power to mint certificates for sites you have no business snooping on.

mysteria 3 days ago | parent [-]

Guests don't install my CA as they don't need to access my internal services. If I wanted to set up an internal web server that's accessible to both guests and family members I'd use Let's Encrypt for that.

richardwhiuk 3 days ago | parent | prev [-]

Why are your parents on a corporations internal network?

jillyboel 3 days ago | parent [-]

What corporation are you talking about? Have you never heard of someone self hosting software for their family and friends? You know, an intranet.

smw 3 days ago | parent | next [-]

Just buy a domain and use dns verification to get real certs for whatever internal addresses you want to serve? Caddy will trivially go get certs for you with one line of config

Or cheat and use tailscale to do the whole thing.
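A hedged sketch of that Caddy setup (domain and backend port are placeholders; the DNS-challenge directive requires a Caddy build that includes the matching DNS provider plugin, e.g. caddy-dns/cloudflare):

```
nextcloud.example.com {
	tls {
		dns cloudflare {env.CF_API_TOKEN}
	}
	reverse_proxy 127.0.0.1:8080
}
```

With the DNS challenge, the host never needs to be reachable from the internet; Caddy obtains and renews the cert on its own.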

DiggyJohnson 3 days ago | parent | prev [-]

Self-hosting doesn’t usually imply connecting over a private network.

stefan_ 3 days ago | parent | prev | next [-]

Do I add the root CA of my router manufacturer so I can visit its web interface on my internal network without having half the page functionality broken because of overbearing browser manufacturers who operate the "web PKI" as a cartel? This nowadays includes things such as basic file downloads.

3 days ago | parent [-]
[deleted]
ClumsyPilot 3 days ago | parent | prev | next [-]

> Corporations can run an internal CA

Having just implemented an internal CA, I can assure you, most corporations can’t just run an internal CA. Some struggle to update containers and tie their shoe laces.

lxgr 3 days ago | parent | prev | next [-]

Yeah, but essentially every home user can only do so after jumping through extremely onerous hoops (many of which also decrease their security when browsing the public web).

I’ve done it in the past, and it was so painful, I just bit the bullet and started accessing everything under public hostnames so that I can get auto-issued Letsencrypt certificates.

rlpb 3 days ago | parent | prev [-]

Indeed they are compatible. However, HTTPS is often unnecessary, particularly in a smaller organisation, but browsers mandate significant unnecessary complexity there. In that sense, browsers are not suited to this use in those scenarios.

freeopinion 3 days ago | parent | next [-]

If only browsers could understand something besides HTTPS. Somebody should invent something called HTTP that is like HTTPS without certificates.

recursive 3 days ago | parent | next [-]

Cool. And when they invent it, it should have browser parity with respect to which API features and capabilities are available, so that we don't need to use HTTPS just so things like `getUserMedia` work.

https://www.digicert.com/blog/https-only-features-in-browser...

noselasd 3 days ago | parent | prev | next [-]

There’s enough APIs limited to secure contexts that many internal apps become unfeasible.

SoftTalker 3 days ago | parent | prev [-]

Modern browsers default to trying https first.

tedivm 3 days ago | parent | prev | next [-]

I really don't see many scenarios where HTTPS isn't needed for at least some internal services.

donnachangstein 3 days ago | parent [-]

Then, I'm afraid, you work in a bubble.

A static page that hosts documentation on an internal network does not need encryption.

The added overhead of certificate maintenance (and investigating when it does and will break) is simply not worth the added cost.

Of course the workaround most shops do nowadays is just hide the HTTP servers behind a load balancer doing SSL termination with a wildcard cert. An added layer of complexity (and now single point of failure) just to appease the WebPKI crybabies.

progmetaldev 3 days ago | parent | next [-]

Unfortunately, for a small business, there are many software packages that can cause all sorts of havoc on an internal network, and are simple to install. Even just ARP cache poisoning on an internal network can force everyone offline, while even a reboot of all equipment can not immediately fix the problem. A small company that can't handle setting up a CA won't ever be able to handle exploits like this (and I'm not saying that a small company should be able to setup their own CA, just commenting on how defenseless even modern networks are to employees that like to play around or cause havoc).

Of course, then there are the employees who could just intercept HTTP requests and modify them to include a payload to root an employee's machine. There is so much software out there that can destroy trust in a network, and it's literally download and install, then point and click, with no knowledge. Seems like there is a market for simple and cheap solutions for internal networks for small business. I could see myself making quite a bit off it, which I did in the mid-2000s, but I can't stand doing sales any more in my life, and dealing with support is a whole issue on its own even with an automated solution.

imroot 3 days ago | parent | prev | next [-]

What overhead?

Just about every web server these days supports ACME -- some natively, some via scripts, and you can set up your own internal CA using something like step-ca that speaks ACME if you don't want your certs going out to the transparency log.

The last few companies I've worked at had no http behind the scenes -- everything, including service-to-service communications was handled via https. It's a hard requirement for just about everything financial, healthcare, and sensitive these days.

donnachangstein 3 days ago | parent [-]

> What overhead?

[proceeds to describe a bunch of new infrastructure and automation you need to set up and monitor]

So when ACME breaks - which it will, because it's not foolproof - the server securely hosting the cafeteria menus is now inaccessible, instead of being susceptible to interception or modification in transit. Because the guy that has owned your core switches is most concerned that everyone will be eating taco salad every day.

brendoelfrendo 3 days ago | parent | prev | next [-]

Sure it does! You may not need confidentiality, but what about integrity?

donnachangstein 3 days ago | parent [-]

It's a very myopic take.

Someone that has seized control of your core network such that they were capable of modifying traffic, is not going to waste precious time or access modifying the flags of ls on your man page server. They will focus on more valuable things.

Just because something is possible in theory doesn't make it likely or worth the time invested.

You can put 8 locks on the door to your house but most people suffice with just one.

Someone could remove a piece of mail from your unlocked rural mailbox, modify it and put it back. Do you trust the mail carrier as much as the security of your internal network?

But it's not really a concern worth investing resources into for most.

growse 3 days ago | parent [-]

> Someone that has seized control of your core network such that they were capable of modifying traffic, is not going to waste precious time or access modifying the flags of ls on your man page server. They will focus on more valuable things.

Ah, the "both me and my attackers agree on what's important" fallacy.

What if they modify the man page response to include drive-by malware?

tedivm 3 days ago | parent | prev [-]

I'm afraid you didn't read my response. I explicitly said I can't see a case where it isn't needed for some services. I never said it was required for every service. Once you've got it setup for one thing it's pretty easy to set it up everywhere (unless you're manually deploying, which is an obvious problem).

therealpygon 3 days ago | parent | prev [-]

And it is even more trivial in a small organization to install a Trusted Root for internally signed certificates on their handful of machines. Laziness isn’t a browser issue.

rlpb 3 days ago | parent [-]

How is that supposed to work for an IoT device that wants to work out of the box using one of these HTTPS-only browser APIs?

metanonsense 3 days ago | parent [-]

I am not saying I‘d do this, but in theory you could deploy a single reverse proxy in front of your HTTP-only devices and restrict traffic accordingly.

Spooky23 3 days ago | parent | prev | next [-]

Desired by who?

There’s nothing stopping Apple and Google from issuing themselves certificates every 10 minutes. I get no value for doing this. Building out or expanding my own PKI for my company or setting up the infrastructure to integrate with Digicert or whomever gets me zero security and business value, just cost and toil.

Revocation is most often an issue when CAs fuck up. So now we collectively need to pay to cover their rears.

crote 3 days ago | parent [-]

CAs fucking up every once in a while is inevitable. It is impossible to write guaranteed bug-free software or train guaranteed flawless humans.

The big question is what happens when (not "if") that happens. Companies have repeatedly shown that they are unable to rotate certs in time, to the point of even suing CAs to avoid revocation. They've been asked nicely to get their shit together, and it hasn't happened. Shortening cert lifetime to force automation is the inevitable next step.

Spooky23 3 days ago | parent [-]

Silly me, I’m just a customer, incapable of making my own risk assessments or prioritizing my business processes.

You’re portraying people suing CAs to get injunctions to avoid outages as clueless or irresponsible. The fact is Digicert’s actions, dictated by this CA/Browser forum were draconian and over the top responses to a minor risk. This industry trade group is out of control.

End of the day, we’re just pushing risk around. Running a quality internal PKI is difficult.

christina97 3 days ago | parent | prev | next [-]

What do you mean “WebPKI … would like”. The browser vendors want one thing (secure, ubiquitous, etc), the CAs want a very different thing (expensive, confusing, etc)…

ozim 4 days ago | parent | prev [-]

Problem is browsers will most likely follow the enforcement of short certificates so internal sites will be affected as well.

Non-browser things usually don’t care even if the cert is expired or untrusted.

So I expect people still to use WebPKI for internal sites.

akerl_ 3 days ago | parent | next [-]

The browser policies are set by the same entities doing the CAB voting, and basically every prior change around WebPKI has only been enforced by browsers for CAs in the browser root trust stores. Which is exactly what's defined in this CAB vote as well.

Why would browsers "most likely" enforce this change for internal CAs as well?

ryao 4 days ago | parent | prev | next [-]

Why would they? The old certificates will expire and the new ones will have short lifespans. Web browsers do not need to do anything.

That said, it would be really nice if they supported DANE so that websites do not need CAs.

nickf 4 days ago | parent | prev [-]

'Most likely' - with the exception of Apple enforcing 825-day maximum for private/internal CAs, this change isn't going to affect those internal certificates.

jiggawatts 3 days ago | parent | prev | next [-]

I just got a flashback to trying to automate the certificate issuance process for some ESRI ArcGIS product that used an RPC configuration API over HTTPS to change the certificate.

So yes, you had to first ignore the invalid self-signed certificate while using HTTPS with a client tool that really, really didn't want to ignore the validity issue, then upload a valid certificate, restart the service... which would terminate the HTTPS connection with an error breaking your script in a different not-fun way, and then reconnect... at some unspecified time later to continue the configuration.

Fun times...

rsstack 4 days ago | parent | prev | next [-]

> I've seen most of them moving to internally signed certs

Isn't this a good default? No network access, no need for a public certificate, no need for a certificate that might be mistakenly trusted by a public (non-malicious) device, no need for a public log for the issued certificate.

pavon 4 days ago | parent [-]

Yes, but it is a lot more work to run an internal CA and distribute that CA cert to all the corporate clients. In the past getting a public wildcard cert was the path of least resistance for internal sites - no network access needed, and you aren't leaking much info into the public log. That is changing now, and like you said it is probably a change for the better.

pkaye 4 days ago | parent | next [-]

What about something like step-ca? I got the free version working easily on my home network.

https://smallstep.com/docs/step-ca/

simiones 3 days ago | parent [-]

Not everything that's easy to do on a home network is easy to do on a corporate network. The biggest problem with corporate CAs is how to issue new certificates for a new device in a secure way, a problem which simply doesn't exist on a home network, where you have one or at most a handful of people needing new certs issued.

bravetraveler 4 days ago | parent | prev [-]

> A lot more work

'ipa-client-install' for those so motivated. Certificates are literally one among many things part of your domain services.

If you're at the scale past what IPA/your domain can manage, well, c'est la vie.

Spivak 4 days ago | parent [-]

I think you're being generous if you think the average "cloud native" company is joining their servers to a domain at all. They've certainly fallen out of fashion in favor of the servers being dumb and user access being mediated by an outside system.

bravetraveler 3 days ago | parent | next [-]

Why not? The actual clouds do.

I think folks are being facetious wanting more for 'free'. The solutions have been available for literal decades, I was deliberate in my choice.

Not the average, certainly the majority where I've worked. There are at least two well-known Clouds that enroll their hypervisors to a domain. I'll let you guess which.

My point is, the difficulty is chosen... and 'No choice is a choice'. I don't care which, that's not my concern. The domain is one of those external things you can choose. Not just some VC toy. I won't stop you.

The devices are already managed; you've deployed them to your fleet.

No need to be so generous to their feigned incompetence. Want an internal CA? Managing that's the price. Good news: they buy!

Don't complain to me about 'your' choices. Self-selected problem if I've heard one.

Aside from all of this, if your org is being hung up on enrollment... I'm not sure you're ready for key management. Or the other work being a CA actually requires.

Yes, it's more work. Such is life and adding requirements. Trends - again, for decades - show organizations are generally able to manage with something.

Literal Clouds do this, why can't 'you'?

Spivak 3 days ago | parent [-]

Adding machines to a domain is far far more common on bare-metal deployments which is why I said "cloud native." Adding a bunch of cloud VMs to a domain is not very common in my experience because they're designed to be ephemeral and thrown away and IPA being stateful isn't about that.

You're managing your machine deployments with something, so of course you just use that to include your cert, which isn't particularly hard, but there's a long tail of annoying work when dealing with containers and VMs you aren't building yourself, like k8s node pools. It can be done, but it's usually less effort to just get public certs for everything.

bravetraveler 3 days ago | parent [-]

To be honest, with "cloud-init" and the ability for SSSD to send record updates, I could make a worthwhile cloudy deployment

To your point, people don't, but it's a perfectly viable path.

Containers/kubernetes, that's pipeline city, baby!

4 days ago | parent | prev [-]
[deleted]
maccard 3 days ago | parent | prev | next [-]

I’ve unfortunately seen the opposite - internal apps are now back to being deployed over VPN and HTTP

tomjen3 3 days ago | parent | prev | next [-]

I would love to do that for my homelab, but not all docker containers trust root certs from the system so getting it right would have been a bigger challenge than dns hacking to get a valid certificate for something that can’t be accessed from outside the network.

I am not willing to give credentials to alter my dns to a program. A security issue there would be too much risk.

xienze 4 days ago | parent | prev | next [-]

> but internal services with have internal CA signed certs with long expire times because of the number of crappy apps that make using certs a pain in the ass.

Introduce them to the idea of having something like Caddy sit in front of apps solely for the purpose of doing TLS termination... Caddy et al can update the certs automatically.
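A minimal sketch of that pattern, assuming a hypothetical app on localhost:8080 behind `internal.example.com`, with an internal ACME-capable CA (e.g. a smallstep instance) at a placeholder URL:

```caddyfile
internal.example.com {
	tls {
		# Point Caddy at an internal ACME directory instead of a public CA.
		# The URL is a placeholder for whatever your internal CA exposes.
		ca https://ca.internal.example.com/acme/acme/directory
	}
	# The app itself speaks plain HTTP on loopback; Caddy owns the certs
	# and renews them on its own schedule.
	reverse_proxy localhost:8080
}
```

The app never touches key material, so short-lived certs stop being its problem.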

pixl97 4 days ago | parent | next [-]

Unless they are web/tech companies they aren't doing that. Banks, finance, and large manufacturing are all terminating at F5s and AVIs. I'm pretty sure those update certs just fine, but it's not really what I do these days so I don't have a direct answer.

xienze 4 days ago | parent | next [-]

Sure. The point is, don't bother letting the apps themselves do TLS termination. Too much work that's better handled by something else.

hedora 4 days ago | parent [-]

Also, moving termination off the endpoint server makes it much easier for three letter agencies to intercept + log.

qmarchi 3 days ago | parent [-]

Most responsible orgs do TLS termination on the public side of a connection, but will still make a backend connection protected by TLS, just with an internal CA.

tikkabhuna 3 days ago | parent | prev [-]

F5s don't support ACME, which has been a pain for us.

xorcist 2 days ago | parent | next [-]

F5 sells expensive boxes intended for larger installations where you can afford not to do ACME in the external facing systems.

Giving the TLS endpoint itself the authority to manage certificates kind of weakens the usefulness of rotating certificates in the first place. You probably don't let your external facing authoritative DNS servers near zone key material, so there's no reason to let the external load balancers rotate certificates.

Where I have used F5 there was never any problem letting the backend configuration system do the rotation and upload of certificates together with every other piece of configuration that is needed for day to day operations.

cpach 3 days ago | parent | prev | next [-]

It might be possible to run an ACME client on another host in your environment. (IMHO, the DNS-01 challenge is very useful for this.) Then you can (probably) transfer the cert+key to BIG IP, and activate it, via the REST API.

I haven’t used BIG IP in a long while, so take this with a grain of salt, but it seems to me that it might not be impossible to get something going – despite the fact that BIG IP itself doesn’t have native support for ACME.

Two pointers that might be of interest:

https://community.f5.com/discussions/technicalforum/upload-l...

https://clouddocs.f5.com/api/icontrol-rest/APIRef_tm_sys_cry...
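A rough sketch of what that could look like, with all hostnames, credentials, and the DNS hook as placeholders; the exact iControl REST endpoints and payloads should be checked against the F5 docs linked above:

```sh
# 1. Issue/renew on a helper host via DNS-01 (acme.sh shown; any client works)
acme.sh --issue --dns dns_nsupdate -d app.example.com

# 2. Upload key and cert to the BIG-IP's file-transfer endpoint
for f in app.example.com.key app.example.com.cer; do
  size=$(stat -c%s "$f")
  curl -sku admin:"$PASS" \
    -H 'Content-Type: application/octet-stream' \
    -H "Content-Range: 0-$((size - 1))/$size" \
    --data-binary "@$f" \
    "https://bigip.example.com/mgmt/shared/file-transfer/uploads/$f"
done

# 3. Install the uploaded cert as a crypto object (same idea for the key)
curl -sku admin:"$PASS" -H 'Content-Type: application/json' \
  -d '{"command":"install","name":"app.example.com",
       "from-local-file":"/var/config/rest/downloads/app.example.com.cer"}' \
  https://bigip.example.com/mgmt/tm/sys/crypto/cert
```

Wrap it in a cron job or renew hook and BIG-IP gets fresh certs without ever speaking ACME itself.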

dijit 3 days ago | parent [-]

Sounds suspiciously similar to a Rube Goldberg machine.

Those tend to be quite brittle in reality. What’s the old adage about engineering vs architecture again?

Something like this I think: https://www.reddit.com/r/PeterExplainsTheJoke/comments/16141...

cpach 3 days ago | parent [-]

Obviously it would be much better if BIG IP had native support for ACME. And F5 might implement it some day, but I wouldn’t hold my breath.

For some companies, it might be worth it to throw away a $100000 device and buy something better. For others it might not be worth it.

EvanAnderson 3 days ago | parent | prev | next [-]

Exactly. According to posters here you should just throw them away and buy hardware from a vendor who does. >sigh<

Don't expect firmware / software updates to enable ACME-type functionality for tons of gear. At best it'll be treated as an excuse by vendors to make Customers forklift and replace otherwise working gear.

Corporate hardware lifecycles are longer than the proposed timeline for these changes. This feels like an ill thought-out initiative by bureaucrats working in companies who build their own infrastructure (in their white towers). Meanwhile, we plebs who work in less-than-Fortune 500 companies stuck with off-the-shelf solutions will be forced to suffer.

JackSlateur 3 days ago | parent | prev [-]

F5 is the pain.

cryptonym 4 days ago | parent | prev [-]

You now have to build and self-host a complete CA/PKI.

Or request a certificate over the public internet, for an internal service. Your hostname must be exposed to the web and will be publicly visible in transparency reports.

mox1 4 days ago | parent | next [-]

Companies have software to manage this for you. We utilize https://www.cyberark.com/products/machine-identity-security/

stackskipton 4 days ago | parent | prev | next [-]

You could always ask for a wildcard for the internal subdomain and use that instead, so you leak your internal subdomain but not individual hosts.
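For example, with certbot and a DNS plugin (the domain and credentials file are placeholders), a single wildcard keeps per-host names out of the transparency logs:

```sh
# One wildcard for the whole internal zone; only "*.internal.example.com"
# ends up in CT logs, never the individual hostnames beneath it.
certbot certonly \
  --dns-rfc2136 \
  --dns-rfc2136-credentials /etc/letsencrypt/rfc2136.ini \
  -d '*.internal.example.com'
```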

pixl97 4 days ago | parent [-]

I'm pretty sure every bank will auto fail wildcard certs these days, at least the ones I've worked with.

Key loss on one of those is like a takeover of an entire chunk of hostnames. Really opens you up.

JoshTriplett 3 days ago | parent | prev [-]

> Or request a certificate over the public internet, for an internal service. Your hostname must be exposed to the web and will be publicly visible in transparency reports.

That doesn't seem like the end of the world. It means you shouldn't have `secret-plans-for-world-takeover.example.com`, but it's already the case that secret projects should use opaque codenames. Most internal domain names would not actually leak any information of value.

lokar 3 days ago | parent | prev | next [-]

I’ve always felt a major benefit of an internal CA is making it easy to have very short TTLs

SoftTalker 3 days ago | parent | next [-]

Or very long ones. I often generate 10 year certs because then I don't have to worry about renewing them for the lifetime of the hardware.

lokar 3 days ago | parent [-]

In a production environment with customer data?

SoftTalker 3 days ago | parent [-]

No for internal stuff.

formerly_proven 3 days ago | parent | prev [-]

I'm surprised there is no authorization-certificate-based challenge type for ACME yet. That would make ACME practical to use in microsegmented networks.

The closest thing is maybe described (but not shown) in these posts: https://blog.daknob.net/workload-mtls-with-acme/ https://blog.daknob.net/acme-end-user-client-certificates/

benburkert 3 days ago | parent | next [-]

It's 100% possible today to get certs in segmented networks without a new ACME challenge type: https://anchor.dev/docs/public-certs/acme-relay

(disclaimer: i'm a founder at anchor.dev)

webprofusion 3 days ago | parent [-]

Does your hosted service know the private keys or are they all on the client?

benburkert 2 days ago | parent [-]

No, they stay on the client, our service only has access to the CSR. From our docs:

> The CSR relayed through Anchor does not contain secret information. Anchor never sees the private key material for your certificates.

bigp3t3 3 days ago | parent | prev [-]

I'd set that up the second it became available if it were a standard protocol. Just went through setting up internal certs on my switches -- it was a chore to say the least! With a Cert Template on our internal CA (Windows), at least we can automate things well enough!

formerly_proven 3 days ago | parent [-]

Yeah it's almost weird it doesn't seem to exist, at least publicly. My megacorp created their own protocol for this purpose (though it might actually predate ACME, I'm not sure), and a bunch of in-house people and suppliers created the necessary middlewares to integrate it into stuff like cert-manager and such (basically everything that needs a TLS certificate and is deployed more than thrice). I imagine many larger companies have very similar things, with the only material difference being different organizational OIDs for the proprietary extension fields (I found it quite cute when I learned that the corp created a very neat subtree beneath its organization OID).

Pxtl 3 days ago | parent | prev | next [-]

At this point I wish we could just get all our clients to say "self-signed is fine if you're connecting to a .LOCAL domain name". https is intrinsically useful over raw http, but the overhead of setting up centralized certs for non-public domains is just dumb.

Give us a big global *.local cert we can all cheat with, so I don't have to blast my credentials in the clear when I log into my router's admin page.

shlant 3 days ago | parent | prev | next [-]

this is exactly what I do because mongo and TLS is enough of a headache. I am not dealing with rotating certificates regularly on top of that for endpoints not exposed to the internet.

SoftTalker 3 days ago | parent [-]

Yep, letsencrypt is great for public-facing web servers, but for stuff that isn't a web server or doesn't allow outside queries, none of that "easy" automation works.

procaryote 3 days ago | parent | next [-]

Acme dns challenge works for things that aren't webservers.

For the other case, perhaps renew the cert on a host that is allowed to make outside queries for the DNS challenge, and find some acceptable automated way to propagate the updated cert to the host that isn't.
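One way to wire that up, sketched with certbot's deploy hook (hostnames and paths are placeholders, and the hook assumes SSH access from the renewing host to the isolated one):

```sh
# Runs on a host that CAN reach the DNS API. After a successful renewal,
# the deploy hook copies the new cert to the isolated box and reloads it.
certbot certonly \
  --dns-rfc2136 \
  --dns-rfc2136-credentials /etc/letsencrypt/rfc2136.ini \
  -d internal-svc.example.com \
  --deploy-hook 'scp /etc/letsencrypt/live/internal-svc.example.com/fullchain.pem \
                     /etc/letsencrypt/live/internal-svc.example.com/privkey.pem \
                     deploy@internal-svc:/etc/ssl/svc/ && \
                 ssh deploy@internal-svc systemctl reload nginx'
```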

Yeroc 3 days ago | parent | next [-]

Last time I checked there's no standardized API/protocol to deal with populating the required TXT records on the DNS side. This is all fine if you've out-sourced your DNS services to one of the big players with a supported API but if you're running your own DNS services then doing automation against that is likely not going to be so easy!

icedchai 3 days ago | parent | next [-]

I run my own DNS servers (BIND 9.x) and use an rfc2136 plugin to handle TXT records. It works fine. See https://cert-manager.io/docs/configuration/acme/dns01/rfc213...
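Per the cert-manager docs linked above, the solver config amounts to something like this (nameserver address, key names, and the TSIG secret are placeholders for your BIND setup):

```yaml
# Sketch of a cert-manager Issuer using the rfc2136 dynamic-update solver
# against a self-hosted BIND server.
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-rfc2136
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: hostmaster@example.com
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
      - dns01:
          rfc2136:
            nameserver: 10.0.0.53:53
            tsigKeyName: certmanager-key
            tsigAlgorithm: HMACSHA512
            tsigSecretSecretRef:
              name: bind-tsig-secret
              key: tsig-key
```

BIND just needs an `update-policy` granting that TSIG key write access to the `_acme-challenge` TXT records.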

procaryote 2 days ago | parent | prev [-]

One pretty easy way to do it while running your own DNS is to put the zone files, or some input that you can build to zone files, in version control.

There are lots of systems that allow you to set rules for what is required to merge a PR, so if you want "the tests pass, it's a TXT record, the author is whitelisted to change that record" or something, it's very achievable

SoftTalker 3 days ago | parent | prev [-]

I don't have an API or any permission to add TXT records to my DNS. That's a support ticket and has about a 24-hour turnaround best case.

Yeroc 3 days ago | parent | next [-]

I was just digging into this a bit and discovered acme.sh supports something called DNS alias mode (https://github.com/acmesh-official/acme.sh/wiki/DNS-alias-mo...) which lets you add a static CNAME record on your locked-down domain that delegates validation to a second domain. You could then set up that second domain with a DNS API (if permitted by company policy!)
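So the 24-hour support ticket happens exactly once, to create the delegation record; everything after that is automated in a zone you control. Roughly (domains are placeholders):

```sh
# One-time record in the locked-down zone (the support ticket):
#
#   _acme-challenge.app.example.com.  CNAME  _acme-challenge.acme-zone.example.net.
#
# From then on, validation TXT records land in acme-zone.example.net,
# which you DO have API access to:
acme.sh --issue -d app.example.com \
  --challenge-alias acme-zone.example.net \
  --dns dns_nsupdate
```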

immibis 3 days ago | parent | prev | next [-]

Is this just because your DNS is with some provider, or is it something that leads from your organizational structure?

If it's just because your DNS is at a provider, you should be aware that it's possible to self-host DNS.

SoftTalker 3 days ago | parent [-]

It’s internal policy. We do run our own DNS.

procaryote 2 days ago | parent [-]

But that's pretty much self-inflicted damage.

JackSlateur 3 days ago | parent | prev | next [-]

You have people paid to create DNS records ? Haha

dijit 3 days ago | parent | next [-]

it’s not practical to give everyone write access to the google.com root zone.

Someone will fuck up accidentally, so production zones are usually gated somehow, sometimes with humans instead of pure automata.

JackSlateur 3 days ago | parent [-]

Why not ?

Giving write access does not mean giving unrestricted write access.

Also, another way (which I built at a previous company) is to create a simple certificate provider (an API or whatever), integrated with whatever internal authentication scheme you are using, that can sign CSRs for you. An LE proxy, as you might call it.

SoftTalker 3 days ago | parent | prev [-]

Yes we do. That’s not the only thing they do of course.

xorcist 2 days ago | parent [-]

It also sounds like the right people to handle certificate issuance?

If you are not in a good position in the internal organization to control DNS, you probably shouldn't handle certificate issuance either. It makes sense to have a specific part of the organization responsible.

procaryote 3 days ago | parent | prev [-]

That's not great, sorry to hear

bsder 3 days ago | parent | prev [-]

And may the devil help you if you do something wrong and accidentally trip LetsEncrypt's rate limiting.

You can do nothing except twiddle your thumbs while it times out and that may take a couple of days.

JackSlateur 3 days ago | parent | prev [-]

Haa, yes ! We have that, too ! Accepted warning in browsers ! curl -k ! verify=False ! Glorious future to the hacking industry !