Once we cross the threshold of "I absolutely have to automate everything or it's not viable to use TLS anymore", why do we care about providing anything beyond ~48 hours? I am willing to bet money this threshold will never be crossed.
This feels like much more of an ideological mission than a practical one, unless I've missed some monetary/power advantage to forcing everyone to play musical chairs with their entire infra once a month...
Let's Encrypt has always self-imposed a 90 day limit, though of course with this ballot passing we will now have to reduce that to 47 days or fewer in the future.
Shorter lifetimes have several advantages:
1. Reduced pressure on the revocation system. For example, if a domain changes hands, then any previous certificates spend less time in the revoked state. That makes CRLs smaller, a win for everyone involved.
2. Reduced risk for certificates which aren't revoked but should have been, perhaps because a domain holder didn't know that a previous holder of that domain had it, or an attack of any sort that led to a certificate being issued that wasn't desired.
3. For fully short-lived certs (under 7 days), many user-agents don't do revocation checks at all, because that's a similar timeline to our existing revocation technology taking effect. This is a performance win for websites/user-agents. While we advocate for full certificate automation, I recognize there are cases where that's not so easy, and doing a monthly renewal may be much more tractable.
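The CRL-size argument in point 1 can be made concrete with a back-of-the-envelope model. A revoked cert only stays on the CRL until it expires, so the steady-state CRL size scales linearly with certificate lifetime (the revocation rate below is an invented illustrative number, not any real CA's figure):

```python
# Back-of-the-envelope: a revoked cert drops off the CRL once it expires,
# so steady-state CRL size ~ (revocations per day) x (cert lifetime).
def revoked_population(revocations_per_day, lifetime_days):
    """Approximate number of entries on a CRL at steady state."""
    return revocations_per_day * lifetime_days

# Hypothetical CA revoking ~1,000 certs/day:
print(revoked_population(1000, 398))  # 398-day certs -> ~398,000 entries
print(revoked_population(1000, 47))   # 47-day certs  -> ~47,000 entries
```

Same revocation rate, nearly a 10x smaller CRL, purely from the lifetime change.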
Going to shorter than a few days is a reliability and scale risk. One of the biggest issues with scale today is that Certificate Transparency logs, while providing great visibility into what certs exist (see points 1 and 2), will have to scale up significantly as lifetimes are cut.
Why is this happening now, though? I can't speak for everyone, and this is only my own opinion on what I'm observing, but: one big industry problem over the last year or two is that CAs have found themselves in situations where they need to revoke certificates because of issues with those certificates, but customers aren't able to respond on an appropriate timeline. So the big motivation for a lot of the parties here is to get these timelines down and really drive a push towards automation.
There are security benefits, yes. But as someone that works in infrastructure management, including on 25 or 30 year old systems in some cases, it's very difficult to not find this frustrating. I need tools I will have in 10 years to still be able to manage systems that were implemented 15 years ago. That's reality.
Doubtless people here have connected to their router's web interface using the gateway IP address and been annoyed that the web browser complains so much about either insecure HTTP or an unverified TLS certificate. The Internet is an important part of computer security, but it's not the only part of computer security.
I wish technical groups would invest some time in real solutions for long-term, limited access systems which operate for decades at a time without 24/7 access to the Internet. Part of the reason infrastructure feels like running Java v1.3 on Windows 98 is because it's so widely ignored.
The crazy thing? There are already two Wi-Fi QR code standards, but neither includes the CA cert. There's a "Wi-Fi Easy Connect" standard that is intended to secure the network for an enterprise, and there's a random Java QR code library that made its own standard for just encoding an access point and WPA shared key (Android and iOS both adopted it, so now it's a de-facto standard).
End-user security wasn't a consideration for either of them. With the former they only cared about protecting the enterprise network, and with the latter they just wanted to make it easier to get onto a non-Enterprise network. The user still has to fend for themselves once they're on the network.
It might be easier to extend the URL format with support for certificate fingerprints. It would only require support in web browsers, which are updated much faster than operating systems. It could also be made in a backwards compatible way, for example by extending the username syntax. That way old browsers would continue to show the warning and new browsers would accept the self signed URL format in a secure way.
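As a sketch of what that extended URL syntax could look like (the `sha256-` userinfo prefix here is entirely hypothetical, not any existing standard), note that existing URL parsers already carry the userinfo component through unchanged:

```python
from urllib.parse import urlsplit

def parse_pinned_url(url):
    """Hypothetical scheme: a certificate fingerprint rides in the
    userinfo part of the URL. Old browsers would keep showing their usual
    warning; a new browser could check the served cert against the pin."""
    parts = urlsplit(url)
    user = parts.username or ""
    if user.startswith("sha256-"):
        return parts.hostname, user[len("sha256-"):]
    return parts.hostname, None

host, pin = parse_pinned_url("https://sha256-ab12cd34ef@192.168.1.1/admin")
print(host, pin)  # 192.168.1.1 ab12cd34ef
```

Because the userinfo syntax is already legal in URLs, the scheme degrades gracefully: an old browser just treats it as an unverified host and warns as before.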
They only stopped using global default passwords because people were being visibly compromised on the scale of millions at a time.
Now for issuing certs to devices like your router, there’s a registration process where the device generates a key and requests a cert from the CA, presenting its public key. It requests a cert with a local name like “router.local”. No cert is issued but the CA displays a message on its front panel asking if you want to associate router.local with the displayed pubkey fingerprint. Once you confirm, the device can obtain and auto renew the cert indefinitely using that same public key.
Now on your computer, you can hit local https endpoints by name and get TLS with no warnings. In an ideal world you’d get devices to adopt a little friendly UX for choosing their network name and showing the pubkey to the user, as well as discovering the CA (maybe integrate with dhcp), but to start off you’d definitely have to do some weird hacks.
It also helps that I know exactly how easy it is to build this type of infrastructure because I have built it professionally twice.
Training users to click the scary “trust this self-signed certificate once/always” button won’t end well.
Yes, it's possible that the system is compromised and it's redirecting all traffic to a local proxy and that it's also malicious.
It's still absurd to think that the web browser needs to make the user jump through the same hoops because of that exceptional case, while having the same user experience as if you just connected to https://bankofamerica.com/ and the TLS cert isn't trusted. The program should be smarter than that, even if it's a "local network only" mode.
That's their "solution".
Such a certificate should not be trusted for domain verification purposes, even though it should match the domain. Instead it should be trusted for encryption / stream integrity purposes. It should be accepted on IPs outside of publicly routable space, like 192.168.0.0/16, or link-local IPv6 addresses. It should be possible to issue it for TLDs like .local. It should result in the usual invalid-certificate warning if served off a public internet address.
In other words, it should be handled a bit like a self-signed certificate, only without the hassle of adding your handcrafted CA to every browser / OS.
Of course it would only make sense if a major browser would trust this special CA in its browser by default. That is, Google is in a position to introduce it. I wonder if they may have any incentive though. (To say nothing of Apple.)
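The address-scoping rule proposed above maps directly onto existing address classifications. A sketch of the acceptance check a browser might apply, using Python's stdlib `ipaddress` module (the function name is mine, for illustration):

```python
import ipaddress

def local_ca_cert_acceptable(ip):
    """Sketch of the proposed policy: certs from the special local CA are
    only honored on non-publicly-routable addresses; anywhere else they
    get the usual invalid-certificate warning."""
    addr = ipaddress.ip_address(ip)
    return addr.is_private or addr.is_link_local

print(local_ca_cert_acceptable("192.168.1.1"))  # True  (RFC 1918)
print(local_ca_cert_acceptable("fe80::1"))      # True  (IPv6 link-local)
print(local_ca_cert_acceptable("8.8.8.8"))      # False (public internet)
```

The hard part isn't the check, it's getting a browser vendor to ship the special CA and the policy around it.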
So in a way, a certificate the device generates and self-signs would actually be better, since at least the private key stays on the device and isn’t shared.
The private key of course stays within the device, or anywhere the certificate is generated. The idea is that the CA from which the certificate is derived is already trusted by the browser, in a special way.
Then why use encryption at all when your threat model for encrypted communication can't handle a malicious actor on the network?
(Though getting the browser to just assume http to local domains is secure like it already does for http://localhost would solve that)
Old cruft dying there for decades
That's the reality and that's an issue unrelated to TLS
Running unmanaged compute at home (or elsewhere ..) is the issue here.
Practically, the solution is virtual machines with the compatible software you'll need to manage those older devices 10 years in the future, or run a secure proxy for them.
Internet routers are definitely one of the worst offenders because originating a root of trust between disparate devices is actually a hard problem, especially over a public channel like wifi. Generally, I'd say the correct answer to this is that wifi router manufacturers need to maintain secure infrastructure for enrolling their devices. If manufacturers can't bother to maintain this kind of infrastructure then they almost certainly won't be providing security updates in firmware either, so they're a poor choice for an Internet router.
I think that's a big win.
The root reason is that revocation is broken, and we need to do better to get the security properties we demand of the Web PKI.
It might in theory, but I suspect it's going to make things very, very unreliable for quite a while before it (hopefully) gets better. A double-digit fraction of our infrastructure outages are probably already due to expired certificates.
And because of that it may well tip a whole class of uses back to completely insecure connections because TLS is just "too hard". So I am not sure if it will achieve the "more secure" bit either.
And as mentioned in other comments, the revocation system doesn't really work, and reducing the validity time of certs reduces the risks there.
Unfortunately, there isn't really a good solution for many embedded and local network cases. I think ideally there would be an easy way to add a CA that is trusted for a specific domain, or local ip address, then the device can generate its own certs from a local ca. And/or add trust for a self-signed cert with a longer lifetime.
The answer is local ACME, with your router issuing certs for your ULA prefix or "home zone" domain.
Good example of enshittification
> easier for a few big players in industry
Not necessarily. As OP mentions, more certs would mean bigger CT logs. More frequent renewals mean more load. Like with everything else, this seems like a trade-off. Unfortunately for you & me, as customers of cert authorities, 47 days is now where the agreed cut-off is (not 42).
What will Let's Encrypt be like with 7-day certs? Will it renew them every day (6 days' reaction time), or every 3 days (4 days' reaction time)? Not every org is staffed 24/7; some people go on holidays, some public holidays extend to long weekends, etc. :) I would argue that it would be a good idea to give people a full week to react to renewal problems. That seems impossible for short-lived certs.
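For scale: Let's Encrypt's long-standing recommendation for 90-day certs is to renew at two-thirds of the lifetime, i.e. after 60 days, leaving roughly 30 days to fix a failed renewal. Applying the same fraction to shorter lifetimes shows how quickly the reaction window shrinks:

```python
def reaction_window(lifetime_days, renew_fraction=2 / 3):
    """Days left to fix things if the first renewal attempt fails,
    assuming renewal is attempted at renew_fraction of the lifetime."""
    return lifetime_days * (1 - renew_fraction)

print(round(reaction_window(90), 1))  # 30.0 days for today's 90-day certs
print(round(reaction_window(47), 1))  # ~15.7 days under the new cut-off
print(round(reaction_window(7), 1))   # ~2.3 days -- inside a long weekend
```

Whether clients will keep the two-thirds fraction or renew more aggressively (e.g. daily) for 7-day certs is exactly the open question here.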
Let's Encrypt was founded with a goal of rapidly (within a few years) helping get the web to as close to 100% encrypted as we could. And we've succeeded.
I don't think we could have achieved that goal any way other than being a CA.
It is also true that these contemporary prevention methods only help the largest companies which can afford to do things like distributing key material with end user software. It does not help you and me (unless you have outsourced your security to Google already, in which case there is the obvious second hand benefit). Registrars could absolutely help a much wider use of these preventions.
There is no technical reason we don't have this, but this is one area where the interest of largest companies with huge influence over standards and security companies with important agencies as customers all align, so the status quo is very slow to change. If you squint you can see traces of this discussion all the way from IPng to TLS extensions, but right now there is no momentum for change.
Unfortunately, when you're working at global scale, you generally need to be well-capitalized, so it's big companies that get all the experience with what does and doesn't work. And then it's opinionated message board nerds like us that provide the narratives.
Thinking deeper about it: Verisign (now Symantec) must have some insanely good security, because every black-hat nation-state actor would love to break into their cert issuance servers and export a bunch of legit signed certs to run man-in-the-middle attacks against major email providers. (I'm pretty sure this already happened in the Netherlands.)
I might be misremembering but I thought one insight from the Snowden documents was that a certain three-letter agency had already accomplished that?
Here is a nice writeup for that breach: https://www.securityweek.com/hacker-had-total-control-over-d...
Or is DNSSEC required for DV issuance? If it is, then we already rely on a trustworthy TLD.
I'm not saying there isn't some benefit in the implicit key mgmt oversight of CAs, but as an alternative to DV certs, just putting a pubkey in dnssec seems like a low effort win.
It's been a long time since I've done much of this though, so take my gut feeling with a grain of salt.
And what do certificate buyers gain? The ability for their site to be revoked or expired and thus no longer work.
I’d like to be corrected.
A certificate is evidence that the server you're connected to has a secret that was also possessed by the server that the certificate authority connected to. This means that whether or not you're subject to MITMs, at least you don't seem to be getting MITMed right now.
The importance of certificates is quite clear if you were around on the web in the last days before universal HTTPS became a thing. You would connect to the internet, and you would somehow notice that the ISP you're connected to had modified the website you're accessing.
Is that actually true? I mean, obviously CAs aren't validating DNS challenges over coffee-shop Wi-Fi, so they're probably less likely to be MITMed than your laptop, but I don't think the BRs require any special precautions to ensure that the CA's ISP isn't being MITMed, do they?
Nobody has really had to pay for certificates for quite a number of years.
What certificates get you, as both a website owner and user, is security against man-in-the-middle attacks, which would otherwise be quite trivial, and which would completely defeat the purpose of using encryption.
I find it hard to believe there is no way to secure without requiring an authority in the middle.
DANE is the way (https://en.wikipedia.org/wiki/DNS-based_Authentication_of_Na...)
But no browser has support for it, so... :/
Further, all you've done is replace one authority (the CA authority) with another one (the zone authority, and thus your domain registrar and the domain registry).
When I manage a DNS zone, I'm free to generate all certificates I want
Are there any reasonable alternatives to CAs in a modern world? I have never heard any good proposals.
Certificate pinning is probably the most widely known way to get a certificate out there without relying on live PKI. However, certificate pinning just shifts the burden of trust from runtime to install time, and puts an expiration date on every build of the program. It also doesn't work for any software that is meant to access more than a small handful of pre-determined sites.
Web-of-trust is a theoretical possibility, and is used for PGP-signed e-mail, but it's also a total mess that doesn't scale. Heck, the best way to check the PGP keys for a lot of signed mail is to go to an HTTPS website and thus rely on the CAs.
DNSSEC could be the basis for a CA-free world, but it hasn't achieved wide use. Also, if used in this way, it would just shift the burden of trust from CAs to DNS operators, and I'm not sure people really like those much better.
You can instead pin a self-signed or private CA-signed certificate, and then it can have the maximum lifetime you're comfortable with and that the software supports. A related option is to ship your app with a copy of your private CA certificate(s) and configure the HTTPS client to trust those in addition to, or instead of, the system-provided CAs.
I'm not sure how viable these approaches are on more locked-down platforms (like smartphones) and, even if they are viable today, whether they will remain viable in the future. It's also only good for full apps; anything that uses the system browser has to stick with the system CAs.
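The pinning check itself is small; a minimal stdlib-only sketch (the pinned digest would be whatever your build baked in): after the TLS handshake you can fetch the peer's DER cert with `ssl.SSLSocket.getpeercert(binary_form=True)` and compare its SHA-256 fingerprint against the pin, failing closed on mismatch.

```python
import hashlib

def cert_matches_pin(der_cert, pinned_sha256_hex):
    """Compare the SHA-256 fingerprint of a DER-encoded certificate
    against a pinned hex digest (case-insensitive)."""
    return hashlib.sha256(der_cert).hexdigest() == pinned_sha256_hex.lower()

# Demo with a stand-in blob instead of a real certificate:
blob = b"stand-in for DER certificate bytes"
pin = hashlib.sha256(blob).hexdigest()
print(cert_matches_pin(blob, pin.upper()))  # True
print(cert_matches_pin(blob, "00" * 32))    # False
```

The simplicity is the point: the complexity of pinning lives in key rotation and distribution, not in the comparison.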
I work in a very large organisation and I just don't see them being able to go to automated TLS certificates for their self-managed subdomains, inspection certificates, or anything else for that matter. It will be interesting to see how short-lived certs are adopted in the future.
[1]: https://en.wikipedia.org/wiki/Heartbleed#Certificate_renewal...
We did consider it.
As CAs prepare for post-quantum in the next few years, it will become even less practical as there is going to be pressure to cut down the number of signatures in a handshake.
That is unfortunate. I just deployed a web server the other day and was thrilled to deploy must-staple from Let's Encrypt, only to read that it was going away.
> As CAs prepare for post-quantum in the next few years, it will become even less practical as there is going to be pressure to cut down the number of signatures in a handshake.
Please delay the adoption of PQAs for certificate signatures at Let's Encrypt as long as possible. I understand the concern that a hypothetical quantum machine with tens of millions of qubits capable of running Shor's algorithm to break RSA and ECC keys might be constructed. However, "post-quantum" algorithms are inferior to classical cryptographic algorithms in just about every metric as long as such machines do not exist. That is why they were not even considered when the existing RSA and ECDSA algorithms were selected before Shor's algorithm was a concern. There is also a real risk that they contain undiscovered catastrophic flaws that will be found only after adoption, since we do not understand their hardness assumptions as well as we understand integer factorization and the discrete logarithm problem. This has already happened with SIKE and it is possible that similarly catastrophic flaws will eventually be found in others.
Perfect forward secrecy and short certificate expiry allow CAs to delay the adoption of PQAs for key signing until the creation of a quantum computer capable of running Shor's algorithm on ECC/RSA key sizes is much closer. As long as certificates expire before such a machine exists, PFS ensures no risk to users, assuming the key agreement algorithms are secure. Hybrid schemes are already being adopted to do that. There is no quantum Moore's law that makes it a foregone conclusion that a quantum machine that can use Shor's algorithm to break modern ECC and RSA will ever be created. If such a machine is never made (due to the sheer difficulty of constructing one), early adoption in key signature algorithms would make everyone suffer from the use of objectively inferior algorithms for no actual benefit.
If the size of post-quantum signatures was a motivation for the decision to drop support for OCSP must-staple, and my suggestion that adoption of post-quantum key signing be delayed as long as possible is in any way persuasive, perhaps that decision could be revisited?
Finally, thank you for everything you guys do at Let's Encrypt. It is much appreciated.
So for a bank, a private cert compromise is bad, for a regular low traffic website, probably not so much?
Sounds like your concept of the customer/provider relationship is inverted here.
The whole "customer is king" doesn't apply to something as critical as PKI infrastructure, because it would compromise the safety of the entire internet. Any CA not properly applying the rules will be removed from the trust stores, so there can be no exceptions for companies who believe they are too important to adhere to the contract they signed.
And if the safety of the entire internet is at risk, why is 47 days an acceptable duration for this extreme risk, but 90 days is not?
LOL. Old-fashioned enterprises are the worst at "oh no, can't do that, we need months of warning to change something!", while also handling critical data. A major event in the CA space last year was a health-care company getting a court order against a CA to stop it from revoking a cert that, according to the rules for CAs, the CA had to revoke. (In the end they got a few days' extension, everyone grumbled, and the CA was told to please write its customer contracts more clearly. But the idea is out there, and nobody likes CAs doing things they are not supposed to, even through external force.)
One way to nip that in the bud is making sure that even if you get your court order preventing the CA from doing the right thing, your certificate will expire soon anyway, so "we are too important to have working IT processes" doesn't work anymore.
News report: https://www.heise.de/en/news/DigiCert-Customer-seeks-to-exch...
nitty-gritty bugzilla: https://bugzilla.mozilla.org/show_bug.cgi?id=1910322#c8
some follow-on drama: https://news.ycombinator.com/item?id=43167087
> Shorter lifetimes mitigate the effects of using potentially revoked certificates. In 2023, CA/B Forum took this philosophy to another level by approving short-lived certificates, which expire within 7 days, and which do not require CRL or OCSP support.
Shorter-lived certificates make OCSP and other revocation mechanisms less of a load-bearing component within the Web PKI. This is a good thing, since neither CAs nor browsers have managed to make timely revocation methods scale well.
(I don't think there's any monetary or power advantage to doing this. The reason to do it is that shorter lifetimes make it harder for server operators to normalize deviant certificate operation practices. The reasoning there is the same as with backups or any other periodic operational task: critical processes must be continually tested and evaluated for correctness.)
That is the next step in nation state tapping of the internet.
A MITM cert would need to be manually trusted, which is a completely different thing.
This is already the case; CT doesn't rely on your specific served cert being comparable with others, but all certs for a domain being monitorable and auditable.
(This does, however, point to a current problem: more companies should be monitoring CT than are currently.)
the power dynamic here is that the CAs have a "too big to fail" inertia, where they can do bad things without consequence because revoking their trust causes too much inconvenience for too many people. shortening expiry timeframes to the point where all their certificates are always going to expire soon anyways reduces the harm that any one CA can do by offering bad certs.
it might be inconvenient for you to switch your systems to accommodate shorter expiries, but it's better to confront that inconvenience up front than in response to a security incident.
Well you see, they also want to be able to break your automation.
For example, maybe your automation generates a 1024 bit RSA certificate, and they've decided that 2048 bit certificates are the new minimum. That means your automation stops working until you fix it.
Doing this with 2-day expiry would be unpopular as the weekend is 2 days long and a lot of people in tech only work 5 days a week.
This is a ridiculous straw man.
> 48 hours. I am willing to bet money this threshold will never be crossed.
That's because it won't be crossed and nobody serious thinks it should.
Short certs are better, but there are trade-offs. For example, if cert infra goes down over the weekend, it would really suck. TBH, from a security perspective, something in the range of a couple of minutes would be ideal, but that runs up against practical constraints:
- cert transparency logs and other logging would need to be substantially scaled up
- for the sake of everyone on-call, you really don't want anything shorter than a reasonable amount of time for a human to respond
- this would cause issues with some HTTP/3 performance-enhancing features
- thousands of servers hitting a CA creates load that outweighs the benefit of ultra short certs (which have diminishing returns once you're under a few days, anyways)
> This feels like much more of an ideological mission than a practical one
There are numerous practical reasons, as mentioned here by many other people.
Resisting this without good cause, like you have, is more ideological at this point.
It's been a huge pain, as we have encountered a ton of bugs and missing features in libraries and applications around reloading certs like this. And we have some really ugly workarounds in place, because some applications put "reload a consul client" on the same level as "reload all config, including opening new sockets, adjusting socket parameters, doing TCP connection handover" - all to rebuild a stateless client that throws a few parameters at a standard HTTP client. But oh well.
But I refuse to back down. Reload your certs and your secrets. If we encounter a situation in which we have to mass-revoke and mass-reissue internal certs, it'll be easy for those who do. I don't have time for everyone else.
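The reload pattern itself doesn't have to be invasive. A minimal sketch (the names are mine, not from any particular library): cache the loaded artifact and re-run the loader only when the backing file's mtime changes, so a renewal job that replaces the cert on disk gets picked up without restarting anything. For TLS, the loader would typically build an `ssl.SSLContext` and call `load_cert_chain` on it.

```python
import os

class ReloadOnChange:
    """Re-run an expensive loader (e.g. building an SSLContext from a
    renewed cert) only when the backing file's mtime changes."""

    def __init__(self, path, loader):
        self.path = path
        self.loader = loader
        self._mtime = None
        self._value = None

    def get(self):
        mtime = os.stat(self.path).st_mtime
        if mtime != self._mtime:  # file was replaced by the renewal job
            self._value = self.loader(self.path)
            self._mtime = mtime
        return self._value
```

Checking an mtime per use is cheap; the expensive part (parsing keys, building contexts) only happens when a renewal actually lands.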
Do you promise to come back and tell us the story about when someone went on vacation and the certs issued on a Thursday didn't renew over the weekend and come Monday everything broke and no one could authenticate or get into the building?
I applaud you for sticking to your guns though.
And the conventional wisdom for application management and deployments is - if it's painful, do it more. Like this, applications in the container infrastructure are forced to get certificate deployment and reloading right on day 1.
And yes, some older application that were migrated to the infrastructure went ahead and loaded their credentials and certificates for other dependencies into their database or something like that and then ended up confused when this didn't work at all. Now it's fixed.
Except there are no APIs to rotate those. The infrastructure doesn't exist yet.
And refreshing those automatically does not validate ownership, unlike certificates where you can do a DNS check or an HTTP check.
Microsoft has some technology where, alongside these tokens, there is a per-machine certificate that is used to sign requests, and those certificates can't leave the machine.
Because we run on Azure / AKS, switching to federated credentials ("workload identities") with the app registrations made most of the pain go away because MS manages all the rotations (3 months) etc. If you're on managed AKS the OIDC issuer side is also automagic. And it's free. I think GCP offers something similar.
https://learn.microsoft.com/en-us/entra/workload-id/workload...
Individuals are in the same boat: if you're running your own custom services at your house, you've self-identified as being in the amazingly small fraction of the population with both the technical literacy and desire to do so. Either set up Let's Encrypt or run your own ACME service; the CAB is making clear here and in prior changes that they're not letting the 1% hold back the security bar for everybody else.
In the hackiest of setups, it's a few commands to generate a CA and issue a wildcard certificate for everything. Then a single line in the bootstrap script or documentation for new devices to trust the CA and you're done.
Going a few steps further, setting up something like Hashicorp Vault is not hard and regardless of org size; you need to do secret distribution somehow.
My dad still calls my terminals a "DOS window" and doesn't understand why I don't use GUIs like a normal person. He has his own business. He absolutely cannot just roll out a CA for secure comms with his local printer or whatever. He literally calls me to help with buying a PDF reader.
Myself, I'm employed at a small business and we're all as tech-savvy as it gets. It took me several days to set it up on secure hardware (smartcard, figuring out compatibility and broken documentation), making sure I understood what all the options do and that it's secure for years to come and whatnot, working out what the procedure for issuing should be, etc. Eventually I got it done, handed it over to the higher-up who gets to issue certs, distributed the CA cert to everyone... and it's never used. We have a wiki page with TLS and SSH fingerprints instead.
This is fair. I assumed all small businesses would be tech startups, haha.
Paying experts (Ed: to set up internal infrastructure) is a perfectly viable option, so the only real question is the amount of effort involved, not whether random people know how to do something.
Congrats for securing your job by selling the free internet and your soul.
If someone doesn’t want to learn then nobody needs to help them for free.
Perhaps spend some time outside your bubble? I’ve read many of your comments and you just do seem to be caught in your own little world. “Out of touch” is apt and you should probably reflect on that at length.
If we’re talking about businesses hosting services on some intranet and concerned about TLS, then yes, I assume it’s either a tech company or they have at least one competent engineer to host these things. Why else would the question be relevant?
> “Out of touch” is apt and you should probably reflect on that at length.
That’s a very weird personal comment based on a few comments on a website that’s inside a tech savvy bubble. Most people here work in IT, so I talk as if most people here work in IT. If you’re a mechanic at a garage or a lawyer at a law firm, I wouldn’t tell you rolling your own CA is easy and just a few commands.
Not to mention the massive undertaking that even just maintaining a multi-platform chromium fork is.
The nightmare only intensifies for small businesses that allow their users to bring their own devices (yes, yes, sacrilege but that is how small businesses operate).
Not everything is a massive enterprise with an army of IT support personnel.
Traefik (by default) will attempt certificate renewal 30 days before expiry. Perhaps the defaults will change if the lifetime becomes 45 days. I don't think it's possible to override this value without adjusting the certificate expiry days, but I've never felt the need to adjust it.
For internal web services I could use just Let's Encrypt but I need to deploy the client certs anyways for network access and I might as well just use my internal cert for everything.
Or cheat and use tailscale to do the whole thing.
Having just implemented an internal CA, I can assure you, most corporations can’t just run an internal CA. Some struggle to update containers and tie their shoe laces.
I’ve done it in the past, and it was so painful, I just bit the bullet and started accessing everything under public hostnames so that I can get auto-issued Letsencrypt certificates.
https://www.digicert.com/blog/https-only-features-in-browser...
A static page that hosts documentation on an internal network does not need encryption.
The added overhead of certificate maintenance (and investigating when it does and will break) is simply not worth the added cost.
Of course the workaround most shops do nowadays is just hide the HTTP servers behind a load balancer doing SSL termination with a wildcard cert. An added layer of complexity (and now single point of failure) just to appease the WebPKI crybabies.
Of course, then there are the employees who could just intercept HTTP requests and modify them to include a payload to root an employee's machine. There is so much software out there that can destroy trust in a network, and it's literally download-and-install, then point-and-click, with no knowledge required. Seems like there is a market for simple and cheap solutions for internal networks for small business. I could see myself making quite a bit off it, which I did in the mid-2000s, but I can't stand doing sales any more in my life, and dealing with support is a whole issue on its own, even with an automated solution.
Just about every web server these days supports ACME -- some natively, some via scripts, and you can set up your own internal CA using something like step-ca that speaks ACME if you don't want your certs going out to the transparency log.
The last few companies I've worked at had no http behind the scenes -- everything, including service-to-service communications was handled via https. It's a hard requirement for just about everything financial, healthcare, and sensitive these days.
[proceeds to describe a bunch of new infrastructure and automation you need to setup and monitor]
So when ACME breaks - which it will, because it's not foolproof - the server securely hosting the cafeteria menus is now inaccessible, instead of being susceptible to interception or modification in transit. Because the guy that has owned your core switches is most concerned that everyone will be eating taco salad every day.
Someone that has seized control of your core network such that they were capable of modifying traffic, is not going to waste precious time or access modifying the flags of ls on your man page server. They will focus on more valuable things.
Just because something is possible in theory doesn't make it likely or worth the time invested.
You can put 8 locks on the door to your house but most people suffice with just one.
Someone could remove a piece of mail from your unlocked rural mailbox, modify it and put it back. Do you trust the mail carrier as much as the security of your internal network?
But it's not really a concern worth investing resources into for most.
Ah, the "both me and my attackers agree on what's important" fallacy.
What if they modify the man page response to include drive-by malware?
There’s nothing stopping Apple and Google from issuing themselves certificates every 10 minutes. I get no value for doing this. Building out or expanding my own PKI for my company or setting up the infrastructure to integrate with Digicert or whomever gets me zero security and business value, just cost and toil.
Revocation is most often an issue when CAs fuck up. So now we collectively need to pay to cover their rears.
The big question is what happens when (not "if") that happens. Companies have repeatedly shown that they are unable to rotate certs in time, to the point of even suing CAs to avoid revocation. They've been asked nicely to get their shit together, and it hasn't happened. Shortening cert lifetime to force automation is the inevitable next step.
You’re portraying people suing CAs to get injunctions to avoid outages as clueless or irresponsible. The fact is Digicert’s actions, dictated by this CA/Browser forum were draconian and over the top responses to a minor risk. This industry trade group is out of control.
End of the day, we’re just pushing risk around. Running a quality internal PKI is difficult.
Non-browser things usually don't care even if the cert is expired or untrusted.
So I expect people still to use WebPKI for internal sites.
Why would browsers "most likely" enforce this change for internal CAs as well?
That said, it would be really nice if they supported DANE so that websites do not need CAs.
So yes, you had to first ignore the invalid self-signed certificate while using HTTPS with a client tool that really, really didn't want to ignore the validity issue, then upload a valid certificate, restart the service... which would terminate the HTTPS connection with an error breaking your script in a different not-fun way, and then reconnect... at some unspecified time later to continue the configuration.
Fun times...
Isn't this a good default? No network access, no need for a public certificate, no need for a certificate that might be mistakenly trusted by a public (non-malicious) device, no need for a public log for the issued certificate.
'ipa-client-install' for those so motivated. Certificates are literally one among many things part of your domain services.
If you're at the scale past what IPA/your domain can manage, well, c'est la vie.
I think folks are being facetious wanting more for 'free'. The solutions have been available for literal decades, I was deliberate in my choice.
Not the average, certainly the majority where I've worked. There are at least two well-known Clouds that enroll their hypervisors to a domain. I'll let you guess which.
My point is, the difficulty is chosen... and 'No choice is a choice'. I don't care which, that's not my concern. The domain is one of those external things you can choose. Not just some VC toy. I won't stop you.
The devices are already managed; you've deployed them to your fleet.
No need to be so generous to their feigned incompetence. Want an internal CA? Managing that's the price. Good news: they buy!
Don't complain to me about 'your' choices. Self-selected problem if I've heard one.
Aside from all of this, if your org is being hung up on enrollment... I'm not sure you're ready for key management. Or the other work being a CA actually requires.
Yes, it's more work. Such is life and adding requirements. Trends - again, for decades - show organizations are generally able to manage with something.
Literal Clouds do this, why can't 'you'?
You're managing your machine deployments with something, so of course you just use that to include your cert. That isn't particularly hard, but there's a long tail of annoying work when dealing with containers and VMs you aren't building yourself, like k8s node pools. It can be done, but it's usually less effort to just get public certs for everything.
To your point, people don't, but it's a perfectly viable path.
Containers/kubernetes, that's pipeline city, baby!
I am not willing to give a program credentials to alter my DNS. A security issue there would be too much risk.
Introduce them to the idea of having something like Caddy sit in front of apps solely for the purpose of doing TLS termination... Caddy et al can update the certs automatically.
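A Caddyfile for that is about as small as it gets (the hostname and upstream port are made-up examples); Caddy fetches and renews the certificate on its own:

```Caddyfile
jellyfin.example.com {
	# TLS termination and automatic cert issuance/renewal happen here;
	# the app behind it only ever speaks plain HTTP on localhost
	reverse_proxy 127.0.0.1:8096
}
```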
I haven’t used BIG IP in a long while, so take this with a grain of salt, but it seems to me that it might not be impossible to get something going – despite the fact that BIG IP itself doesn’t have native support for ACME.
Two pointers that might be of interest:
https://community.f5.com/discussions/technicalforum/upload-l...
https://clouddocs.f5.com/api/icontrol-rest/APIRef_tm_sys_cry...
Those tend to be quite brittle in reality. What’s the old adage about engineering vs architecture again?
Something like this I think: https://www.reddit.com/r/PeterExplainsTheJoke/comments/16141...
For some companies, it might be worth it to throw away a $100000 device and buy something better. For others it might not be worth it.
Giving the TLS endpoint itself the authority to manage certificates kind of weakens the usefulness of rotating certificates in the first place. You probably don't let your external facing authoritative DNS servers near zone key material, so there's no reason to let the external load balancers rotate certificates.
Where I have used F5 there was never any problem letting the backend configuration system do the rotation and upload of certificates together with every other piece of configuration that is needed for day to day operations.
Don't expect firmware / software updates to enable ACME-type functionality for tons of gear. At best it'll be treated as an excuse by vendors to make Customers forklift and replace otherwise working gear.
Corporate hardware lifecycles are longer than the proposed timeline for these changes. This feels like an ill thought-out initiative by bureaucrats working in companies who build their own infrastructure (in their white towers). Meanwhile, we plebs who work in less-than-Fortune 500 companies stuck with off-the-shelf solutions will be forced to suffer.
Or request a certificate over the public internet, for an internal service. Your hostname must be exposed to the web and will be publicly visible in transparency reports.
Key loss on one of those is like a takeover of an entire chunk of hostnames. Really opens you up.
That doesn't seem like the end of the world. It means you shouldn't have `secret-plans-for-world-takeover.example.com`, but it's already the case that secret projects should use opaque codenames. Most internal domain names would not actually leak any information of value.
The closest thing is maybe described (but not shown) in these posts: https://blog.daknob.net/workload-mtls-with-acme/ https://blog.daknob.net/acme-end-user-client-certificates/
(disclaimer: I'm a founder at anchor.dev)
> The CSR relayed through Anchor does not contain secret information. Anchor never sees the private key material for your certificates.
Give us a big global *.local cert we can all cheat with, so I don't have to blast my credentials in the clear when I log into my router's admin page.
For the other case, perhaps renew the cert at a host that is allowed to make outside queries for the DNS challenge, and find some acceptable automated way to propagate the updated cert to the host that isn't allowed outside queries.
There are lots of systems that allow you to set rules for what is required to merge a PR, so if you want "the tests pass, it's a TXT record, the author is whitelisted to change that record" or something, it's very achievable
If it's just because your DNS is at a provider, you should be aware that it's possible to self-host DNS.
Someone will fuck up accidentally, so production zones are usually gated somehow, sometimes with humans instead of pure automata.
Giving write access does not mean giving unrestricted write access
Also, another way (which I built at a previous company) is to create a simple certificate provider (an API or whatever), integrated with whatever internal authentication scheme you are using, that can sign CSRs for you. An LE proxy, as you might call it.
If you are not in a good position in the internal organization to control DNS, you probably shouldn't handle certificate issuance either. It makes sense to have a specific part of the organization responsible.
You can do nothing except twiddle your thumbs while it times out and that may take a couple of days.
And in terms of security, I think it is a double-edged sword:
- everyone will be so used to certificates changing all the time, with no certificate pinning anymore, that the day China, a company, or whoever serves you a fake certificate, you will be less able to notice it
- Instead of closed, read-only systems that connect outside only once a year or so to update their certificates, every machine around the world will now have to allow quasi-permanent connections to certificate servers to keep renewing all the time. If Digicert's or Let's Encrypt's servers, or the "cert updating client", is rooted or has a security issue, most servers around the world could be compromised in a very short time.
As a side note, I'm totally laughing at the following explanation in the article:
47 days might seem like an arbitrary number, but it’s a simple cascade:
- 47 days = 1 maximal month (31 days) + 1/2 30-day month (15 days) + 1 day wiggle room
So, 47 is not arbitrary, but 1 month + 1/2 month + 1 day are not arbitrary values...

I'm a computing professional in the tiny slice of internet users that actually understands what a cert is, and I never look at a cert by hand unless it's one of my own that I'm troubleshooting. I'm sure there are some out there who do (you?), but they're a minority within a minority—the rest of us just rely on the automated systems to do a better job at security than we ever could.
At a certain point it is correct for systems engineers to design around keeping the average-case user more secure even if it means removing a tiny slice of security from the already-very-secure power users.
like, private CA? All of these restrictions are only applied for certificates issued under the webtrust program. Your private CA can still issue 100 year certificates.
Support for cert and CA pinning is in a much better state than I thought it would be, at least for mobile apps. I'm impressed by Apple's ATS.
Yet, for instance, you can't pin a CA for an arbitrary domain; you always have to provide it up front for review, otherwise your app may not get accepted.
Doesn't this mean that it's not (realistically) possible to create cert pinning for small solutions? Like homelabs or app vendors that are used by onprem clients?
We'll keep abusing PKI for those use cases.
There is a client that has a self hosted web service. Or a SaaS but under his own domain.
There is a vendor that provides nice apps to interact with that service. Vendor distributes them on his own to stores, upgrades etc.
The client has no interest in doing that, nor any competence to.
Currently there is no solution here: the vendor needs to distribute an app that has the client's CAs or certs built in (into his app release) to be able to pin it.
I've seen that scenario many times in mid/small-sized banks, insurance and surrounding services. Some of these institutions rely purely on external vendors and just integrate them. Same goes for tech savvy selfhosters - they often rely on third party mobile apps but host backends themselves.
Not related to certificates specifically, and the specific number of days is in no way a security risk, but it reminded me of NUMS generators. If you find this annoyingly arbitrary, you may also enjoy: <https://github.com/veorq/numsgen>. It implements this concept:
> [let's say] one every billion values allows for a backdoor. Then, I may define my constant to be H(x) for some deterministic PRNG H and a seed value x. Then I proceed to enumerate "plausible" seed values x until I find one which implies a backdoorable constant. I can begin by trying out all Bible verses, excerpts of Shakespeare works, historical dates, names of people and places... because for all of them I can build a story which will make the seed value look innocuous
From http://crypto.stackexchange.com/questions/16364/why-do-nothi...
Browser certificate pinning is deprecated since 2018. No current browsers support HPKP.
There are alternatives to pinning, DNS CAA records, monitoring CT logs.
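For example, a pair of CAA records limiting issuance for a zone looks like this (zone-file sketch; example.com stands in for a real domain):

```
example.com.  IN  CAA 0 issue "letsencrypt.org"
example.com.  IN  CAA 0 iodef "mailto:security@example.com"
```

CAA doesn't replace pinning; it constrains which CAs may issue for the name, and CAs are required to check it at issuance time.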
Only if browsers enforce the TLS requirements for private CAs. Usually, browsers exempt user or domain controlled CAs from all kinds of requirements, like certificate transparency log requirements. I doubt things will be different this time.
If they do decide to apply those limits, you can run an ACME server for your private CA and point certbot or whatever ACME client you prefer at it to renew your internal certificates. Caddy can do this for you with a couple of lines of config: https://caddyserver.com/docs/caddyfile/directives/acme_serve...
Funnily enough, Caddy defaults to issuing 12 hour certificates for its local CA deployment.
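Per the linked docs, the config really is small; a sketch (the CA id, names, and hostname here are made up, and the directory URL follows Caddy's /acme/{ca-id}/directory pattern):

```Caddyfile
{
	pki {
		ca home {
			name "Home Lab CA"
		}
	}
}

acme.home.example {
	tls internal
	# serves an ACME directory at /acme/home/directory
	acme_server {
		ca home
	}
}
```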
> no certificate pinning anymore
Why bother with public certificate authorities if you're hardcoding the certificate data in the client?
> Instead of having closed systems, readonly, having to connect outside and update only once per year or more to update the certificates, you will have now all machines around the world that will have to allow quasi permanent connections to random certificate servers for the updating the system all the time.
Those hosts needed a bastion host or proxy of sorts to connect to the outside yearly, so they can still do that today. But I don't see the advantage of using the public CA infrastructure in a closed system, might as well use the Microsoft domain controller settings you probably already use in your network to generate a corporate CA and issue your 10 year certificates if you're in control of the network.
For those times when I only care about encryption, I'm forced to take on the extra burden that caring about identity brings.
Pet peeve.
There is the web as it always has been on http/1.1: a hyperlinked set of html documents hosted on a mishmash of random commercial and personal servers. Then there is the modern http/2, http/3, CA-TLS-only web, hosted as a service on some other website or cloud, mostly to do serious business and make money. The modern web's CA-TLS-only ID scheme is required due to the complexity and risk of automatic javascript execution in browsers.
I wish we could have browsers that could support both use cases. But we can't because there's too much money and private information bouncing around now. Can't be whimsical, can't 'vibe code' the web ID system (ie, self signed not feasible in HTTP/3). It's all gotta be super serious. For everyone. And that means bringing in a lot of (well hidden by acme2 clients) complexity and overhead and centralization (everyone uses benevolent US based Lets Encrypt). This progressive lowering of the cert lifetimes is making the HTTP-only web even more fragile and hard to create lasting sites on. And that's sad.
TOFU works for the old web just great. It's completely incompatible with the modern web because major browsers will only ever compile their HTTP/* libs with flags that prevent TOFU and self-signed. You could host a http/1.1 self-signed and TOFU but everyone (except geeks) would be scared away or incapable of loading it.
So, TOFU works if you just want to do something like the "gemini" protocol, but instead of a new protocol you just stick to original http and have a demographic of retro-enthusiasts and poor people. It's just about as accessible as gemini for most people (ie, not very), except for two differences: 1. Bots still love http/1.1 and don't care if it's plain text. 2. There's still a giant web of http/1.1 websites out there.
Which for some threat models is sufficiently good.
It's worth pointing out that MITM is also the dominant practical threat on the Internet: you're far more likely to face a MITM attacker, even from a state-sponsored adversary, than you are a fiber tap. Obviously, TLS deals with both adversaries. But altering the security affordances of TLS to get a configuration of the protocol that only deals with the fiber tap is pretty silly.
It’s how I know what my kids are up to.
It’s possible because I installed a trusted cert in their browsers, and added it to the listening program in their router.
Identity really is security.
More importantly - this debate gets raised in every single HN post related to TLS or CAs. Answering with a "my threat model is better than yours" or somehow that my threat model is incorrect is even more silly than offering a configuration of TLS without authenticity. Maybe if we had invested more effort in 802.1X and IPSec then we would get those same guarantees that TLS offers, but for all traffic and for free everywhere with no need for CA shenanigans or shortening lifetimes. Maybe in that alternative world we would be arguing that nonrepudiation is a valuable property or not.
So no, IPSec couldn't have fixed the MITM issue without requiring a CA or some equivalent.
*I mistakenly wrote "certificate" here initially. Sorry.
Undoubtedly it is not best practice to lean on TOFU for good reason, but there are simply some lower stakes situations where engaging the CA system is a bit overkill. These are systems with few nodes (maybe just one) that have few users (maybe just one.) I have some services that I deploy that really only warrant a single node as HA is not a concern and they can easily run off a single box (modern cheap VPSes really don't sweat handling ~10-100 RPS of traffic.) For those, I pre-generate SSH server keys before deployment. I can easily verify the fingerprint in the excessively rare occasion it isn't already trusted. I am not a security expert, but I think this is sufficient at small scales.
To be clear, there are a lot of obvious security problems with this:
- It relies on me actually checking the fingerprint.
- SSH keys are valid and trusted indefinitely, so it has to be rotated manually.
- The bootstrap process inevitably involves the key being transmitted over the wire, which isn't as good as never having the key go over the wire, like you could do with CSRs.
This is clearly not good enough for a service that needs high assurance against attackers, but I honestly think it's largely fine for a small to medium web server that serves some small community. Spinning up a CA setup for that feels like overkill.
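That fingerprint check is cheap to make routine with stock OpenSSH tooling (the hostname is an example):

```shell
# At deploy time, record the host key's fingerprint out of band
ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub

# From the client side, see what you'd be trusting on first connect, and compare
ssh-keyscan -t ed25519 myserver.example 2>/dev/null | ssh-keygen -lf -
```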
As for what I personally would do instead for a fleet of servers, personally I think I wouldn't use SSH at all. In professional environments it's been a long time since I've administered something that wasn't "cloud" and in most of those cloud environments SSH was simply not enabled or used, or if it was we were using an external authorization system that handled ephemeral keys itself.
That said, here I'm just suggesting that I think there is a gap between insecure HTTP and secure HTTPS that is currently filled by self-signed certificates. I'm not suggesting we should replace HTTPS usage today with TOFU, but I am suggesting I see the value in a middle road between HTTP and HTTPS where you get encryption without a strong proof of what you're connecting to. In practice this is sometimes the best you can really get anyway: consider the somewhat common use case of a home router configuration page. I personally see the value in still encrypting this connection even if there is no way to actually ensure it is secure. Same for some other small scale local networking and intranet use cases.
In practice, this means that it's way easier to just use unencrypted HTTP, which is strictly worse in every way. I think that is suboptimal.
A self-signed certificate has the benefit of being treated as a secure origin, but that's it. Sometimes you don't even care about that and just want the encryption. That's pretty much where this argument all comes from.
https://self-signed.badssl.com/
and when I clicked "Accept the risk and continue", the certificate was added to Certificate Manager. I closed the browser, re-opened it, and it did not prompt again.
I did the same thing in Chromium and it also worked, though I'm not sure if Chromium's are permanent or if they have a lifespan of any kind.
I am absolutely 100% certain that it did not always work that way. I remember a time when Firefox had an option to permanently add an exception, but it was not the default.
Either way, apologies for the misunderstanding. I genuinely did not realize that it worked this way, and it runs contrary to my previous experience dealing with self-signed certificates.
To be honest, this mostly resolves the issues I've had with self-signed certificates for use cases where getting a valid certificate might be a pain. (I have instead been using ACME with DNS challenge for some cases, but I don't like broadcasting all of my internal domains to the CT log nor do I really want to manage a CA. In some cases it might be nice to not have a valid internet domain at all. So, this might just be a better alternative in some cases...)
TOFU on SSH server keys... it's still bad, but fewer people are interested in intercepting SSH than TLS.
Also, I agree that TOFU in its own is certainly worse than having robust verification via the CA system. OTOH, SSH-style TOFU has some advantages over the CA system, too, at least without additional measures like HSTS and certificate pinning. If you are administering machines that you yourself set up, there is little reason to bother with anything more than TOFU because you'll cache the key shortly after the machine is set up and then get warned if a MITM is attempted. That, IMO, is the exact sort of argument in favor of having an "insecure but encrypted" sort of option for the web; small scale cases where you can just verify the key manually if you need to.
Mostly because ssh isn't something most people (eg. your aunt) uses, and unlike with https certificates, you're not connecting to a bunch of random servers on a regular basis.
Both defend against attackers the other cannot. In particular, the number of machines, companies and government agencies you have to trust in order to use a CA is much higher.
For example, TOFU where “first use” is a loopback ethernet cable between the two machines is stronger than a trust anchor.
Alternatively, you could manually verify + pin certs after first use.
The thing to think about in comparing SSH to TLS is how frequent counterparty introductions are. New counterparties in SSH are relatively rare. Key continuity still needlessly exposes you to a grave attack in SSH, but really all cryptographic protocol attacks are rare compared to the simpler, more effective stuff like phishing, so it doesn't matter. New counterparties in TLS happen all the time; continuity doesn't make any sense there.
For example, if the MITM requires physical access to the machine, you'd also have to cover physical security first. As long as that is not the case, who cares about some connection hijack? And if the data you are communicating just isn't worth the encryption, but has to be encrypted because of regulation, you are doing the dance without it being worth it.
Not "never", because of HSTS preload, and browsers slowly adding scary warnings to plaintext connections.
https://preview.redd.it/1l4h9e72vp981.jpg?width=640&crop=sma...
However, ECH relies on a trusted 3rd party to provide the key of the server you are intending to talk to. So, it won't work if you have no way of authenticating the server beforehand the way GP was thinking about.
ECH gets the key from the DNS, and there's no real authentication for this data (DNSSEC is rare and is not checked by the browser). See S 10.2 [0] for why this is reasonable.
[0] https://tlswg.org/draft-ietf-tls-esni/draft-ietf-tls-esni.ht...
Safari did some half measures starting in Safari 15 (don't know the year) and now fully defaults to https first.
Firefox 136 (2025) now does https first as well.
With an intact trust chain, there is NO scenario where a 3rd party can see or modify what the client requests and receives beyond seeing the hostname being requested (and not even that if using ECH/ESNI)
Your "if you don't have an out-of-band reason to trust the server cert" is a fitting description of the global PKI infrastructure, can you explain why you see that as a problem? Apart from the fact that our OSes and browser ship out of the box with a scary long list of trusted CAs, some from fairly dodgy places?
let's not forget that BEFORE that TCP handshake there's probably a DNS lookup where the FQDN of the request is leaked, if you don't have DoH.
of course the L3/L4 can be (non) trivially intercepted by anyone, but that is exactly what TLS protects you against.
if simple L4 interception were all that is required, enterprises wouldn't have to install a trust root on end devices, in order to MITM all TLS connections.
the comment you were replying to is
> How is an attacker going to MITM an encrypted connection they don't have the keys for
of course they can intercept the connection, but they can't MITM it in the sense that MITM means -- read the communications. the kind of "MITM" / interception that you are talking about is simply what routers do anyway!
On the other hand providing the option may give a false sense of security. I think the main reason SSH isn't MitM'd all over the place is it's a pretty niche service and very often you do have a separate authentication method by sending your public key over HTTPS.
But like, no: the free Wi-Fi I'm using can't, in fact, MITM the encryption used by my connection... it CAN do a bunch of other shitty things to me that undermine not only my privacy but even undermine many of the things people expect to be covered by privacy (using traffic analysis on the size, timing, or destination of the packets that I'm sending), but the encryption itself isn't subject to the failure mode of SSH.
Establishing the initial exchange of crypto key material can be.
That's where certificates are important because they add identity and prevent spoofing.
With TOFU, if the first use is on an insecure network, this exchange is jeopardized. And in this case, the encryption is not with the intended partner and thus does not need to be attacked.
Hm? The reason I do use those services over a network I don't trust is because they're wrapped in authenticated, encrypted channels. The authenticated encryption happens at a layer above the network because I don't trust the network.
He wasn't proposing that encryption without authentication gets the full padlock and green text treatment.
You might be visiting myfavouriteshoes.com (a boutique shoe site you have been visiting for years), but you won't necessarily know if the regular owner is away or even if the business has been sold.
OK I will fess up. The truth is that I don't spend a lot of time in coffee shops but I do have a ton of crap on my LAN that demands high amounts of fiddle faddle so that the other regular people in my house can access stuff without dire certificate warnings, the severity of which seems to escalate every year.
Like, yes, I eat vegetables and brush my teeth and I understand why browsers do the things they do. It's just that neither I nor my users care in this particular case, our threat model does not really include the mossad doing mossad things to our movie server.
Alternatively, I would suggest letsencrypt with DNS verification. Little bit of setup work, but low maintenance work and zero effort on clients.
1. Wire up LetsEncrypt certs for things running on your LAN, and all the "dire certificate warnings" go away.
2. Run a local ACME service, wire up ACME clients to point to that, make your private CA valid for 100 years, trust your private CA on the devices of the Regular People in your house.
I did this dance a while back, and things like acme.sh have plugins for everything from my Unifi gear to my network printer. If you're running a bunch of servers on your LAN, the added effort of having certs is tiny by comparison.
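For the DNS-01 route specifically, acme.sh keeps it to a few lines (the Cloudflare plugin and all names here are examples; any supported DNS provider works the same way):

```shell
# API token for the DNS provider (here: Cloudflare's dns_cf plugin)
export CF_Token="(token goes here)"

# Issue via DNS-01: no inbound connectivity to the LAN host required
acme.sh --issue --dns dns_cf -d nas.home.example.com

# Install where the service expects it, and reload on each renewal
acme.sh --install-cert -d nas.home.example.com \
  --key-file /etc/ssl/private/nas.key \
  --fullchain-file /etc/ssl/certs/nas.pem \
  --reloadcmd "systemctl reload nginx"
```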
Yes I am being snarky - network level MITM resistance is wonderful infrastructure and CT is great too.
If we encrypt everything we don't need AuthN/Z.
Encrypt locally to the target PK. Post a link to the data.
The goal isn't to make everything impossible to break. The goal is to provide Just Enough security to make things more difficult. Legally speaking, sniffing and decrypting encrypted data is a crime, but sniffing and stealing unencrypted data is not.
That's an important practical distinction that's overlooked by security bozos.
Separation between CAs and domains allows browsers to get rid of incompetent and malicious CAs with minimal user impact.
Without DNSSEC's guarantees, the DANE TLSA records would be as insecure as self-signed certificates in WebPKI are.
It's not enough to have some certificate from some CA involved. It has to be a part of an unbroken chain of trust anchored to something that the client can verify. So you're dependent on the DNSSEC infrastructure and its authorities for security, and you can't ignore or replace that part in the DANE model.
However, we could use some form of Certificate Transparency that would somehow work with DANE.
Also it still protects you from everyone who isn't your DNS provider, so it's valuable if you only need a medium level of security.
They can, but they'll also get caught thanks to CT. No such audit infrastructure exists for DANE/DNSSEC.
> It doesn't add any security to have PKI separate from DNS.
One can also get a certificate for an IP addresses.
1. mobile apps.
2. enterprise APIs. I dealt with lots of companies that would pin the certs without informing us, and then complain when we'd rotate the cert. A 47-day window would force them to rotate their pins automatically, making it even worse of a security theater. Or hopefully, they switch rightly to CAA.
Health Systems love pinning certs, and we use an ALB with 90 day certs, they were always furious.
Every time I was like "we can't change it", and "you do trust the CA right?", absolute security theatre.
It’s become a big part of my work and I’ve always just had a surface knowledge to get me by. Assume I work in a very large finance or defense firm.
You should really generate a new key for each certificate, in case the old key is compromised and you don't know about it.
What would really be nice, but is unlikely to happen would be if you could get a constrained CA certificate issued for your domain and pin that, then issue your own short term certificates from there. But if those are wide spread, they'd need to be short dated too, so you'd need to either pin the real CA or the public key and we're back to where we were.
(emphasis added)
Pump the brakes there, digicert. Price is based on an annual subscription. CA costs will actually go up an infinitesimal amount, but they’re already nearly zero to begin with. Running a CA has got to be one of the easiest rackets in the world.
Given that the overarching rationale here is security, what made them stop at 47 days? If the concern is _actually_ security, allowing a compromised cert to exist for a month and a half is I guess better than 398 days, but why is 47 days "enough"?
When will we see proposals for max cert lifetimes of 1 week? Or 1 day? Or 1 hour? What is the lower limit of the actual lifespan of a cert and why aren't we at that already? What will it take to get there?
Why are we investing time and money in hatching schemes to continually ratchet the lifespan of certs back one more step instead of addressing the root problems, whatever those are?
Yeah not sure about that one...
I've read the basics on Cloudflare's blog and MDN. But at my job, I encountered a need to upload a Let's encrypt public cert to the client's trusted store. Then I had to choose between Let's encrypt's root and intermediate certs, between key types RSA and ECDSA. I made it work, but it would be good to have an idea of what I'm doing. For example why root RSA key worked even though my server uses ECDSA cert. Before I added the root cert to a trusted store, clients used to add fullchain.pem from the server and it worked too — why?
- If you're looking for a concise (yet complete) guide: https://www.feistyduck.com/library/bulletproof-tls-guide/
- OpenSSL Cookbook is a free ebook: https://www.feistyduck.com/library/openssl-cookbook/
- SSL/TLS and PKI history: https://www.feistyduck.com/ssl-tls-and-pki-history/
- Newsletter: https://www.feistyduck.com/newsletter/
- If you're looking for something comprehensive and longer, try my book Bulletproof TLS and PKI: https://www.feistyduck.com/books/bulletproof-tls-and-pki/
In another instance, to connect to a server, only the root certificate is present in the trust store. Does that mean encryption can be performed with just the root certificate?
Yep, that me.
Thanks for the blog post!
No idea why the RSA root worked even though the server used an ECDSA cert. Maybe look into the recent cross-signing shenanigans that Let's Encrypt had to pull to extend support for very old Android versions.
If the information is relatively unchanged and the details well documented why not ask questions to fill in the gaps?
The Socratic method has been the best learning tool for me and I'm doubling my understanding with the LLMs.
It seems to me like compromised keys are rare. It also seems like 47 days is low enough to be inconvenient, but not low enough to prevent significant numbers of people from being compromised if there is a compromised key.
It's not only key mismanagement that is being mitigated. You also have to prove more frequently that you have control of the domain or IP in the certificate.
In essence it brings a working method of revocation to WebPKI.
> but not low enough to prevent significant numbers of people from being compromised if there is a compromised key.
Compared to a year?
That doesn't particularly matter; if someone takes over the domain but doesn't have a leaked key, they can't sign requests for the domain with my cert. It takes a leaked key for this to turn into a vulnerability.
On the other hand, anyone who owns the domain can get a perfectly valid cert any time, no need to exploit anything. And given that nobody actually looks at the details of the cert owner in practice, that means that if you lose the domain, the new owner is treated as legit. No compromises needed.
The only way to prevent that is to pin the cert, which this short rotation schedule makes harder, or pin the public key and be very careful to not regenerate your keys when you submit a new CSR.
In short: Don't lose your domain.
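For reference, "pin the public key" above boils down to hashing the certificate's SPKI. A sketch, with a throwaway self-signed cert generated only so there's something to hash (point the x509 command at your real cert instead):

```shell
# Throwaway key and self-signed cert, purely for demonstration
openssl genpkey -algorithm EC -pkeyopt ec_paramgen_curve:P-256 -out key.pem
openssl req -x509 -key key.pem -subj "/CN=example.com" -days 30 -out cert.pem

# SPKI sha256 pin: extract the public key, DER-encode it, hash, base64
openssl x509 -in cert.pem -pubkey -noout \
  | openssl pkey -pubin -outform der \
  | openssl dgst -sha256 -binary \
  | openssl base64
```

As long as every renewal CSR reuses key.pem, this pin comes out identical for each renewed certificate, which is exactly the "be very careful not to regenerate your keys" caveat.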
> Compared to a year?
Typically these kinds of things have an exponential dropoff, so most of the exploited folks would be soon after the compromise. I don't think that shortening to this long a period, rather than (say) 24h would make a material difference.
But, again, I'm also not sure how many people were compromised via anything that this kind of rotation would prevent. It seems like most exploits depend on someone either losing control over the domain (again, don't do that; the current issuance model doesn't handle that), or just being phished via a valid cert on an unrelated domain.
Do you have concrete examples of anyone being exploited via key mismanagement (or not proving often enough that they have control over a domain)?
This reminds me a bit of trying to get TLS 1.2 support in browsers before the revelation that the older versions (especially SSL3) were in fact being exploited all the time directly and via downgrading. Since practically nobody complained (out of ignorance) and, at the time, browsers didn't collect metrics and phone home with them (it was a simpler time), there was no evidence of a problem. Until there was massive evidence of a problem because some people bothered to look into and report it. Journalism-driven development shouldn't be the primary way to handle computer security.
It does, if someone gets temporary access, issues a certificate and then keeps using it to impersonate something. Now the malicious actor has to do it much more often, significantly increasing chances of detection.
At least, that's what the rules say. In practice CAs have a really hard time saying no to a multi-week extension because a too-big-to-fail company running "critical infrastructure" isn't capable of rotating their certs.
Short cert duration forces companies to automate cert renewal, and with automation it becomes trivial to rotate certs in an acceptable time frame.
I suppose technically you can get approximately the same thing with 24-hour certificate expiry times. Maybe that's where this is ultimately heading. But there are issues with that design too. For example, it seems a little at odds with the idea of Certificate Transparency logs having a 24-hour merge delay.
It also lowers the amount of time it'd take for a top-down change to compromise all outstanding certificates. (Which would seem paranoid if this wasn't 2025.)
The real reason was Snowden. The jump in HTTPS adoption after the Snowden leaks was a virtual explosion; and set HTTPS as the standard for all new services. From there, it was just the rollout. (https://www.eff.org/deeplinks/2023/05/10-years-after-snowden...)
(Edit because I'm posting too fast, for the reply):
> How do you enjoy being dependent on a 3rd party (even a well intentioned one) for being on the internet?
Everyone is reliant on a 3rd party for the internet. It's called your ISP. They also take complaints and will shut you down if they don't like what you're doing. If you are using an online VPS, you have a second 3rd party, which also takes complaints, can see everything you do, and will also shut you down if they don't like what you're doing; and they have to, because they have an ISP to keep happy themselves. Networks integrating with 3rd party networks is literally the definition of the internet.
Let's Encrypt... Cloudflare... useful services right? Or just another barrier to entry because you need to set up and maintain them?
https://letsencrypt.org/2025/02/20/first-short-lived-cert-is...
> The ballot argues that shorter lifetimes are necessary for many reasons, the most prominent being this: The information in certificates is becoming steadily less trustworthy over time, a problem that can only be mitigated by frequently revalidating the information.
> The ballot also argues that the revocation system using CRLs and OCSP is unreliable. Indeed, browsers often ignore these features. The ballot has a long section on the failings of the certificate revocation system. Shorter lifetimes mitigate the effects of using potentially revoked certificates. In 2023, CA/B Forum took this philosophy to another level by approving short-lived certificates, which expire within 7 days, and which do not require CRL or OCSP support.
Personally I don't really buy this argument. I don't think the web sites that most people visit (especially highly-sensitive ones like for e-mail, financial stuff, a good portion of shopping) change or become "less trustworthy" that quickly.
And I would argue that MITMing communications is a lot harder for (non-nation-state) attackers than compromising a host, so trust compromise is a questionable worry.
By that logic, we don't really need certificates, just TOFU.
It works fairly well for SSH, but that tends to be a more technical audience. But doing a "Always trust" or "Always accept" are valid options in many cases (often for internal apps).
How "should" it work? Is there a known-better way?
I am aware of them.
As someone in the academic sphere, with researchers SSHing into (e.g.) HPC clusters, this solves nothing for me from the perspective of clients trusting servers. Perhaps it's useful in a corporate environment where the deployment/MDM can place the CA in the appropriate place, but not with BYOD.
Issuing CAs to users, especially if they expire is another thing. From a UX perspective, we can tie password credentials to things like on-site Wifi and web site access (e.g., support wiki).
So SSH certs certainly have use-cases, and I'm happy they work for people, but TOFU is still the most useful in the waters I swim in.
It was suggested by someone else: I commented TOFU works for SSH, but is probably not as useful for web-y stuff (except for maybe small in-house stuff).
Personally I'm somewhat sad that opportunistic encryption for the web never really took off: if folks connect on 80, redirect to 443 if you have certs 'properly' set up, but even if not, do an "Upgrade" or something to move to HTTPS. Don't necessarily indicate things are "secure" (with the little icon), but scramble the bits anyway: no false sense of security, but make it harder to tap glass in bulk.
And a significant part of the security is concentrated in the way Certificate Authorities validate domain ownership (the so-called challenges).
Next, maybe clients could run those challenges directly, instead of relying on certificates? For example, when connecting to a server, the client sends two unique values, and the server must create a DNS record <unique-val-1>.server.com with the value <unique-val-2>. The client checks that the record was created, and thus the server has proven it controls the domain name.
Auth through DNS, that's what it is. We will just need to speed up the DNS system.
The reason an attacker can't MITM Let's Encrypt (or similar ACME issuers) is because they request the challenge-response from multiple locations, making sure a simple MITM against them doesn't work.
A fully DNS-based "certificate setup" already exists: DANE, but that requires DNSSEC, which isn't widely used.
But that's just a nuance that could be fixed. I elaborate a little more on what I mean in https://news.ycombinator.com/item?id=43712754
Thx for pointing to DANE.
I would be more concerned about the number of certificates that would need to be issued and maintained over their lifecycle - which now scales with the number of unique clients challenging your server (or maybe I misunderstand, and maybe there aren't even certificates any more in this scheme).
Not to mention the difficulties of assuring reasonable DNS response times and fresh, up-to-date results when querying a global eventually consistent database with multiple levels of caching...
[0] https://letsencrypt.org/docs/challenge-types/#dns-01-challen...
I am not saying this scheme is really practical currently.
That's just an imaginary situation that came to mind, illustrating the increased importance of the domain ownership validation procedures used by Certificate Authorities. Essentially, the security now comes down to domain ownership validation.
Also a correction: the server doesn't simply put <unique-val-2>; it puts sha256(<unique-val-2> || '.' || <fingerprint of the public key of the account>).
Yes, the ACME protocol uses account keys. The private key signs requests for new certs, and the public key fingerprint during domain ownership validation confirms that the challenge response was intended for that specific account.
I am not suggesting ACME can be trivially broken.
I just realized that the risk of TLS certs breaking is not just the risk of public-key crypto being broken; it also includes the risks of the domain ownership validation protocols.
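Incidentally, the corrected construction a couple of comments up is essentially what ACME's real DNS-01 challenge does (RFC 8555): the TXT record value is base64url(sha256(token || '.' || account-key-thumbprint)). A sketch with made-up token/thumbprint values:

```shell
# Both values are illustrative stand-ins, not real challenge material
TOKEN="evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCtY2hVXA4"
THUMB="9jg46WB3rR_AHD-EBXdN7cBkH1WOu0tA3M9fm21mqTI"

# sha256 over token "." thumbprint, then base64url without padding
TXT_VALUE=$(printf '%s.%s' "$TOKEN" "$THUMB" \
  | openssl dgst -sha256 -binary \
  | openssl base64 | tr '+/' '-_' | tr -d '=')

echo "_acme-challenge TXT value: $TXT_VALUE"
```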
Side note: I wonder how much pressure this puts on providers such as LetsEncrypt, especially with the move to validate IPs. And more specifically IPv6…
I don't disagree with you that it should be super common. But it's surprisingly not in many businesses. Heck, Okta (nominally a large security company) still sends out notifications every time they change certificates and publishes a copy of their current correct certs on GitHub: https://github.com/okta/okta-pki - How do they do the actual rotation? No idea, but... I'd guess it's not automatic with that level of manual notification/involvement. (Happy to be proven wrong though).
Self-signed custom certs also do that. But those are demonized.
SSL also tries to define an IP-DNS certification of ownership, kind of.
There's also a distinct difference between 'this cert expired last week', 'this cert doesn't exist', and a MITM attack. Expired? Just give a warning, not a scare screen. MITM? Sure, give a big scary OHNOPE screen.
But, yeah, 47 days is going to wreak havoc on networks and weird devices.
The only real alternative to checking who signed a certificate is checking the certificate's fingerprint hash instead. With self-signed certificates, this is the only option. However, nobody does this. When presented with an unknown certificate, people will just blindly trust it. So self-signed certificates at scale are very susceptible to MITM. And again, you're not going to know it happened.
Encryption without authentication prevents passive snooping but not active and/or targeted attacks. And the target may not be you, it may be the other side. You specifically might not be worth someone's time, but your bank and all of its other customers too, probably is.
OCSP failed. CRLs are not always being checked. Shorter expiry largely makes up for the lack of proper revocation. But expiration must consequently be treated as no less severe than revocation.
Homoglyph attacks are a thing. And I can pay $10 for a homoglyph name. No issues. I can get a webserver on a VM and point DNS at it. From there I can get a Let's Encrypt cert. Use Nginx to proxy to the real domain. Install a mailserver and send out spoofed mails. You can even set up SPF/DKIM/DMARC and have a complete attested chain.
And its all based on a fake DNS, using nothing more than things like a Cyrillic 'o'.
And, the self-signed issue is also what we see with SSH. And it mostly just works too.
TLS with Web PKI is a significantly more secure system when dealing with real people, and centralized PKI systems in general are far more scalable (but not hardly perfect!) compared to decentralized trust systems, with common SSH practices near the extreme end of decentralized. Honestly, the general state of SSH security can only be described as "working" due to a lack of effort from attackers more than the hygienic practices of sysadmins.
Homoglyph attacks are a real problem for HTTPS. Unfortunately, the solutions to that class of problem have not succeeded. Extended Validation certificates ended up a debacle; SRP and PAKE schemes haven't taken off (yet?); and it's still murky whether passkeys are here to stay. And a lot of those solutions still boil down to TOFU since, essentially, they require you to have visited the site at least once before. Plus, there remain fallback options that are easier to phish against. Maybe these problems would have been more solvable if DNSSEC succeeded, but that didn't happen either.
I'm not entirely sure about how effective passkeys would be against homoglyph-assisted MITM though. Assuming you've visited the legitimate domain before and established your passkey at that time, your passkey wouldn't be selected by the browser for the fake domain. But if you started with the fake domain, and logged in through it using a non-passkey method (including first sign up or lost-credential recovery), then I would think the attacker could just enroll his own passkey on your behalf. Now, if we layered passkeys on top of mTLS, then we could almost entirely eliminate the MITM risk!
You've lost me at mTLS here. At some point it starts to feel like we're advocating for security protocols just so we can fit them all in somewhere.
Ultimately, I think the practical solution to homoglyphs is in the presentation layer, whether it be displaying different scripts in different ways, warning when scripts are mixed, or some other kind of UX rather than protocol change. The only protocol change I can think of to address them would be to go back to ASCII only (and even that is more of a presentation issue since IDNs are just Punycode).
[1]: https://googlechrome.github.io/chromerootprogram/#321-applic...
I've scoured the CA/Browser Forum BRs and ballots, Chrome Root Store policies, and CCADB policies, and can't find mention of this coming restriction.
In case it helps - am the CTO of a large CA, so (un)fortunately aware of what's happening and when.
> All corresponding unexpired and unrevoked subscriber (i.e., TLS server authentication) certificates issued on or after June 15, 2026 MUST include the extendedKeyUsage extension and only assert an extendedKeyUsage purpose of id-kp-serverAuth.
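A quick way to check whether a given cert would already satisfy that serverAuth-only EKU rule is to inspect its extendedKeyUsage. A sketch; the self-signed cert here is generated only for demonstration, and `-addext` needs OpenSSL 1.1.1+:

```shell
# Demo cert asserting only the serverAuth EKU purpose
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem \
  -subj "/CN=example.com" -addext "extendedKeyUsage=serverAuth" \
  -days 30 -out cert.pem

# Inspect the extendedKeyUsage extension on any cert
openssl x509 -in cert.pem -noout -text | grep -A1 "Extended Key Usage"
```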
Thanks! I've done this and it works very well. I had a Digital Ocean droplet so used their DNS service for the challenge domain.
https://letsencrypt.org/docs/challenge-types/#dns-01-challen...
It also occurred to me that there's nothing(?) preventing you from concurrently having n valid certificates for a particular hostname, so you could just enroll distinct certificates for each host. Provided the validation could be handled somehow.
The other option would maybe be doing DNS-based validation from a single orchestrator and then pushing that result onto the entire fleet.
It is a bit funny that LetsEncrypt has non-expiring private keys for their accounts.
I use this to sync users between small, experimental cluster nodes.
Some notes I have taken: https://notes.bayindirh.io/notes/System+Administration/Synci...
> Get certificates for remote servers - The tokens used to provide validation of domain ownership, and the certificates themselves can be automatically copied to remote servers (via ssh, sftp or ftp for tokens). The script doesn't need to run on the server itself. This can be useful if you don't have access to run such scripts on the server itself, e.g. if it's a shared server.
Do people really backup their https certificates? Can't you generate a new one after restoring from backup?
Edit: it’s configured under Trigger -> Outbound Probe -> “SSL Certificate Minimum Expiration Duration”
I tend to have secondary scripts that checks if the cert in certbots dir is newer than whatever is installed for a service, and if so install it. Some services prefer the cert in certain formats, some services want to be reloaded to pick up a new cert etc, so I put that glue in my own script and run it from cron or a systemd timer.
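That glue can be as small as a shell function like the following sketch; the paths, service, and reload command are all illustrative:

```shell
# Install a renewed cert only when the source is newer than what the
# service currently uses, then reload. Meant to run from cron/systemd.
deploy_cert() {
    src="$1"; dst="$2"; reload_cmd="$3"
    if [ ! -e "$dst" ] || [ "$src" -nt "$dst" ]; then
        cp "$src" "$dst" && chmod 600 "$dst"
        eval "$reload_cmd"
    fi
}

# e.g.: deploy_cert /etc/letsencrypt/live/example.com/fullchain.pem \
#                   /etc/nginx/example.com.pem "systemctl reload nginx"
```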
Right now Let's Encrypt recommends renewing your 90d certificates every 60 days, which means that there is a 30 day window between recommended to renew and expiry. This feels relatively comfortable to me. A long vacation may be longer than 30 days but it is rare and there is probably other maintenance that you should be doing in this time (although likely routine like security updates rather than exceptional like figuring out why your certificate isn't renewing).
So if 47 days ends up meaning renew every 17 days and still have that 30 day buffer, I would be quite happy. But what I fear is that they will recommend (and set rate limits based on) renewing every 30 days with a 17 day buffer, which is getting a little short for comfort IMHO. While big organizations will have 24h oncall and medium organizations will have many business hours to figure it out, it sucks for individuals who want to go away for a few weeks without worrying about debugging their certificate renewal until they get home.
"Therefore, the Lunar Bureau of the United Nations Peace Keeping Force DataWatch has created the LINK, the Lunar Information Network Key. There are currently nine thousand, four hundred and two Boards on Luna; new Boards must be licensed before they can rent lasercable access. Every transaction--every single transaction--which takes place in the Lunar InfoNet is keyed and tracked on an item-by-item basis. The basis of this unprecedented degree of InfoNet security is the Lunar Information Network Key. The Key is an unbreakable encryption device which the DataWatch employs to validate and track every user in the Lunar InfoNet. Webdancers attempting unauthorized access, to logic, to data, to communications facilities, will be punished to the full extent of the law."
from The Long Run (1989)
Your browser won't access a site without TLS; this is for your own protection. TLS certificates are valid for one TCP session. All certs are issued by an organization reporting directly to a national information security office; if your website isn't in compliance with all mandates, you stop getting certs.
I could have probably done more with Let's Encrypt automation to stay with my old VPS, but given that all my professional work is with AWS, it's really less mental work to drop my old VPS.
Times they are a changing
Or just pay Amazon, I guess. Easier than thinking.
It goes from a "rather nice to have" to "effectively mandatory".
Oh yes, vendors will update their legacy NAS/IPMI/whatever to include certbot. This change will have the exact opposite effect - expired self signed certificates everywhere on the most critical infrastructure.
Nope. People will create self-signed certs and tell people to just click "accept".
Dev guys think everything is solvable via code, but hardware guys know this isn't true. Hardware is stuck in fixed lifecycles, and firmware is not updated by vendors unless it has to be, and in many cases updated poorly. No hardware I've ever come across that supports SSL/TLS (and most do nowadays) offers any automation capability for updating certs. In most cases, certs are manually - and painfully - updated with esoteric CLI cantrips that require dancing while chanting to some ancient I.T. god for mercy, because the process is poorly (if at all) documented and often broken. No API call or middleware is going to solve that problem unless the manufacturer puts it in. In particular, load balancers are some of the worst at cert management, and remember that not everyone uses F5 - there are tons of other cheaper and popular alternatives, most of which are atrocious at security configuration management. It's already painful enough to manage certs in an enterprise, and this 47-day lifecycle is going to break things. Hardware vendors are simply incompetent and slow to adapt to security changes. And not everyone is 100% in the cloud - most enterprises are only partially in that pool.
Perhaps the new requirements will give them additional incentives.
The larger issue is actually our desire to deprecate cipher suites so rapidly though, those 2-3 year old ASICs that are functioning well become e-waste pretty quickly when even my blog gets a Qualys “D” rating after having an “A+” rating barely a year ago.
How much time are we spending on this? The NSA is literally already in the walls.
I've been in the cert automation industry for 8 years (https://certifytheweb.com) and I do still hear of manual work going on, but the majority of stuff can be automated.
For stuff that genuinely cannot be automated (are you sure you're sure) these become monthly maintenance tasks, something cert management tools are also now starting to help with.
We're planning to add tracking tasks for manual deployments to Certify Management Hub shortly (https://docs.certifytheweb.com/docs/hub/), for those few remaining items that need manual intervention.
There are no ready-made tools available to automate such deployments. Especially if a certificate must be the same for each of the hosts, fingerprint included. Having a single, authoritative certificate for a domain and its wildcard subdomains deployed everywhere is much simpler to monitor. It does not expose internal subdomains in certificate transparency logs.
Unfortunately, the organizations (and persons) involved in these decisions do not provide such tools in advance.
> The information in certificates is becoming steadily less trustworthy over time, a problem that can only be mitigated by frequently revalidating the information.
This is patently nonsensical. There is hardly any information in a certificate that matters in practice, except for the subject, the issuer, and the expiration date.
> Shorter lifetimes mitigate the effects of using potentially revoked certificates.
Sure, and if you're worried about your certificates being stolen and not being correctly revoked, then by all means, use a shorter lifetime.
But forcing shorter lifetimes on everyone won't end up being beneficial, and IMO will create a lot of pointless busywork at greater expense. Many issuers still don't support ACME.
I fairly regularly get cert-expired problems because the admin is doing it as yak shaving for a secondary hobby.
Even certbot got deprecated, so my IRC network has to use some janky shell scripts to rotate TLS… I’m considering going back to traditional certs because I geo-balance the DNS which doesn’t work for letsencrypt.
The issue is actually that I have multiple domains handled multiple ways and they all need to be letsencrypt capable for it to work and generate a combined cert with SAN’s attached.
A semi-distributed (intercity) Kubernetes cluster can reasonably change its certificate chain every week, but it needs an HSM if it's done internally.
Otherwise, for a website, once or twice a year makes sense if you don't store anything snatch-worthy.
You don't say. Why are the defaults already 90 days or less then?
90 days makes way more sense for the "average website" which handles members, has a back office exposed to the internet, and whatnot.
Why do you think all the average web sites have to handle members?
Forums? Nope. Blogging platforms? Nope. News sites? Nope. Wordpresss powered personal page? Nope. Mailing lists with web based management? Nope. They all have members.
What doesn’t have members or users? Static webpages. How much of the web is a completely static web page? Negligible amount.
So most of the sites have much more to protect than meets the eye.
Neglecting the independent web is exactly what led to it dying out and the Internet becoming a corporate, algorithm-driven analytics machine. Making it harder to maintain your own, independent website, which does not rely on any 3rd party to host or update, will just make fewer people bother.
Web is a bit different than you envision/think.
Why can't this site just upload HTML files to their web server?
> Eyeball optimization: Different titles, cutting summaries where it piques interest most, some other A/B testing...
Any non-predatory practices you can add to the list?
I'm not a web developer, and I don't do anything similar on my pages, blog posts, whatever, so I don't know.
The only non-predatory way to do this is to be honest/transparent and not pull tricks on people.
However, I think A/B testing can be used in a non-predatory way in UI testing, by measuring negative feedback between two new versions, assuming that you genuinely don't know which version is better for the users.
1. Journalists shall be able to write new articles and publish them ASAP, possibly from remote locations.
2. Eyeball optimization: Different titles, cutting summaries where it piques interest most, some other A/B testing... So you need a data structure which can be modified non-destructively and autonomously.
Plus many more things, possibly. I love static webpages as much as the next small-web person, but we have small-web, because the web is not "small" anymore.
But on a more serious note, can someone more familiar with these standards and groups explain the scope of TLS certificate they mean for these lifetime limits?
I assume this is only server certs and not trust root and intermediate signing certs that would get such short lifetimes? It would be a mind boggling nightmare if they start requiring trust roots to be distributed and swapped out every few weeks to keep software functioning.
To my gen X internet pioneer eyes, all of these ideas seem like easily perverted steps towards some dystopian "everything is a subscription" access model...
The article notes this explicitly: the goal here is to reduce the number of online CA connections needed. Reducing certificate lifetimes is done explicitly with the goal of reducing the Web PKI's dependence on OCSP for revocation, which currently has the online behavior you're worried about here.
(There's no asymptotic benefit to extremely short-lived certificates: they'd be much harder to audit, and would be much harder to write scalable transparency schemes for. Something around a week is probably the sweet spot.)
"When they voiced objection, Captain Black replied that people who cared about security would not mind performing all the security theatre they had to. To anyone who questioned the effectiveness of the security theatre, he replied that people who really did owe allegiance to their employer would be proud to take performative actions as often as he forced them to. The more security theatre a person performed, the more secure he was; to Captain Black it was as simple as that."
I need it more frequently to get more buffer time in case there is an error, as I tend to ignore the error e-mails for multiple weeks due to my fatigue from handling various kinds of certificates.
Personally I also have an HTTP mirror for my more important projects when availability is more important than security of the connection.
There should be one change, from 365 to 47 days. This industry doesn't need constant changes, and that one change will force everyone to automate renewals anyway.
For example, EV certs had the green bar, which was a soft way to promote their presence/use over the normal ones. That bar started as strong evidence in the URL box and lost that look over time.
Something like that lets the owner decide, and maybe users push for their use because it feels more secure, rather than the CA pushing it directly.
They are purchased to provide encryption. Nobody checks the details of a cert and even if they did they wouldn't know what to look for in a counterfeit anyway.
This is just another gatekeeping measure to make standing up, administering, and operating private infrastructure difficult. "Just use Google / AWS / Azure instead."
There are lots of issues with trust and social and business identities in general, but for the purpose of encryption, the problem can be simplified to checking of the host name (it's effectively an out of band async check that the destination you're talking to is the same destination that independent checks saw, so you know your connection hasn't been intercepted).
You can't have effective TLS encryption without verifying some identity, because you're encrypting data with a key that you negotiate with the recipient on the other end of the connection. If someone inserts themselves into the connection during key exchange, they will get the decryption key (key exchange is cleverly done that a passive eavesdropper can't get the key, but it can't protect against an active eavesdropper — other than by verifying the active participant is "trusted" in a cryptographic sense, not in a social sense).
This is where the disconnect comes in. You and I know that the green icon doesn't prove identity. It proves certificate validity. But that's not what this is "sold as" by the browser or the security community as a whole. I can buy the domain Wаl-Mart right now and put a certificate on it that says Wаl-Mаrt and create the conditions for that little green icon to appear. Notice that I used U+0430 instead of the letter "a" that you're used to.
And guess what... The identity would match and pass every single test you throw at it. I would get a little green icon in the browser and my certificate would be good. This attack fools even the brightest security professionals.
So you see, Identity isn't the value that people expect from a certificate. It's the encryption.
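The homoglyph trick above takes only a few lines to demonstrate (the domain is purely illustrative):

```python
# Cyrillic U+0430 renders like Latin 'a' but is a different code point,
# so the spoofed name is a completely different domain.
real = "walmart.com"
spoofed = "w\u0430lmart.com"        # the 'a' here is U+0430, not U+0061

assert spoofed != real              # equal to the eye, unequal to DNS
assert len(spoofed) == len(real)

# Punycode makes the difference visible: the non-ASCII label encodes to
# an ASCII form that no longer resembles the brand.
label = spoofed.split(".")[0]
print(label.encode("punycode"))
```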
Users will allow a fake cert with a green checkmark all day. But a valid certificate with a yellow warning is going to make people stop and think.
I care that when I type walmart.com, I'm actually talking to walmart.com. I don't look at the browser bar or symbols on it. I care what my bookmarks do, what URLs I grab from history do, what my open tabs do, and what happens when I type things in.
Preventing local DNS servers from fucking with users is critical, as local DNS is the weakest link in a typical setup. They're often run by parties that must be treated as hostile - basically whenever you're on public wifi. Or hell, when I'm using my own ISP's default configuration. I don't trust Comcast to not MitM my connection, given the opportunity. I trust technical controls to make their desire to do so irrelevant.
Without the identity component, any DNS server provided by DHCP could be setting up a MitM attack against absolutely everything. With the identity component, they're restricted to DoS. That's a lot easier to detect, and gets a lot of very loud complaints.
So no, nobody will ever look at a certificate.
When I look at them, as a security professional, I usually need to rediscover where the fuck they moved the cert details again in the browser.
I said exactly the words I meant.
> I don't look at the browser bar or symbols on it. I care what my bookmarks do, what URLs I grab from history do, what my open tabs do, and what happens when I type things in.
Without the identity component, I can't trust that those things I care about are insulated from local interference. With the identity component, I say it's fine to connect to random public wifi. Without it, it wouldn't be.
That's the relevant level. "Is it ok to connect to public wifi?" With identity validation, yes. Without, no.
You don’t mean “Walmart”, but 99% of the population thinks you do.
Is it OK to trust this for anything important? Probably not. Is it OK to type your credit card number in? Sure. You have fraud protection.
Identity is the only purpose that certificates serve. SSL/TLS wouldn't have needed certificates at all if the goal was purely encryption: key exchange algorithms work just fine without either side needing keys (e.g. the key related to the certificate) ahead of time.
But encryption without authentication is a Very Bad Idea, so SSL was wisely implemented from the start to require authentication of the server, hence why it was designed around using X.509 certificates. The certificates are only there to provide server authentication.
"example.com" is an identity just like "Stripe, Inc"[1]. Just because it doesn't have a drivers license or article of incorporation, doesn't mean it's not an identity.
[1] https://web.archive.org/web/20171222000208/https://stripe.ia...
>This is just another gatekeeping measure to make standing up, administering, and operating private infrastructure difficult. "Just use Google / AWS / Azure instead."
Certbot is trivial to set up yourself, and deploying it in production isn't so hard that you need to be "Google / AWS / Azure" to do it. There's plenty of IaaS/PaaS services that have letsencrypt, that are orders of magnitude smaller than those hyperscalers.
I get that there are some fringe cases where it’s not possible but for the rest - automate and forget.
But there's also security implications: https://news.ycombinator.com/item?id=43708319
Do I need to update certbot in all my servers? Or would they continue to work without the need to update?
If you can't make this happen, don't use WebPKI and use internal PKI.
It's great to be environmentally conscious, but if reducing carbon emissions is your goal, complaining about this is a lot like saying that people shouldn't run marathons, because physical activity causes humans to exhale more CO2.
We are effectively talking about the entire world wide web generating multiple highly secure cryptographic key pairs every 47 days. That is a lot of CPU cycles.
Also you not picking up on the Futurama quote is disappointing.
We aren't cracking highly secure key pairs. We're making them.
On my computer, to create a new 4096-bit key takes about a second, in a single thread. For something I now have to do fewer than 8 times per year. On a 16-core CPU with a TDP of 65 watts, we can estimate that this took 0.0011 watt-hours.
Yes, there are a lot of websites, close to a billion of them. No, this still is not some onerous use of electricity. For the whole world, this is an additional usage of a bit over 9000 kWh annually. Toss up a few solar panels and you've offset the whole planet.
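The arithmetic above is easy to check. The per-site renewal count, site count, and TDP figures are the commenter's own assumptions:

```python
# One 4096-bit keygen: ~1 s on one thread of a 16-core, 65 W TDP CPU.
watts_per_thread = 65 / 16                  # rough per-core power share
wh_per_keygen = watts_per_thread * 1 / 3600  # 1 second, in watt-hours
print(round(wh_per_keygen, 4))               # ~0.0011 Wh

# ~1 billion sites, each renewing 8 times/year under a 47-day maximum.
sites, renewals_per_year = 1_000_000_000, 8
kwh_per_year = sites * renewals_per_year * wh_per_keygen / 1000
print(round(kwh_per_year))                   # a bit over 9000 kWh
```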
but you think it would take a decade for the entire internet to use as much power as a single AI video?
That one AI video used about 100kWh, so about four days worth of HTTPS for the whole internet.
As for certs... maybe at the start it was hard, but it's really quite easy to host things online, with a valid certificate. There are many CDN services like Cloudflare which will handle it for you. There are also application proxies like Traefik and Caddy which will get certs for you.
Most people who want their own site today, will use Kinsta or SquareSpace or GitHub pages any one of thousands of page/site hosting services. All of whom have a system for certificates that is so easy to use, most people don't even realize it is happening.
Every single thing you mentioned is plugged in to the tier-1 surveillance brokers. I am talking plain files on single server shoved in a closet, or cheap VPS. I don't often say this but I really don't think you “get” it.
Your attitude is so dismissive to the general public. We should be encouraging people to learn the little bits they want to learn to achieve something small, and instead we are building this ivory tower all-or-nothing stack. For what, job security? Bad mindset.
CAs and web PKI are a bad joke. There's too many ways to compromise security, there's too many ways to break otherwise-valid web sites/apps/connections, there's too many organizations that can be tampered with, the whole process is too complex and bug-prone.
What Web PKI actually does, in a nutshell, is prove cryptographically that at some point in the past, there was somebody who had control of either A) an e-mail address or B) a DNS record or C) some IP space or D) some other thing, and they generated a certificate through any of these methods with one of hundreds of organizations. OR it proves that they stole the keys of such a person.
It doesn't prove that who you're communicating with right now is who they say they are. It only proves that it's someone who, at some point, got privileged access to something relating to a domain.
That's not what we actually want. What we actually want is to be assured this remote host we're talking to now is genuine, and to keep our communication secret and safe. There are other ways to do that, that aren't as convoluted and vulnerable as the above. We don't have to twist ourselves into all these knots.
I'm hopeful changes like these will result in a gradual catastrophe which will push the industry to actually adopt simpler, saner, more secure solutions. I proposed one years ago but nobody cares because I'm just some guy on the internet and not a company with a big name. Nothing will change until the people with all the money and power make it happen, and they don't give a shit.
Everyone in the CA/B should be fired from their respective employers, and we honestly need to wholesale plan to dump PKI by 2029 if we can't get a resolution to this.
It's really not that hard to automate renewals and monitor a system's certificate status from a different system, just in case the automation breaks and for things that require manual renewal steps.
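As a sketch of the "monitor from a different system" idea: a small Python check that pulls the leaf certificate's expiry over a live TLS handshake. The 15-day threshold is a made-up example to tune for your environment:

```python
import socket
import ssl
from datetime import datetime, timezone

def parse_not_after(not_after: str) -> datetime:
    # ssl.getpeercert() renders dates like 'Jun  1 12:00:00 2025 GMT'.
    dt = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    return dt.replace(tzinfo=timezone.utc)

def cert_expiry(host: str, port: int = 443) -> datetime:
    """Fetch the leaf certificate's notAfter from a live TLS handshake."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return parse_not_after(tls.getpeercert()["notAfter"])

# Usage (run from a machine other than the one doing the renewing):
#   left = cert_expiry("example.com") - datetime.now(timezone.utc)
#   if left.days < 15:   # illustrative threshold for 47-day certs
#       ...page a human: automated renewal may have silently failed
```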
I get that it's harder in large organisations and that not everything can be automated yet, but you still have a year before the certificate lifetime goes down to 200 days, which IMO is pretty conservative.
With a known timeline like this, customers/employees have ammunition to push their vendors/employers to invest into automation and monitoring.
None of the platforms which I deal with will likely magically support automated renewal in the next year. I will likely spend most of the next year reducing our exposure to PKI.
Smaller organizations dependent on off the shelf software will be killed by this. They'll probably be forced to move things to the waiting arms of the Big Tech cloud providers that voted for this. (Shocker.) And it probably won't help stop the bleeding.
And again, there's no real world security benefit. Nobody in the CA/B has ever discussed real world examples of threats this solves. Just increasingly niche theoretical ones. In a zero cost situation, improving theoretical security is good, but in a situation like this where the cost is real fragility to the Internet ecosystem, decisions like this need to be justified.
Unfortunately the CA/B is essentially unchecked power, no individual corporate member is going to fire their representatives for this, much less is there a way to remove everyone that made this incredibly harmful decision.
This is a group of people who have hammers and think everything is a nail, and unfortunately, that includes a lot of ceramic and glass.
This will be painful for people in the short term, but in the long term I believe it will make things more automated, more secure, and less fragile.
Browsers are the ones pushing for this change. They wouldn't do it if they thought it would cause people to see more expired certificate warnings.
> Unfortunately the CA/B is essentially unchecked power, no individual corporate member is going to fire their representatives for this, much less is there a way to remove everyone that made this incredibly harmful decision.
Representatives are not voting against the wishes/instructions of their employer.
Unfortunately the problem is likely too removed from understanding for employers to care. Google and Microsoft do not realize how damaging the CA/B is, and probably take the word of their CA/B representatives that the choices that they are making are necessary and good.
I doubt Satya Nadella even knows what the CA/B is, much less that he pays an employee full-time to directly #### over his entire customer base and that this employee has nearly god-level control over the Internet. I have yet to see an announcement from the CA/B that represented a competent decision that reflected the reality of the security industry and business needs, and yet... nobody can get in trouble for it!
If an organisation ignores all those options, then I suppose they should keep doing it manually. But at the end of the day, that is a choice.
Maybe they'll reconsider now that the lifetime is going down or implement their own client if they're that scared of third party code.
Yeah, this will inconvenience some of the CA/B participant's customers. They knew that. It'll also make them and everyone else more secure. And that's what won out.
The idea that this change got voted in due to incompetence, malice, or lack of oversight from the companies represented on the CA/B forum is ridiculous to me.
How many of those are first-party/vetted by Microsoft? I'm not sure you understand how enterprises or secure environments work; we can't just download whatever app someone found on the Internet that solves the issue.
Certify The Web has a 'Microsoft Partner' badge. If that's something your org values, then they seem worth looking into for IIS.
I can find documentation online from Microsoft where they use YARP w/ LettuceEncrypt, Caddy, and cert-manager. Clearly Microsoft is not afraid to tell customers about how to use third party solutions.
Yes, these are not fully endorsed by Microsoft, so it's much harder to get approval for. If an organisation really makes it impossible, then they deserve the consequences of that. They're going to have problems with 397 day certificates as well. That shouldn't hold the rest of the industry back. We'd still be on 5 year certs by that logic.
Still, oppressive states or hacked ISPs can perform these attacks on small scales (e.g. individual orgs/households) and go undetected.
For a technology the whole world depends on for secure communication, we shouldn't wait until we detect instances of this happening. Taking action to make these attacks harder, more expensive, and shorter lasting is being forward thinking.
Certificate Transparency and Multi-Perspective Issuance Corroboration are examples of innovations that improved security without bothering people.
Problem is, the benefits of these improvements are limited if attackers can keep using the stolen keys or misissued certificates for 5 years (plus potentially whatever the DCV reuse limit is).
Next time a DigiNotar, Debian weak keys, or heartbleed -like event happens, we'll be glad that these certs exit the ecosystem sooner rather than later.
I'm sure you have legit reasons to feel strongly about the topic and also that you have substantive points to make, but if you want to make them on HN, please make them thoughtfully. Your argument will be more convincing then, too, so it's in your interests to do so.
The ballot is entirely expected.
The whole industry has been moving in this direction for the last decade.
So there is not much to say.
Except that if you waited until the last moment, well, you will have to hurry. (Non)actions have consequences :)
I'm glad about this decision because it'll hammer down a bit on those resisting, those who still have a human perform yearly renewals. Let's see how stupid it can get.
Are the security benefits really worth making anything with a valid TLS certificate stop working if it is air-gapped or offline for 48 days?
> CAs and certificate consumers (browsers) voted in favour of this change. They didn't do this because they're incompetent but because they think it'll improve security.
They're not incompetent and they're not "evil", and this change does improve some things. But the companies behind the top level CA ecosystem have their own interests which might not always align with those of end users.
CAs have now implemented MPIC. This may have thwarted some attacks, but those attackers still hold valid certificates today and can request new certificates for over a year without any fresh domain control validation being performed.
BGP hijackings have been uncovered in the last 5 years and MPIC does make this more difficult. https://en.wikipedia.org/wiki/BGP_hijacking
New security standards should come into effect much faster, both for fixes against attacks we know about today and for new ones that are discovered and mitigated in the future.
CAs used to be able to use WHOIS for DCV. The fact that this option was taken away from everyone is good. It's the same with this change, and you have plenty of time to prepare for it.
I thought we had CT for this.
> CAs used to be able to use WHOIS for DCV. The fact that this option was taken away from everyone is good.
Fair.
> It's the same with this change, and you have plenty of time to prepare for it.
Not so sure on this one, I think it's basically a result of a security "purity spiral". Yes, it will achieve better certificate hygiene, but it will also create a lot of security busywork that could be better spent in other parts of the ecosystem that have much worse problems. The decision to make something opt-in mandatory forcibly allocates other people's labour.
--
The maximum cert lifetime will gradually go down. The CA/B forum could adjust the timeline if big challenges are uncovered.
I doubt they expect this to be necessary. I suspect that companies will discover that automation is already possible for their systems and that new solutions will be developed for most remaining gaps, in part because of this announced timeline.
This will save people time in the long run. It is forced upon you, and that's frustrating, but you do have nearly a year before the first change. It's not going down to 47 days in one go.
I'm not saying that no one will renew certificates manually every month. I do think it'll be rare, and even more rare for there to be a technical reason for it.
"The goal is to minimize risks from outdated certificate data, deprecated cryptographic algorithms, and prolonged exposure to compromised credentials. It also encourages companies and developers to utilize automation to renew and rotate TLS certificates, making it less likely that sites will be running on expired certificates."
I'm not even sure what "outdated certificate data" could be. The browser by default won't negotiate a connection with an expired certificate.
Agree.
> According to the article:
Thanks, I did read that, it's not quite what I meant though. Suppose a security engineer at your company proposes that users should change their passwords every 49 days to "minimise prolonged exposure from compromised credentials" and encourage the uptake of password managers and passkeys.
How to respond to that? It seems a noble endeavour. To prioritise, you would want to know (at least):
a) What are the benefits - not mom & apple pie and the virtues of purity but as brass tacks - e.g: how many account compromises do you believe would be prevented by this change and what is the annual cost of those? How is that trending?
b) What are the cons? What's going to be the impact of this change on our customers? How will this affect our support costs? User retention?
I think I would have a harder time trying to justify the cert lifetime proposal than the "ridiculously frequent password changes" proposal. Sure, it's more hygienic but I can't easily point to any major compromises in the past 5 years that would have been prevented by shorter certificate lifetimes. Whereas I could at least handwave in the direction of users who got "password stuffed" to justify ridiculously frequent password changes.
The analogy breaks down in a bad way when it comes to evaluating the cons. The groups proposing to decrease cert lifetimes bear nearly none of the costs of the proposal, for them it is externalised. They also have little to no interest in use cases that don't involve "big cloud" because those don't make them any money.
In the case of OV/EV certificates, it could also include the organisation's legal name, country/locality, registration number, etc.
Forcing people to change passwords increases the likelihood that they pick simpler, algorithmic password so they can remember them more easily, reducing security. That's not an issue with certificates/private keys.
Shorter lifetimes on certs is a net benefit. 47 days seems like a reasonable balance between not having bad certs stick around for too long and having enough time to fix issues when you detect that automatic renewal fails.
The fact that it encourages people to prioritise implementing automated renewals is also a good thing, but I understand that it's frustrating for those with bad software/hardware vendors.
Or certificates which were revoked
No, they did it because it reduces their legal exposure. Nothing more, nothing less.
The goal is to get the rotation time low enough that certificates will rotate out before any legal procedure to stop the rotation can kick in.
This does very little to improve security.
Lower the lifetime of certs does mean that orgs will be better prepared to replace bad certs when they occur. That's a good thing.
More organisations will now take the time to configure ACME clients instead of trying to convince CA's that they're too special to have their certs revoked, or even start embarrassing court cases, which has only happened once as far as I know.
Theories that involve CAs, Google, Microsoft, Apple, and Mozilla having ulterior motives and not considering potential downsides of this change are silly.
And also, it probably won't avoid problems. Because yes, the goal is automation, and a couple weeks ago I was trying to access a site from an extremely large infrastructure security company which rotates their certificates every 24 hours. Their site was broken and the subreddit about their company was full of complaints about it. Turns out automated daily rotation just means 365 more opportunities for breakage a year.
Even regular processes break, and now we're multiplying the breaking points... and again, at no real security benefit. There’s like... never ever been a case where a certificate leak caused a breach.
This is fundamentally a skill issue. If a human can replace the certificate, so can a machine. Write a script.
Now? It's a spaghetti of politics and emotional warfare. Grown adults who can't handle being told that they might not be up to the task and it's time to part ways. If that's the honest truth, it's not "mean," just not what that person would like to hear.
47 days might seem like an arbitrary number, but it’s a simple cascade:
* 200 days = 6 maximal months (184 days) + 1/2 of a 30-day month (15 days) + 1 day wiggle room
* 100 days = 3 maximal months (92 days) + ~1/4 of a 30-day month (7 days) + 1 day wiggle room
* 47 days = 1 maximal month (31 days) + 1/2 of a 30-day month (15 days) + 1 day wiggle room
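The "maximal months" figures check out; a quick verification of the arithmetic:

```python
# Month lengths in a non-leap year. (A 29-day February doesn't change any
# of these maxima: the longest runs all avoid February anyway.)
LENGTHS = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

def max_span(n: int) -> int:
    """Longest possible run of n consecutive calendar months, in days."""
    return max(sum(LENGTHS[(s + i) % 12] for i in range(n)) for s in range(12))

assert max_span(6) == 184 and max_span(3) == 92 and max_span(1) == 31
assert max_span(6) + 15 + 1 == 200   # 200-day tier
assert max_span(3) + 7 + 1 == 100    # 100-day tier
assert max_span(1) + 15 + 1 == 47    # 47-day tier
```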
Keep up the good work! ;-)
perverse incentives indeed.