Plus it inevitably happens that teams will need something only available on newer platforms. With LTS OSes, this becomes an enormous PITA. Someone needs a newer version of some thing, that needs newer versions of system libraries, that turns out to use a new kernel feature, and now you’re in the hell of trying to figure all that out.
The only solution I have ever found is: upgrade everything constantly. Yes, it consumes time. But that time will get consumed anyway at some point down the line, and you’re better off doing it now. It’ll take less time, and you’ll be able to deliver the new features your teams are asking for.
The only time I’d ever consider an LTS anything at this point is for early-stage startups that just cannot pay that cost.
At least with k8s, sadly, this approach is often tough to pull off.
Some people appreciate having the option to pick and choose when the transition happens though.
> This bug was fixed in 2002, then regressed in 2005 and fixed again in 2007 in mainline, but in RHEL they botched the '02 and '05 backports, so '02 breaks it, but '05 doesn't. Now, let me tell you about secondary effects of patches in unrelated subsystems. We have three interesting "type A" examples introduced in ...
> ... and that's why it's completely acceptable to run half the Fortune 500 off of Linux 2.6.19-RHEL.... in 2025.
> What, you're running 2.6.20? What a hoot!!! We like to kid. You're not serious? You are? You work at the local nuclear plant?
> backs out of conference room slowly, then bolts down the hallway screaming "run for your lives!" once they have clear view of the elevator
The local nuclear power plant is hopefully not connected to the internet. It shouldn't be surprising that such systems run old software.
HN tends to think a lot about what's good practice for hyperscalers with massive profit margins and capital expenditures, and not as much about what's good for industries where margins are thin and downtime has massive real-world consequences.
What kind of software architecture do you suggest for, say, the embedded OS on a bus-sized, $200 million ASML EUV lithography tool? Do you really think it's a great idea to pull every update without recertification to the control systems of that nuclear power plant, or the system that renders MRIs at the radiologist?
I'm not saying let them rot for decades, but caution is prudent sometimes.
I don’t think those audiences make up a significant mass of HN readership, so my comments aren’t targeted at them. For your SaaS company with 20 services and growing? You will have less pain from constantly upgrading than you will from adopting LTS releases.
It depends how badly you fucked up in the first place really.
For my part, every time I have done a major OS distribution upgrade, roughly 75% of my time was spent fixing things that had been deployed into a highly fragile state to begin with, 20% went to direct upgrade-related issues (dependencies, config, behavior changes), and 5% to actually performing the upgrade.
Once you’re more than one major version back, you risk hitting all kinds of issues.
You're touching on the deeper truth here. Ultimately you are forced to touch production all the time for at least security updates. The question is whether you have people on staff who know the system, at all levels of the stack. If you do, then there is only pain by taking the LTS route, because you will have a fundamental divide between the people who need the latest features and the people who know the specific LTS release.
But if you have people who treat the lower level like a black box, and nobody on staff knows how it works, then LTS helps you derisk this black box that nobody knows by restricting the updates to security updates only and nothing that is supposed to break existing behavior. Then, when upgrading LTS releases, you treat the LTS upgrade as a project in and of itself (full QA, the whole nine yards) instead of some day-to-day maintenance running on autopilot.
Most SaaS stacks run by startups / grow-ups should have people familiar with every level of the stack that they're building on top of. If you don't, build on top of a managed service instead, and be familiar with the managed service's APIs instead.
Business software moves much slower than consumer software; I expect that LTS ends up sucking less when they can control the userspace around their application.
I’ve historically tracked n-1 or n-2 of k8s, and it is not infrequent that we can’t take a helm chart update until we update k8s.
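For anyone who wants to make that tracking mechanical, here is a minimal sketch, assuming the official `kubernetes` Python client, the `requests` library, and a working kubeconfig, that reports how far a cluster trails the latest upstream release so you notice before a chart update forces the issue:

    # Sketch: check how many minor versions the cluster trails the latest
    # upstream Kubernetes release. Assumes `pip install kubernetes requests`
    # and a reachable cluster; dl.k8s.io/release/stable.txt is the upstream
    # "latest stable" pointer.
    import re
    import requests
    from kubernetes import client, config

    def minor(version):
        # "v1.31.2" or "v1.28.3-eks-..." -> 31 / 28
        return int(re.match(r"v?(\d+)\.(\d+)", version).group(2))

    latest = requests.get("https://dl.k8s.io/release/stable.txt", timeout=10).text.strip()

    config.load_kube_config()
    ours = client.VersionApi().get_code().git_version

    behind = minor(latest) - minor(ours)
    print(f"cluster {ours}, upstream {latest}: {behind} minor release(s) behind")
    if behind > 2:
        print("outside the n-2 window -- expect charts to start requiring a newer cluster")

The n-2 cutoff is just the policy described above, not anything the cluster itself enforces.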
I haven't had a chance to look at Wiz, but I would assume they make their own kernel module that inspects things based on a very slim ABI requirement.
Their customer base (according to their page) is Fortune 500 companies, and most of those only move off unsupported releases when they are forced to, so it is unlikely they use new syscalls.
That being said, the one time I've had a bootloader broken badly enough that I ended up reading GRUB code was after a simple `apt dist-upgrade` on an LTS release, Ubuntu 22.04, so there's that.
These days I think my preference is for an immutable core plus userspace package management, or going all in on something like NixOS. I'd really love to see a distro that takes this approach to its logical conclusion using OverlayFS or something (but strictly not containers).
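To make the layering concrete (this is my own illustration, not NixOS and not any shipping distro; the paths are invented for the example), the core stays a read-only lower layer and all userspace package management lands in a writable upper layer, which is a single OverlayFS mount:

    # Sketch: read-only core + writable userspace as one OverlayFS mount.
    # Paths are illustrative; run as root on a kernel with overlayfs support.
    import subprocess

    LOWER = "/sysroot/core-image"      # immutable core, swapped wholesale on upgrade
    UPPER = "/var/lib/userspace"       # where the package manager is allowed to write
    WORK = "/var/lib/.overlay-work"    # overlayfs scratch dir (same fs as UPPER)
    MERGED = "/sysroot/merged"         # the root that userspace actually sees

    subprocess.run(
        ["mount", "-t", "overlay", "overlay",
         "-o", f"lowerdir={LOWER},upperdir={UPPER},workdir={WORK}",
         MERGED],
        check=True,
    )
    # Upgrading the core is then "replace lowerdir, reboot"; everything the
    # user installed lives in upperdir and carries over untouched.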
Personally, I go for a recurring calendar entry every few months that blocks off a few days to very carefully read all the patch notes before updating.
Yes, not a company, but I can imagine many more scenarios like this.
So, I hear them about "but I wanna run 1.32 on my PBX for-evvvvvah", but I don't think that choice is going to do what they expect. Hell, I'd gravely wonder whether containerd would survive a 12-year release cycle.
1: Chasing the API docs, one can see the 12-year-old control plane managed three resource types: Pods, ReplicationControllers, and Services https://web.archive.org/web/20141013032652/http://cdn.rawgit...
My gut reaction is that this is a bad idea. Anyone going this route should instead invest in creating a process to regularly upgrade Kubernetes and run it anywhere, as opposed to being tied down to Canonical. I know the judgement of a lot of great infra folks will be overridden by business folks in the name of stability though… sigh.
Upgrading takes work. Upgrades break things. Upgrades make training required constantly.
At some point, you get to thinking "we've got better things to do than upgrade kubernetes", lots of businesses do. It's the kind of software that can be "done".
If you've got something that works, you can indeed be happy with keeping it the same for a dozen years.
Here is a stylized post of what your experience is going to be when you ask for support from Canonical for Kubernetes in microk8s: https://github.com/canonical/microk8s/issues/3227#issuecomme...
They are really talented guys, but microk8s is a fiasco.
You don't.
A 12-year LTS means that over those 12 years you get security fixes.
After 12 years you don't upgrade; you do a full reinstall/replatform.
Not to mention all the shops shipping appliances that run on top of Kubernetes, like Chick-fil-A running a Kubernetes cluster in every restaurant.
I have no doubt there's a market here.
This is separate from the problem of needing to gradually roll pods (and their disks) to new nodes, for every node in the fleet, to accommodate a Kubernetes version rollout (i.e. with immutable nodes). That is an incredible pain for large database fleets and more or less unrealistic to carry out quickly for critical security fixes. Having LTS nodes with an LTS kubelet that you restart in place is much easier.
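For a sense of what "roll every node" means in practice, here is a stripped-down sketch of the per-node dance using the kubernetes Python client. It is roughly what kubectl drain does, with mirror pods, DaemonSets, PodDisruptionBudget retry loops, and volume detach waits all omitted, and the eviction body class can differ by client version:

    # Sketch: cordon one node and evict its pods so the node can be replaced
    # by one running the new Kubernetes version.
    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    def drain(node_name):
        # Cordon: stop the scheduler from placing new pods here.
        core.patch_node(node_name, {"spec": {"unschedulable": True}})

        # Evict every pod currently on the node.
        pods = core.list_pod_for_all_namespaces(
            field_selector=f"spec.nodeName={node_name}").items
        for pod in pods:
            eviction = client.V1Eviction(   # older clients use V1beta1Eviction
                metadata=client.V1ObjectMeta(
                    name=pod.metadata.name,
                    namespace=pod.metadata.namespace))
            core.create_namespaced_pod_eviction(
                pod.metadata.name, pod.metadata.namespace, eviction)

    drain("node-0001")   # repeat for every node, waiting for data to move each time

Multiply that by hundreds of nodes carrying databases on local disks and the appeal of restarting an LTS kubelet in place is obvious.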
If you go with an LTS version, you rely on the vendor to backport security patches and you block your engineers from exploring new features; and if you use third-party operators/controllers, good luck getting LTS support from all of them!
In the enterprise market, people have been dealing with weird proprietary software for decades. Now there is finally decent open source software with sane version skew and deprecation policies that strives to make upgrades less painful. If you still can't do this upgrade once every year or two, you likely won't be able to do it in 12 years either.
I imagine this might make sense if you are a non-tech company hoping to buy software from a vendor and never worry about it. But in that case, Kubernetes is the wrong abstraction layer for your business.
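On the deprecation-policy point above: most of the breakage from an every-year-or-two upgrade is discoverable before you start. A toy sketch of the kind of pre-upgrade scan I mean (the removed-API table is hand-picked and incomplete; the upstream deprecation guide is the source of truth):

    # Sketch: flag manifests whose apiVersion a target release no longer serves.
    # The table is a small hand-picked illustration, not a complete list.
    REMOVED_IN_MINOR = {
        "extensions/v1beta1/Ingress": 22,          # gone in 1.22
        "policy/v1beta1/PodDisruptionBudget": 25,  # gone in 1.25
        "batch/v1beta1/CronJob": 25,               # gone in 1.25
    }

    def check(obj, target_minor):
        """obj: a parsed manifest (dict with apiVersion/kind/metadata)."""
        key = f"{obj['apiVersion']}/{obj['kind']}"
        removed = REMOVED_IN_MINOR.get(key)
        if removed is not None and target_minor >= removed:
            name = obj.get("metadata", {}).get("name", "?")
            print(f"{name}: {key} is not served as of 1.{removed}; "
                  f"migrate it before moving to 1.{target_minor}")

    # Example: a legacy Ingress that would block an upgrade past 1.21.
    check({"apiVersion": "extensions/v1beta1", "kind": "Ingress",
           "metadata": {"name": "legacy-ingress"}}, target_minor=28)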