
6 years seems like an incredibly long time to me. It's a bit of a shame, but it looks like the industry really didn't come out to support the idea.

2 years on the other hand seems incredibly short? Am I wrong?



6 years is way too short for some use cases, like phones

sadly it's not rare that proprietary drivers for phone hardware are basically written once and then hardly maintained, and in turn only work with a small number of Linux kernel versions.

so for some hardware it might mean that by the time it gets released you have noticeably less than 6 years of kernel support

then, between starting to build a phone and releasing it, 2 years might easily pass

so that means from release you can provide _at most_ 4 years of kernel security patches etc.

but dates tend not to align that neatly, so maybe it's just 3 years

but then you sell your phone for more than one year, right?

in which case less than 2 years can pass between the customer buying your product and you no longer providing kernel updates

that is a huge issue

I mean, think about it: if someone is a bit tight on money and buys a slightly older phone, they probably aren't aware that their phone stops getting security updates in a year or so (3 years of software support since release, but it's a 2-year-old phone), at which point using it is a liability.

EDIT: The answer, in my opinion, is not an even longer LTS but properly maintaining drivers, and in turn being able to do full kernel updates
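The timeline sketched above can be made concrete with a small calculation (all dates are hypothetical, purely to illustrate how the 6-year window shrinks by the time it reaches an end user):

```python
from datetime import date

# All dates hypothetical, purely to illustrate the shrinking window.
lts_release = date(2023, 1, 1)    # LTS kernel version is released
lts_eol = date(2029, 1, 1)        # upstream support ends 6 years later
phone_release = date(2025, 1, 1)  # ~2 years of phone development on that kernel
purchase = date(2026, 6, 1)       # customer buys the phone 1.5 years after launch

def years_left(since: date) -> float:
    """Upstream kernel support remaining, in years, from a given date."""
    return (lts_eol - since).days / 365.25

print(f"at phone release: {years_left(phone_release):.1f} years of support left")
print(f"at purchase:      {years_left(purchase):.1f} years of support left")
```

With these made-up dates the buyer is already down to roughly 2.6 years of upstream support on the day they buy the phone.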


That's one of the reasons Purism went out of their way to upstream all drivers for the Librem 5. The distribution can upgrade kernels pretty much whenever it wants.

The downside can be painful though: sourcing components with such properties is hard. You basically have to cherry-pick them from all over the world because they're so few and far between.

That's one of the reasons why the Librem 5 is so thick and consumes so much energy.


I don't really get how this continues.

vendor A sells a part with a driver in the mainline kernel.

vendor B doesn't, so on top of the part you have to spend time bodging an untested driver into a custom kernel.

as a buyer why would you go for the second?


Contributing a driver to mainline Linux takes significant time and effort up front. You can't just throw anything over the Linux fence and expect the already-overworked kernel maintainers to keep tending to it for the next few decades.

Slapping together a half-working out-of-tree kernel module and calling it a day is not only much cheaper; it also buys you the time you need to write the new driver for next year's hot shit SoC that smartphone vendors demand.


Precisely.

What would you want as a buyer? A driver that has already demonstrated that it is good enough to be included in the kernel, or one of unknown quality that may need extra work to integrate with the kernel?

I get why suppliers don't want to do the work. I just don't understand why there isn't enough value-add for buyers to justify a premium for the benefits of a mainline driver, and/or why sellers don't try to capture that premium.


I don't think buyers are actually going to pay enough for the sellers to justify the added cost. Remember that the buyers have to pass their costs on to their end customers (e.g. consumer phone purchasers), and those people won't accept all phones becoming $50 more expensive or whatever.

Also consider the cultural context. The culture of hardware manufacturers is much different than that of software vendors. They don't view software as a product, but more a necessary evil to make their hardware work with existing software infrastructure. They want to spend as little time on it as possible and then move onto the next thing.

I'm not endorsing this status quo, merely trying to explain it.


The way it seems to me is that a driver takes X hours to make, integrate, etc. It's cheaper for the vendor to spend those X hours, rather than each individual purchaser each spending those X hours.
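That cost-sharing argument can be put into toy numbers (all figures hypothetical): the vendor spends X hours once, versus every buyer duplicating that work downstream.

```python
# Toy model, all figures hypothetical: a driver takes X hours to write
# and integrate, and N buyers design the part into their products.
X = 500   # hours to produce a working, integrated driver
N = 40    # buyers who would otherwise each bodge their own

vendor_upstreams_once = X    # one mainline driver, shared by everyone
every_buyer_diy = X * N      # the same work duplicated N times

print(f"vendor upstreams once: {vendor_upstreams_once} hours industry-wide")
print(f"each buyer does it:    {every_buyer_diy} hours industry-wide")
```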


The easy answer is that buyers largely don't care. Most people get their phones from their ISP, so that's the main target. They get a data plan that comes bundled with a phone and pay it off over 2 years. After 2 years they get a new plan with a new phone.

Caring about long-term maintenance isn't what most buyers do. Going SIM-only on your data plan is out of the ordinary.

Also in my experience people largely pick their phones based on the surface level hardware rather than the long-term reliability. Hence why Apple keeps putting fancier cameras into every iPhone even though I'm pretty sure a good chunk of customers don't need a fancy camera. Heck, just getting a phone that fits in my hand was a struggle because buyers somehow got convinced that bigger phone = better phone and now most smartphones on the market are just half-size tablets.

That trend at least seems to be somewhat reversing though.


The trend is sadly not reversing fast enough. Apple already discontinued their line of slightly too big phones (mini series), and now they only sell oversized phablets. I might not have viable iOS-based hardware options when I upgrade in 2-3 years, and I'm not comfortable switching to an operating system made by an adtech company. I do hope they go back to smaller sizes before then. Kind of baffling to me how Apple otherwise puts a lot of effort into accessibility, but their main line of phones are awkward and uncomfortable to hold even for a fully able-bodied person with average size hands.


I think by “buyer” they mean the phone manufacturer buying parts


I agree, but consider that the buyer must also consider what the end-customer cares about. The buyer is not going to pay the chip manufacturer extra for mainlined (or at least open source) drivers unless their end-customers are asking for that (since those costs will be passed on to the customer). And outside of niche products like Librem's, the vast majority of customers don't even know about chipset drivers, let alone care.


Sadly, far too often, software support simply never enters the picture in sourcing decisions. Back when I was privy to this process at an OEM, the only factors that mattered were:

1. Hit to the BOM (i.e. cost); and

2. Suppliability (i.e., can we get enough pieces, by the time we need them, preferably from fewer suppliers).

In the product I was involved in building (full OS, from bootloader to apps), I was lucky that the hardware team (separate company) was willing to base their decisions on my inputs. The hardware company would bear the full brunt of BOM costs, but without software the hardware was DOA and wouldn't even go to manufacturing. This symbiotic relationship, I think, is what made it necessary for them to listen to our inputs.

Even so, I agreed software support wasn't a super strong input because:

1. There's more room for both compromises and making up for compromises, in software; and

2. Estimating level of software support and quality is more nuanced than just a "Has mainline drivers?" checkbox.

For example, RPi 3B vs. Freescale iMX6. The latter had complete mainline support (for our needs) but the former was still out-of-tree for major subsystems. The RPi was cheaper. A lot cheaper.

I okayed RPi for our base board because:

1. Its out-of-tree kernel was kept up-to-date with mainline with a small delay, and would have supported the next LTS kernel by the time our development was expected to finish (a year);

2. Its out-of-tree code was quite easy (almost straightforward) to integrate into the Gentoo-based stack I wanted to build the OS on; and

3. I was already up-and-running with a prototype on RPi with ArchLinuxARM while we were waiting for iMX6 devkits to be sourced. If ArchLinuxARM could support this board natively, I figured it wouldn't be hard to port it to Gentoo; turned out Gentoo already had built-in support for its out-of-tree code.

Of course, not every sourcing decision was as easy as that. I did have to write a driver for an audio chip because its mainline driver did not support the full range of features the hardware did. But even in that case, the decision to go ahead with that chip was only made after I was certain that we could write and maintain said driver.


Yup, exactly. I last worked in this field in 2009, and BOM cost (tempered with component availability) was king. This was also a time when hardware was much less capable, so they usually ran something like vxWorks (or, ::shudder::, uClinux). Building the cheapest product that could get to market fastest (so as to beat competitors to the latest WiFi draft standard) was all that mattered.

Your Raspberry Pi example is IMO even more illustrative than you let on. I'll reiterate that even that platform is not open and doesn't have a full set of mainlined drivers, after a decade of incredibly active development, by a team that is much more dedicated to openness than most other device manufacturers. Granted, they picked a base (ugh, Broadcom) that is among the worst when it comes to documentation and open source, but I think that also proves a point: device manufacturers don't have a ton of choice, and need to strike a balance between openness and practical considerations. The Raspberry Pi folks had price and capability targets to go with their openness needs, and they couldn't always get everything they wanted.


Because you don't have much choice, and each choice has trade offs. If you pick the part from vendor A, you get the mainlined driver, but maybe you get slower performance, or higher power consumption, or a larger component footprint that doesn't work with your form factor.

And most vendors are like vendor B because they're leading the pack in terms of performance, power consumption, and die size (among other things) and have the market power to avoid having to do everything their customers want them to do.

Still, some headway has been made: Google and Samsung have been gradually getting some manufacturers (mainly Qualcomm) to support their chips for longer. It's been a slow process, though.

As for mainlining: it's a long, difficult process, and the vendor-B types just don't care, and mostly don't need to care.


Because the buyers are consumer hardware companies. This means a) there's an expectation that software works just like their hardware: they put it together once and then throw it onto the market. Updating or supporting it is not a particular consideration, unless they re-engineer something significantly to reduce costs. and b) the bean-counters and hardware engineers have more sway than the software engineers: lower cost, better battery life, features, etc on paper will win out over good software support over the life of the product.


because you don't care to give the customer longer-term software support

many consumers are not aware of the danger an unmaintained/non-updatable software stack introduces, or that their (mainly) phone is unmaintained

so the phone vendor buys from B, because A is often just not an option (not available for the hardware you need), and then subtly and mostly unnoticeably dumps the problem on the user

there are some exceptions, e.g. Fairphone is committed to quite long-term software support, so they try to use vendor As, or vendor Bs which have a contractual long-term commitment to driver maintenance

but in the space of phones (and implicitly IoT using phone parts) sadly sometimes (often) the only available option for the hardware you need is a vendor B, where any long-term driver maintenance contract is just not affordable if you are not operating at the scale of a larger phone vendor

E.g. as far as I remember, Fairphone had to do some reverse engineering/patching to continue support for the FP3 until today (and, well, I think another 2 or so years), and I vaguely remember that they were somewhat lucky that some open source driver work for some parts was already ongoing and got some support from some of the vendors. For the FP5 they managed to have a closer cooperation with Qualcomm, allowing them to provide a 5-year extended warranty and target software support for 8 years (since release of the phone).

So without phone producers either being legally forced to provide a certain amount of software support (e.g. 3 years after the last first-party sale), or at least being visibly transparent upfront about the amount of software support they do provide and also informing their users when the software isn't supported anymore, I don't expect to see any larger industry-wide changes there.

Though some countries are considering laws like that.


Not a lot of software requires a bleeding edge kernel. If vendor B has a superior chip at a viable price it makes sense to go with them.


> so for some hardware it might mean that by the time it gets released you have noticeably less than 6 years of kernel support

Or they could just upgrade the kernel to a newer version. There's no rule that says the phone needs to run the same major kernel version for its entire lifetime. The issue is that if you buy a sub-€100 phone, how exactly is the manufacturer supposed to finance the development and testing of newer versions of the operating system? It might be cheap enough to just apply security fixes to an LTS kernel, but moving and re-validating drivers for hardware that may not even be manufactured anymore quickly becomes unjustifiably expensive for anything but flagship phones.


they often can't

proprietary drivers for phone parts are often not updated to support newer kernels


That's the point: these drivers should get updated. Obviously the low-level component manufacturers don't want to do this, but perhaps we need to find a way to incentivize them to do so. And if that fails, to legally force them.


> sadly it's not rare that proprietary drivers for phone hardware are basically written once and then hardly maintained, and in turn only work with a small number of Linux kernel versions.

These manufacturers should be punished by the lack of LTS and the need to upgrade, precisely because of that laziness and incompetence.


Why not blame the kernel developers for being lazy and incompetent for not offering a stable API for driver developers to use?

You don't see Windows driver developers having their drivers broken by updates every few months.


> You don't see Windows driver developers having their drivers broken by updates every few months.

At the cost of Windows kernel development being a huge PITA because effectively everything in the driver development kit becomes an ossified API that can't ever change, no matter if there are bugs or there are more efficient ways to get something done.

The Linux kernel developers can do whatever they want to get the best (most performant, most energy saving, ...) system because they don't need to worry about breaking someone else's proprietary code. Device manufacturers can always do the right thing and provide well-written modules to upstream - but many don't because (rightfully) the Linux kernel team demands good code quality which is expensive AF. Just look at the state of most non-Pixel/Samsung code dumps, if you're dedicated enough you'll find tons of vulnerabilities and code smell.


>no matter if there are bugs or there are more efficient ways to get something done.

Stability is worth it. After 30 years of development, the kernel developers should be able to come up with a solid design for a stable driver API that they don't expect to radically change in a way they can't support.


Stability is worth it to you. Others can hold different opinions and make different decisions, and until and unless you -- or someone like minded -- becomes the leader of a major open source kernel project used in billions of devices, the opinions of those others will rule the day.


All developers love stability of the platform they are building on. Good platforms recognize this.


Because the kernel developers are not beholden to chipset manufacturers who want to spend the shortest possible time writing a closed-source driver and then forget about it. They're there to work on whatever they enjoy, as well as whatever their (paying) stakeholders care about.

The solution to all this is pretty simple: release the source to these drivers. I guarantee members of the community -- or, hell, companies who rely on these drivers in their end-products -- will maintain the more popular/generally-useful ones, and will update them to work with newer kernels.

Certainly the ideal would be to mainline these drivers in the first place, but that's a long, difficult process and I frankly don't blame the chipset manufacturers for not caring to go through it.

Also, real classy to call the people who design and build the kernel that runs on billions of devices around the world "lazy and incompetent". Methinks you just don't know what you're talking about.


Chipset providers are not interested in showing off their trade secrets to the entire world.

>Also, real classy to call the people who design and build the kernel that runs on billions of devices around the world "lazy and incompetent"

I never did that. The parent comment called manufacturers that, and I suggested that the kernel developers are at some fault.


It's less the kernel developers than a certain subset of companies providing proprietary-only drivers.

Most Linux kernel changes are limited enough that updating a driver is not an issue, IFF you have the source code.

That is how a huge number of drivers are maintained in-tree; if major changes to all the drivers were needed every time anything changed, they wouldn't really get anything done.

Only if you don't have the source code is driver breakage an issue.

But Linux's approach to proprietary drivers was always that there is no official support when there is no source code.


Why stop at kernel space? You might as well break all of user space every so often. If everything is open source it shouldn't be an issue to fix all the broken Linux software, right?

People don't want you to break their code.


> You might as well break all of user space every so often. If everything is open source it shouldn't be an issue to fix all the broken Linux software, right?

What an uninformed take.

The Linux kernel has a strict "don't break userspace" policy, because they know that userspace is not released in lock step with the kernel. Having this policy is certainly a burden on them to get things right, but they've decided the trade offs make it worth it.

They have also chosen that the trade offs involved in having a stable driver API are not worth it.

> People don't want you to break their code.

Then maybe "people" (in this case device manufacturers who write crap drivers) should pony up the money and time to get their drivers mainlined so they don't have to worry about this problem. The Linux kernel team doesn't owe them anything.


>because they know that userspace is not released in lock step with the kernel

They also know out-of-tree drivers are not released in lock step with the kernel.

>They have also chosen that the trade offs involved in having a stable driver API are not worth it.

It sucks for driver developers to not have a stable API, regardless of whether the kernel developers think it's worth it or not.

>should pony up the money and time to get their drivers mainlined so they don't have to worry about this problem.

They don't want to reveal their trade secrets.

>The Linux kernel team doesn't owe them anything.

Which is why Google ended up offering the stable API to driver developers.


It happens all the time with Glibc, GLib, GTK, Qt, etc.


Those breaking changes are often years apart, which is much better than what the kernel currently offers.


> sadly it's not rare that proprietary drivers for phone hardware are basically written once and then hardly maintained, and in turn only work with a small number of Linux kernel versions.

The worst part of all of this is: Google could go and mandate that the situation improves by using the Google Play Store license - only grant it if the full source code for the BSP is made available and the manufacturer commits to upstreaming the drivers to the Linux kernel. But they haven't, and so the SoC vendors don't feel any pressure to move to sustainable development models.


Google realistically can't do this. "The SoC vendors" is basically Qualcomm (yes, I know there are others, but if Qualcomm doesn't play ball, none of it matters).

Google has tried to improve the situation, and has made some headway: that's why they were able to get longer support for at least security patches for the Pixel line. Now that they own development of their own SoC, they're able to push that even farther. But consider how that's panned out: they essentially hit a wall with Qualcomm, and had to take ownership of the chipset (based off of Samsung's Exynos chip; they didn't start from scratch) in order to actually get what they want when it comes to long-term support. This should give you an idea of the outsized amount of power Qualcomm has in this situation.

Not many companies have the resources to do what Google is doing here! Even Samsung, who designed their own chipset, still uses Qualcomm for a lot of their products, because building a high-performance SoC with good power consumption numbers is really hard. Good luck to most/all of the smaller Android manufacturers who want more control over their hardware.

(Granted, I'm sure Google didn't decide to build Tensor on their own solely because of the long-term support issues; I bet there were other considerations too.)


After your message I started to think about an LTS with 6 years of support, but with a new LTS every 3 years.


6 years may seem like a long time, but check out what the competition is doing. Oracle is supporting Solaris 10 for 20 years, 11.4 for 16 years (23 years if you lump it in with 11.0). HP-UX 11i versions seem to get around 15 years of support.

It really depends on what you're doing, a lot of industries may not need such long-term support. 6 years seems like a happy medium to me, but then again I'm not the one supporting it. I expect the kernel devs would be singing a different tune if people were willing to pay for that extended support.

https://upload.wikimedia.org/wikipedia/en/timeline/276jjn0uo...


Are those really competition anymore?

They're just legacy now IMO and their long term support requirements are a result of this, companies that haven't gotten rid of them by now aren't likely to do it any time soon.

I hate seeing them go. I wasn't such a fan of Solaris, but I was of HP-UX. But its days are over. It doesn't even run on x86 or x64, and HP has been paying Intel huge money to keep Itanium on life support, which is running out now if it hasn't already.

At least Solaris had an Intel port but it too is very rare now.


There's a lot of people who haven't gotten rid of old Linux systems these days too. RHEL 6, from 2010, is still eligible for extended support.


There's still a decent population of RHEL 5 systems in the wild. Last year I was offered an engagement (turned down for a few reasons) to help a company upgrade several hundred systems from RHEL 5 to RHEL 6 and start planning for a future rollout of RHEL 7.

Outside of tech focused companies, 10+ year old systems really are the norm.


> Outside of tech focused companies, 10+ year old systems really are the norm.

It's because outside of tech companies, nobody cares about new features. They care about things continuing to work. Companies don't buy software for the fun of exploring new versions, especially frustratingly pointless cosmetic changes that keep hitting their training budgets.

Many companies would be happy with RHEL5 or Windows XP today from a feature standpoint, if it weren't a security vulnerability.


The problem about "things continuing to work" is really that many security fixes require updated architecture too. This is really why it's so hard to do LTS. It's not only about wanting new features.


At megacorp (years ago) we were transitioning to CentOS 7 (from 6) and just starting to wind down our 32-bit windows stuff in AWS. I'm sure there are plenty of legacy Linux systems out there, but I wonder how many folks are actually paying for them.

CentOS/RHEL 6 was already pretty long in the tooth, but being the contrarian I am, I was not looking forward to the impending systemd nonsense.


> At megacorp (years ago) we were transitioning to CentOS 7 (from 6)...

Today at work, we finally got the OK to stop supporting CentOS 7, for new releases.


It’s a nightmare for developers if you get stuck with infrastructure on such dinosaurs and need to deploy a fresh new project. Anything made in the last 3-5 years likely won’t build, due to at least OpenSSL, even if you get it to otherwise compile. Docker may not run. Postgres may not run. Go binaries? Yeah, those also have issues. It’s like putting yourself into a time capsule with unbreakable windows: you can see how much progress has been made and how much easier your life could’ve been, but you’re stuck here.

Old systems are stable, but there’s a fine line between that and stagnation. Tread carefully.


That is a common workday in enterprise consulting.

Most of our .NET workloads are still on .NET Framework, and only now are we starting to use Java 17 for new projects, and only thanks to projects like Spring pushing for it.

Ah, and C++ will most likely be a mix of C++14 and C++17, for new projects.


What feature did they need in el7 that wasn't there in el9 ? What was their logic ?


That's 2 years of the upstream LTS kernel. I would expect that major Linux distributions such as Red Hat's RHEL and Canonical's Ubuntu would continue to do their extended patch cycles against one of the upstream snapshots, as they have done in the past. I think 2 years for upstream LTS is probably fine if the vendor patching methodology remains true. This also assumes that smaller distributions such as Alpine are more commonly used in very agile environments such as K8s, Docker Swarm, etc... Perhaps that is a big assumption on my part.


Depends on where the computer is at, I guess. On a desk, 6 years is a pretty long time. In an industrial setting, 6 years is not very long of a lifecycle.


consider that it's 6 years after release of the kernel version

so likely <5 years since release of the hardware in the US

likely <4 years since release of hardware outside of the US

likely <3 years since you bought the hardware

and if you buy older phones having only a year or so of proper security updates is not that unlikely

So for phones you would need something more like an 8- or 10-year LTS, or, well, proper driver updates for proprietary hardware. In which case 2 years can be just fine, because in general only drivers are affected by kernel updates.


It all comes down to the cycle. When do you enter that 6-year LTS? Is there a new LTS every year or every other year? If you enter 2 years in, or even 4 years in, how much support do you have left?

Do you jump LTS releases, so the one you are on is ending and there is a brand new one available? Or do you go to the one before, and have possibly only 2 or 4 years left...
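The cadence question can be sketched with made-up numbers: assume one new LTS per year, each supported for a fixed number of years after its own release. What's left depends entirely on how late you adopt.

```python
# Hypothetical cadence model: a new LTS ships every year, and each one
# is supported for `support_years` after its own release date.
def support_left(support_years: float, adopt_delay_years: float) -> float:
    """Support remaining if you adopt an LTS this long after it shipped."""
    return max(0.0, support_years - adopt_delay_years)

for support_years in (2, 6):
    for adopt_delay in (0, 2, 4):
        left = support_left(support_years, adopt_delay)
        print(f"{support_years}-year LTS, adopted {adopt_delay} years in: "
              f"{left:.0f} years left")
```

Under this toy model, a 2-year LTS adopted even 2 years in leaves nothing, while a 6-year LTS adopted 4 years in still leaves 2 years.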


What kind of breaking change would take longer than 2 years to deal with? The reality is that people wait out the entire 6-year period and then do the required months of work at the end. If you make the support period 2 years, they will just start working on it sooner.


Many, perhaps most, projects don't last 6 years, thus punting can save people a great deal of time.



