Linux: A solo developer is attempting to clean up 30 years of mess

The platform can be a lot faster and more efficient - if its source code were lighter.
I guess speed, efficiency and the amount of source code depend partly on the size and power of the kernel one wants, and on the applications. A kernel stripped down to only what it needs to run on particular hardware, and configured, for example, to do without an initramfs, often boots quicker. But booting is so fast these days that it may make no difference to most users, who I suspect mostly run stock kernels and stock initramfs images.

It has been said that fast modern hardware has encouraged sloppy rather than efficient code, because everything loads quickly anyway and the coder doesn't have to be so careful. I've seen that phenomenon, though the optimisation facilities in compilers can counter it to a degree. I think it's a complex matter. The current kernel coders have processes and peer reviews that appear quite strict when you read about them, but there's a historical load of code that may or may not be as disciplined as what is produced under present supervisory procedures. One needs to read it to know.
 


All code presented for inclusion in the OS is supposed to be vetted by other members of the team.

Maybe so, but there are only two people who sign off on changes going into the kernel.

Linus Torvalds and Matthew Garrett.

Matthew is also the creator of shim, which allows Linux distros to handshake with computers that already have Windows installed, so that Linux can be installed alongside it.

Wizard
 
At the risk of de-railing the thread, a lot of the recent slowdowns/performance degradation in the Linux kernel is down to all of the patches that were added to avoid numerous, and very serious, hardware level security flaws from the last few years.

Most notably, bugs like Spectre and Meltdown, which were actually CPU design flaws that required major patches to ALL operating system kernels (Windows, Linux, Mac, BSD, etc.). The patches/mitigations for those particular bugs were computationally expensive and incurred a huge performance hit on all affected platforms.
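
For anyone curious which of these mitigations their own kernel is applying, newer kernels publish a summary under /sys/devices/system/cpu/vulnerabilities. Here is a minimal sketch in Python, assuming a Linux kernel recent enough to expose that sysfs directory:

    #!/usr/bin/env python3
    # Print the kernel's own summary of CPU vulnerability mitigations.
    # Assumes a Linux kernel recent enough to expose this sysfs directory.
    from pathlib import Path

    vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")

    if not vuln_dir.is_dir():
        print("No vulnerabilities directory found; this kernel may be too old.")
    else:
        for entry in sorted(vuln_dir.iterdir()):
            print(f"{entry.name}: {entry.read_text().strip()}")

On an affected machine this typically prints lines like "spectre_v2: Mitigation: ...", which is a quick way to tell whether a slowdown you're seeing is down to these mitigations or to something else entirely.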

It’s a testament to the Linux developers that they even managed to quickly come up with a set of sane patches for these particularly nasty hardware bugs.

And it seems pretty insane that operating systems developers would have to fix a problem that was ultimately due to errors on the part of CPU designers/manufacturers. But I guess a factory recall of pretty much every modern CPU in existence would be pretty unrealistic?!

It also goes to show how creative and sophisticated hackers, attackers and security researchers are when it comes to discovering, exploiting and patching these flaws.

The good guys and bad guys aren’t just looking for shoddy code in software, they’re looking for any kind of exploitable flaw, at any level in the hardware/software stack. Many of these flaws are complex and not exactly easy to exploit; only extremely sophisticated and knowledgeable attackers could take advantage of them. But the fact that these flaws existed and were exploitable through proof-of-concept code meant that something needed to be done quickly, to prevent them from being exploited in the wild.

Getting back on topic, the idea of cleaning up the kernel’s many public interfaces has been around for a while. It’s not going to be an easy or quick task. But cleaning things up should make developers’ and maintainers’ lives easier in the long run, and it shouldn’t have any negative effect on performance.
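
To give a feel for the scale of that task, one rough measure is how many other headers each public header pulls in directly, since that include fan-out is a big part of what any clean-up has to untangle. Here is a small sketch in Python; the location of the unpacked kernel tree is an assumption, so point it wherever your sources live:

    #!/usr/bin/env python3
    # Rough measure of direct #include fan-out per header in a kernel tree.
    # The tree location below is an assumption; adjust it for your system.
    import re
    from pathlib import Path

    include_dir = Path("linux-5.15.3/include")
    include_re = re.compile(r"^\s*#\s*include\b")

    counts = {}
    for header in include_dir.rglob("*.h"):
        text = header.read_text(errors="ignore")
        counts[header] = sum(1 for line in text.splitlines() if include_re.match(line))

    # Show the ten headers with the most direct includes.
    for header, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True)[:10]:
        print(f"{n:4d}  {header.relative_to(include_dir)}")

Headers that directly include dozens of others drag long chains of definitions into every file that uses them, which is exactly the kind of coupling the clean-up is meant to reduce.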
 
It’s not going to be an easy or quick task.

I can share some experience. I'm not a very good programmer, and I wrote pretty much all the code (in a hodgepodge of languages, including BASIC) that my business used. By 1995 I already employed a couple of competent programmers. That's when we decided to rewrite everything in C (or maybe C++, but I'm pretty sure it was C). This required more competent programmers: some to maintain the existing code and some to write the new code.

The project ended up taking a few years, but the end product was one we could license and install on the client's infrastructure. We finished up in 1998, if memory serves.

Amusingly, today, much of that complex programming is now covered by plugins in the traffic modeling software.
 
In the 5.15.3 source I counted 5615 header files under the /linux-5.15.3/include directory (a quick way to reproduce that count is sketched at the end of this post). Although I'm a regular user of some of them when programming, I rarely inspect them; they just work. Given that this is the targeted area of the clean-up mentioned at the beginning of this thread, it looks like a lot of work for a single soul.

As for code written to work around CPU design problems, it's hard to imagine that kind of work ever disappearing; more likely it will rise and fall in step with CPU development, since, like most hardware, CPUs keep turning up new issues. One can hope that when a CPU drops some of its features, fixes its flaws, or grows new ones, the corresponding old code is removed as new, more efficient code is written. Code does get withdrawn from the kernel. An example is the withdrawal of the scroll-back function on consoles mentioned here:
Briefly, that functionality came to be regarded as less useful and, according to Linus, there was no one to maintain it. The inference, however, was that if someone were to put up their hand to maintain it, it could return. Of course this has only a tiny impact on kernel size, code bloat and dependency problems, but it does demonstrate that there is some oversight of these matters.
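
For what it's worth, here is the quick count mentioned above as a Python sketch; the path to the unpacked 5.15.3 tree is an assumption:

    #!/usr/bin/env python3
    # Count header files under the include/ directory of an unpacked kernel tree.
    # The path is an assumption; adjust it to wherever the source was extracted.
    from collections import Counter
    from pathlib import Path

    include_dir = Path("linux-5.15.3/include")
    headers = list(include_dir.rglob("*.h"))
    print(f"{len(headers)} header files under {include_dir}")

    # Break the total down by top-level subdirectory (linux/, uapi/, and so on).
    by_subdir = Counter(h.relative_to(include_dir).parts[0] for h in headers)
    for subdir, n in by_subdir.most_common():
        print(f"  {subdir}: {n}")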
 
