Managing bloat using container-like environments

Debian_SuperUser

I recently had to build a large project on Windows (don't worry, it's not another Windows vs. Linux rant or comparison). It had to install so many dependencies that it completely bloated and messed up my system. That was when I learned about Docker and immediately realized how cool and useful it is.

I am a guy who loves performance and neatness, and I really do hate any kind of bloat. Containers are a good way to manage bloat, but they are not very reliable themselves, not to mention really heavy; at least the bloat stays easy to manage.

If I really like containers, you might say I should go with atomic desktops and use Snaps and Flatpaks, but no. I favor performance over everything.

If I were to make an operating system (no joke, purely hypothetical), I would create a containerized system for installing packages, where each program gets its own workspace in which it can do whatever the hell it wants; permission would be needed to read or write anywhere else, and those locations would be recorded so the program can be cleanly uninstalled later. So notice that you don't actually need a full-blown container, just an isolated place for each program.

I don't have any idea how to set up something similar on Linux. Also, I am moving to Arch soon. Does anybody have ideas on how to accomplish this?

Actually, programs exist on Windows that track which files other programs write and where, and use that database to fully uninstall them later. Does such a program exist on Linux? It would not be as reliable as the containerized environment I described above, but I can't have everything.
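The closest do-it-yourself approximation I can think of uses nothing but coreutils: drop a timestamp marker right before installing or first running something, then ask find what appeared since (the marker path is arbitrary):

```shell
# Snapshot a timestamp before installing/running the program...
touch /tmp/install-marker
# ... install or run the program here ...
# ...then list every regular file under $HOME modified after the marker.
# (Point it at /etc, /opt, etc. as well if the install was system-wide.)
find "$HOME" -xdev -type f -newer /tmp/install-marker
```

Of course anything else that writes files in the meantime shows up too, so it's nowhere near as reliable as real isolation.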

Also, this is a MUCH bigger problem on Windows. Like 100x. Windows itself creates a lot more bloat than programs do. On Linux, things are much calmer. I wouldn't lose my mind the way I would on Windows if I couldn't achieve what I described, but just imagine using your Linux system actively for a year and having it still almost as clean as a fresh install. I would love that.
 


If I were to make an operating system (no joke, purely hypothetical), I would create a containerized system for installing packages, where each program gets its own workspace in which it can do whatever the hell it wants; permission would be needed to read or write anywhere else, and those locations would be recorded so the program can be cleanly uninstalled later. So notice that you don't actually need a full-blown container, just an isolated place for each program.

Flatpak and AppImage already do this at the application level; Podman, Docker and Kubernetes do it at the container level.

The problem with these: yes, some things are now isolated from your base operating system, but I have the bloat of dozens of little application containers (usually with duplicate libraries) filling up my hard drive.

I don't include snap here, because it's not a true sandbox container.
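There is also a lighter option in between: bubblewrap (the bwrap tool Flatpak itself uses for sandboxing) can isolate a single program without shipping an image or duplicate libraries. A rough sketch, assuming the bubblewrap package is installed; the program name and sandbox path are hypothetical:

```shell
# Give a program a read-only view of the real system, plus a private,
# disposable directory overlaid on $HOME, so every write it makes
# lands in one place you can delete later.
SANDBOX="$HOME/sandboxes/someapp"    # hypothetical per-app workspace
mkdir -p "$SANDBOX"
if command -v bwrap >/dev/null 2>&1; then
    bwrap --ro-bind / / \
          --dev /dev --proc /proc \
          --bind "$SANDBOX" "$HOME" \
          someapp || true            # hypothetical program name
fi
# "Uninstalling" the program's droppings is then just:
# rm -rf "$SANDBOX"
```

Firejail does something similar with ready-made per-application profiles; both are packaged for most distros, including Arch.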
 
I use Fedora Silverblue as my desktop system. I use Flatpaks for all my graphical applications, and I layer a few command-line tools and applications, as well as my virtualization setup (QEMU/KVM/libvirt/virt-manager). I game on my system with Steam and Lutris, and I don't notice a difference in user experience compared to Arch or NixOS. I find it worth it because I tinker with my system a lot less since I moved to Silverblue, and it gives me peace of mind: I just use my system instead of playing around with it a lot of the time, as I did with Arch. It's not for everyone, but I'd say give it a try.
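One detail that helps with the leftover-data problem: Flatpak keeps each app's writable data in one predictable place, and uninstall can take it along. The app ID below is just an example, and the commented commands assume a current Flatpak (see man flatpak-uninstall):

```shell
# Each installed Flatpak app keeps its writable data in one directory:
ls ~/.var/app/ 2>/dev/null || true
# Uninstall an app and delete that per-app data in one go, e.g.:
#   flatpak uninstall --delete-data org.chromium.Chromium
# Sweep runtimes and data left behind by already-removed apps:
#   flatpak uninstall --unused --delete-data
```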
 
I recently had to build a large project on Windows (don't worry, it's not another Windows vs. Linux rant or comparison). It had to install so many dependencies that it completely bloated and messed up my system. That was when I learned about Docker and immediately realized how cool and useful it is.
If I were to make an operating system (no joke, purely hypothetical), I would create a containerized system for installing packages, where each program gets its own workspace in which it can do whatever the hell it wants; permission would be needed to read or write anywhere else, and those locations would be recorded so the program can be cleanly uninstalled later. So notice that you don't actually need a full-blown container, just an isolated place for each program.
I totally get your point, but it appears you're trying to accomplish two things that are unrelated to each other:
1. Isolating the software you install
2. Managing the build process: build directory and dependencies

When it comes to point 1, it's true that Windows sucks compared to Linux: whenever you install something, your Windows system gets bloated, and the biggest problem is that it's very hard to unbloat it.
The best option on Windows is to reinstall it and install only what you're 100% sure you'll use.

With Linux there is no such problem because we have package managers; uninstalling (including any "bloat" such as config files) is as easy as installing a package.
So I don't see any reason to isolate programs on Linux; to me it's a waste of space and resources.
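For completeness, apt distinguishes degrees of cleanup; the -s flag simulates, so the last line below previews without changing anything:

```shell
# Degrees of "uninstall" on Debian/Ubuntu:
#   apt remove <pkg>            # binaries gone, config files in /etc kept
#   apt purge  <pkg>            # config files removed too
#   apt autoremove --purge      # also sweep orphaned dependencies
# Dry-run preview (makes no changes, needs no root):
apt-get -s autoremove --purge 2>/dev/null || true
```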

But for point 2 things are different. If you build a lot, you should have a separate SSD for development (or at least a separate partition) that serves only the build process, and then simply install your build output to the primary SSD where the system is installed.

The benefit is that you don't need to remove anything from that SSD; you simply reuse build outputs such as libraries for the next build.
That drive will be bloated all the time, but away from your system, and you can clean it as needed without polluting your system drive.

This works for both Windows and Linux, because what you install is the ready-to-use build output, leaving the intermediate artifacts and dependencies behind on the build drive.
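Concretely, the mount can be as simple as one fstab entry; the device name and mount point here are hypothetical:

```
# /etc/fstab: dedicate a partition to build trees, away from the system drive
/dev/nvme1n1p1  /build  ext4  defaults,noatime  0 2
```

Point your checkouts and build directories at /build and the system drive never sees the intermediate artifacts.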
 
I totally get your point, but it appears you're trying to accomplish two things that are unrelated to each other:
1. Isolating the software you install
2. Managing the build process: build directory and dependencies
No, the building part is not in the equation at all. I only mentioned it because that is how I learned about containers. And as for a separate partition and all, I would probably only do that if a container is not possible.

With Linux there is no such problem because we have package managers; uninstalling (including any "bloat" such as config files) is as easy as installing a package.
So I don't see any reason to isolate programs on Linux; to me it's a waste of space and resources.
I should have mentioned this, but while uninstalling a package on Linux does almost completely remove it, there will still be some leftover files. And some of them are actually meant not to be cleaned up when the program is uninstalled, for example Chromium's user data, which will just sit there until I find it myself. This problem only grows when you have quite a few programs installed, and gets worse when you keep installing and uninstalling programs.

Tbh, the bloat is only part of the equation. With containers I get more control: I can move them wherever I want, and I can remove them completely if I want to. Everyone likes that kind of control.

The problem with these: yes, some things are now isolated from your base operating system, but I have the bloat of dozens of little application containers (usually with duplicate libraries) filling up my hard drive.
Yes, duplicate libraries and such are a problem, but the main problem is that they are usually compressed, I think (or at least Snaps are). I use the Snap version of Chromium on Ubuntu, and it kind of sucks: it launches a bit slower and has other performance issues. Not to mention that the Snap machinery needed to run these programs is quite involved, a bit heavy, and therefore very delicate. Once I had to delete /usr/src/, and the whole Snap system had issues and literally couldn't launch any Snap program. All I had to do was recreate the empty /usr/src/ directory, and that fixed it. This would probably never happen with a natively installed program or an AppImage.
But yes, you do get the containerization. Once Chromium broke for some reason and I couldn't launch it (another problem caused by Snap), so I had to reinstall it. Uninstalling removed everything, including my user data, which was precious to me, but it was my fault for not backing it up.
 
But for point 2 things are different. If you build a lot, you should have a separate SSD for development (or at least a separate partition) that serves only the build process, and then simply install your build output to the primary SSD where the system is installed.

The benefit is that you don't need to remove anything from that SSD; you simply reuse build outputs such as libraries for the next build.
That drive will be bloated all the time, but away from your system, and you can clean it as needed without polluting your system drive.

Have you ever gone through Linux From Scratch, or a mock and chroot setup? Might that work for something like this?
 
I should have mentioned this, but while uninstalling a package on Linux does almost completely remove it, there will still be some leftover files. And some of them are actually meant not to be cleaned up when the program is uninstalled, for example Chromium's user data, which will just sit there until I find it myself. This problem only grows when you have quite a few programs installed, and gets worse when you keep installing and uninstalling programs.
That's true, apt purge will remove config files but not the various directories the program used, which might exist in several places, and this is annoying.
When I purge something I also check the hidden directories in /home, as well as other system-wide places, such as the Nvidia folder /usr/src/nvidia-550.107.02, which is left over from a previous installation and serves no purpose.

Then I delete those folders, and this way I'm sure nothing is left. It's manual work that purge sadly should handle but does not.
But other than this I'm not aware of any other bloat.
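On Debian-based systems you can at least list the packages that were removed without purging, since dpkg keeps them in state "rc" (removed, config files remain):

```shell
# List packages that were removed but still have config files around:
dpkg -l 2>/dev/null | awk '/^rc/ {print $2}'
# Purge them all in one go (commented out; review the list first):
#   sudo apt purge $(dpkg -l | awk '/^rc/ {print $2}')
```

Per-user leftovers under ~/.config, ~/.cache and ~/.local/share are invisible to dpkg either way, so those still need the manual sweep.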

Have you ever gone through Linux From Scratch, or a mock and chroot setup? Might that work for something like this?
No, I haven't, but if I decide to go that route I'll certainly have a separate build drive.
I learned about this approach from the Visual Studio IDE on Windows, which recently introduced so-called "dev drives".
 
And some of them are actually meant not to be cleaned up when the program is uninstalled, for example Chromium's user data, which will just sit there until I find it myself. This problem only grows when you have quite a few programs installed, and gets worse when you keep installing and uninstalling programs.

I suppose the counter-argument would be: "where is the line between an application and its data?"

Let's say I have a database installed with millions of rows of data.
Then I remove the database application, but the data remains. Typically this is by design, specifically for scenarios such as that. Now, perhaps browser caches aren't that important to anyone (me included), but technically they aren't part of the application.
 
That's true, apt purge will remove config files but not the various directories the program used, which might exist in several places, and this is annoying.

A while back I deleted some web servers, Apache httpd and nginx, but we had a lot of HTML/PHP/JavaScript code in the main html directory. Thankfully that didn't get deleted when we removed the applications.

I do know that many applications leave behind config files, directories and other artifacts, but I wonder what percentage of applications leave artifacts, and what percentage leave nothing behind.

I suspect most leave nothing? I have no metrics to base that claim on.

But we also tend to use .d directories to avoid this. For example, removing httpd will remove my main httpd.conf, but it won't remove any custom configs in httpd.conf.d.
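For anyone unfamiliar with the pattern, it's just the package-owned main config including an admin-owned directory (paths as on Fedora/RHEL):

```
# /etc/httpd/conf/httpd.conf  (package-owned, removed with the package)
IncludeOptional conf.d/*.conf

# /etc/httpd/conf.d/mysite.conf  (admin-owned, the package never touches it)
```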
 
Hello Debian_SuperUser

I use Porteus which is based on Slackware.

Everything is in modular form. All desired modules are decompressed into RAM at boot. If I wish to use a program, I "activate" the relevant modules and "deactivate" them when done.

If I want to save any changes I make, I just make a module of them. When I download a new program, a new module is made; if a module is not available, I make one from the program itself. If I bork the session, I just restart.

There is no "install" to break or get bloated. Bulletproof.
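Roughly, the workflow looks like this; the module name is made up, and the commands are Porteus' own tools, so the guard makes this a no-op anywhere else:

```shell
# Porteus modules are squashfs images with an .xzm extension.
if command -v dir2xzm >/dev/null 2>&1; then
    activate   some-program.xzm    # merge the module into the live system
    deactivate some-program.xzm    # unmerge it again; nothing left behind
    dir2xzm /tmp/some-program some-program.xzm    # build a module from a dir
fi
```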

Vektor
 
A while back I deleted some web servers, Apache httpd and nginx, but we had a lot of HTML/PHP/JavaScript code in the main html directory. Thankfully that didn't get deleted when we removed the applications.

I do know that many applications leave behind config files, directories and other artifacts, but I wonder what percentage of applications leave artifacts, and what percentage leave nothing behind.

I suspect most leave nothing? I have no metrics to base that claim on.

But we also tend to use .d directories to avoid this. For example, removing httpd will remove my main httpd.conf, but it won't remove any custom configs in httpd.conf.d.
Yes, I see now why not deleting everything makes sense.
But IMO there should be an option for total removal; an option in addition to purge would be nice, letting the user decide what they want.
 
There is no "install" to break or get bloated. Bulletproof.


It seems some modules still have to be downloaded sometimes, but even if they don't, wouldn't having "everything" on my hard drive (compressed or not) still be disk-space bloat?
 
Hey dos2unix

Where would "everything" go then?

Vektor
 
Where would "everything" go then?

Typically in a cloud/vendor repo. You only download what you need/want.

This is the way dnf, apt, yast2, and pacman work.
 
Hey dos2unix

So you mean download, use and then delete?

That would mean only the bare minimum would be needed to boot. Very little storage needed. Sounds cool.

Vektor
 
Hey dos2unix

So, for example, keep everything on GDrive and use wget or a minimal web browser.

I am going to try that.

Vektor
 
Then I delete those folders, and this way I'm sure nothing is left. It's manual work that purge sadly should handle but does not.
But other than this I'm not aware of any other bloat.
Yeah, that is pretty much what I'm referring to as bloat. I don't like it, and I don't like doing the cleanup manually either. So I'm hoping to find a way to handle such bloat automatically.

Maybe I could write a script that runs in the background and uses strace or something similar to monitor which program writes data where. But for that, programs need to be run under strace, and I don't know how I could monitor each program individually by hooking strace onto it, though there probably is a way. And even then, I hope it wouldn't eat too many CPU cycles.
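In miniature, that idea would look like this for a single run; -f follows child processes, and -e trace=file limits the log to filesystem syscalls (the || true is there because ptrace can be blocked, e.g. in containers):

```shell
if command -v strace >/dev/null 2>&1; then
    # Log every filesystem syscall of one program run (children included):
    strace -f -e trace=file -o /tmp/writes.log \
           sh -c 'echo hello > /tmp/demo-file' || true
    # Pull the touched paths out of the log:
    grep -o '"[^"]*"' /tmp/writes.log 2>/dev/null | sort -u
fi
```

For always-on monitoring, the kernel's inotify/fanotify interfaces (e.g. inotifywait -r from inotify-tools) are much cheaper than strace, which slows the traced program down noticeably.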

mhhmmm, need a better solution.
 
Let's say I have a database installed with millions of rows of data.
Then I remove the database application, but the data remains. Typically this is by design, specifically for scenarios such as that. Now, perhaps browser caches aren't that important to anyone (me included), but technically they aren't part of the application.
But when I uninstall a program and never want to use it again, why would I need to keep that data?
 
I just tried to install Docker and got a broken-packages error, and it's one of those errors where you run a command to show the broken packages and get nothing: completely circular.

I tried to ask somebody about this before: what exactly is the thing you install within the container? Isn't it just a virtual machine?
 
what exactly is the thing you install within the container? Isn't it just a virtual machine?
I've never used Docker, but from my understanding Docker is for building something in a container, just like Snaps are for installing software.
And if I'm not wrong, you can also run the build output within Docker without having to install it to the system drive.
In both cases it's isolation, but it is not a virtual machine.
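Right: a container shares the host kernel, unlike a VM, which boots its own. From what I've read (untested by me), a throwaway build looks something like this, assuming the official gcc image from Docker Hub; the guard makes it a no-op without Docker:

```shell
# --rm deletes the container when the command exits, and the image can
# be dropped afterwards with "docker rmi gcc:13", so none of the build
# dependencies ever land on the host system.
if command -v docker >/dev/null 2>&1; then
    docker run --rm -v "$PWD":/src -w /src gcc:13 make || true
fi
```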
 
