AppImage, Flatpak, Snap, or other?

Leonardo_B

What format do you prefer when installing programs onto your computer, and why?
 


The default repositories and the AUR because they provide all I need.
 
1) Debian repos
2) .deb package
3) Source if possible
4) Flatpaks
5) AppImages
 
The distro's repositories' .deb packages.
I use a couple of AppImages, but they're not installed - I just run them.
 
Usually I'll look for a program in the Software Manager, and if it's not there, I'll see if there are instructions for downloading the program as an AppImage (as those are what I'm used to), or via Terminal commands. If I needed to download something as a Snap or Flatpak, I'd be open-minded enough to try it those ways.

While I was typing this answer, I became curious to know if compiling or building programs from source could make downloads standardized and make AppImages, Snaps, and Flatpaks obsolete. Is that possible?
 
I generally stick to the more traditional ways - PPA, .deb, straight from the repos of course, and maybe building from source as needed. I may miss out on some software that way, but I don't really worry about it or notice any meaningful difference.
 
Yeah, I agree - .deb or .rpm is the way I like to go. But there are always programs that aren't in the default repositories, and when you go to the package's website it's normally an AppImage. I removed all the default Snaps on my system.
 
While I was typing this answer, I became curious to know if compiling or building programs from source could make downloads standardized and make AppImages, Snaps, and Flatpaks obsolete. Is that possible?
Not really!

There are so many different build systems.
Not all software projects use GNU Autotools (configure, make, make install) to build and install software from source.
Some use CMake. Qt-based applications use qmake, and there are other build systems like Apache Ant, Boost.Jam, Ninja etc.
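To illustrate how different these can be, here’s a rough sketch of building the same hypothetical project under three of those systems (plain commands only; any project-specific options are left out):

# GNU Autotools: the classic configure / make / make install sequence
./configure
make
sudo make install

# CMake: configure and build in a separate build directory
cmake -B build -S .
cmake --build build
sudo cmake --install build

# qmake (typical for Qt-based projects): generate a Makefile, then build
qmake
make
sudo make install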

And that’s just C/C++ based programs!
Languages like Go, Ruby, Python etc. have their own build systems too.

Some software projects support building using multiple build systems, sometimes using different systems for building on different platforms, sometimes even including project files for various IDEs.
Others only use one build system for everything.

So there are a number of different ways of building software. It can vary depending on the programming language and the platform it’s being built for. And the preferred tool-chain of the developers.

So it would be pretty difficult to create a standard, ‘one size fits all’ solution to building and installing software from source using any programming language.

Personally, when it comes to installing software, I stick to my distro’s repos as far as possible. If I need something newer, I’ll usually build from source from the project’s official GitHub/GitLab (or other source control). That way the build will be configured and optimised to run on my machine. Being a programmer myself, I don’t mind installing whatever I need to build other programs. I can usually fix things if a build fails, or if there are dependency problems that need to be resolved.
Also, once I’ve successfully built and installed the program once - updating just requires me to pull the latest source, build and re-install, so that’s not a bother.
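As a rough sketch of what that update cycle looks like for a git-hosted Autotools project (the repository URL is just a made-up example):

# First build (hypothetical repository)
git clone https://github.com/example/someproject.git
cd someproject
./configure && make && sudo make install

# Later updates: pull the latest source, rebuild and re-install
git pull
make && sudo make install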

AppImages are also quite convenient; they’re just a single package containing everything you need to run the program. You download it, make it executable and run it. But they are quite large files. I only use them on rare occasions - e.g. if there is an annoying bug in a program that hasn’t been fixed in the Debian repos, but a newer version is available as an AppImage, or if I want to make use of new functionality that isn’t in the version in the repos. And I only use them temporarily.
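For anyone who hasn’t used one before, running an AppImage looks something like this (the filename is made up):

# Make the downloaded file executable, then run it - no installation step needed
chmod +x SomeApp-x86_64.AppImage
./SomeApp-x86_64.AppImage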

For permanent installs, I will install from source.

But I’m not a fan of snapcraft/snaps or flatpak. They seem like overly complex systemd-like solutions to the problem of packaging for all systems.

By systemd-like - I mean that snapcraft and flatpak are huge, sprawling, complex solutions which affect the wider system.

Systemd is the new init-system for Linux based OSes. Rather than being a small, discrete module, it’s a huge, sprawling mess that affects a lot of other parts of the system.

Out of snapcraft and flatpak - snapcraft is the worst IMO. I was an early adopter of snapcraft. Whilst it’s great that applications are sandboxed and isolated for security reasons - snapd adds extra time to your boot times. On my laptop, running Debian - snapd added well over a minute to my boot time. Which is completely unacceptable!

Also many snap applications did not respect the system theme. So they’d often look out of place. And snaps didn’t seem to get updated as often as traditional packages. So that could also have potential security ramifications.

So in the end, I removed it completely from my machine and am not willing to try it again!
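For reference, removing snaps and snapd from a Debian/Ubuntu-style system looks roughly like this (a sketch only - which snaps are installed, and the exact steps, will vary from system to system):

# List the installed snaps and remove them one by one (names are just examples)
snap list
sudo snap remove firefox

# Then remove the snap daemon itself and stop apt from pulling it back in
sudo apt purge snapd
sudo apt-mark hold snapd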
[edit]I tried flatpak, but it also had negative effects on the performance of my system.

Initially, I said I hadn’t tried it. But I just found some notes on my laptop which say otherwise!
[/edit]

So for me, the most optimal solutions are installing from the repos, or installing from source.
 
At least now I have an answer.

It just goes to show how complex the nature of Linux has gotten, and after reading that, I'm starting to wonder if any attempt to standardize it is a pipe dream at this point.
 

Linux distributions follow a number of different standards; Linux is heavily standardised. From POSIX, to numerous ISO standards, to various standards for hardware and software protocols. Also there is the LSB (Linux Standard Base).
The standard package format specified by the LSB is the Red Hat RPM format. But Debian-based distributions are technically LSB compliant, because Debian users can use a package called alien to convert .rpm packages to .deb.
But I wouldn’t recommend using .rpm packages often in Debian, mostly because most things are available as .deb packages.
So Linux does follow a bunch of standards.
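For example, converting an .rpm to a .deb with alien, as mentioned above, looks something like this on a Debian-based system (the package name is made up):

sudo apt install alien
# Convert the .rpm into a .deb, then install the resulting package
sudo alien --to-deb somepackage.rpm
sudo dpkg -i somepackage_*.deb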

Also, most Linux distributions are pretty much identical under the hood, apart from a few aesthetic/cosmetic differences.
E.g.
Different display manager, desktop environment, system theme, package management system, different set of default applications etc.

Also there are some ethical differences. Some distros only allow purely free software, others allow proprietary/non-free software to be installed/used.

And there are speciality distros that cater to different types of users, or use-case scenarios too.
E.g. distros without a desktop/GUI, for headless server applications, distros for dedicated servers, or firewalls. Distros for embedded applications, distros for systems with limited resources, distros aimed at using less power.

And of course there are all of the desktop distros geared towards general everyday use. Many are aimed at ordinary desktop users. Others are for pen-testers, artists/musicians, religious people. Even Hannah Montana fans had their own distro once upon a time.

The fact that there are so many different distros and respins with different desktops is not a bad thing. It’s a good thing.
It’s an important part of the free software ecosystem.
As with any free software project, we have the freedom to study and modify them and to freely distribute, or even sell, our modified versions. (Though under copyleft licences like the GPL, we must licence them under the same terms as the original project and give appropriate credit to the original.)

In the free software ecosystem, the good projects will live on and grow/thrive and their ideas will be adopted by others. And the bad ones will eventually atrophy and die as users move on to other, better things.

If you want a more "standard" experience in Linux - pick a large, popular distro which is used as a base for a number of other distros. Like RHEL/Fedora, Debian, Slackware, Gentoo or Arch.

And if you want a more custom experience, you can use one of the above distros as a base and can heavily customise your experience to suit your own needs.

Or to save yourself some effort, you could use one of the many specialised distros that are already available, if there is one that caters towards your needs/requirements and then modify things from there, if there’s anything additional that you need.

If by "standardisation", you instead mean homogenising Linux and creating "one Linux distribution to rule them all" - that’s never going to happen. So many different users of Linux have different needs. There are people pulling Linux in so many different directions to fit their particular needs - as is their right.

But again, underneath the hood - most Linux distros are basically the same. The differences between them are mostly cosmetic. Sometimes the differences are for ideological reasons, other times for technical reasons. But at the end of the day, they’re all pretty much the same.

Another thing to bear in mind is that when you are using free software and OSes like Linux or BSD - you are responsible for your own experience.

You can’t complain because:
A. There’s nobody to complain to (unless you paid somebody like Red Hat or Canonical for an expensive support subscription)
B. You have no right to complain. You got a complete operating system for free! (Again, unless you’ve paid somebody for support).

What you get out of free software and Linux/BSD/other free OSes is directly proportional to the amount of effort you put into learning about it. You can’t blame other people for your own ignorance. What you can do is educate yourself. There are tons of tutorials, courses and training materials on the internet and tons of books and ebooks available to buy. So plenty of places to learn from.

Many Linux distros nowadays can be installed, used and administered by virtually anybody, without a lot of knowledge. Many distros will allow you to manage most things from the GUI too. But whenever there are problems - regardless of how you feel about it - the power of the terminal should not be overlooked.

If you have problems, most Linux users will give you some esoteric looking commands to run. The reason for this is because it’s the quickest, most direct way to diagnose and fix most problems. To explain how to do it in the GUI could take several pages of instructions - if there is even a way to do it via the GUI.

So even if you’re a non-technical Linux user, it’s always worth learning a bit about the terminal and some of the many commands available to you. Because unless you’re supremely lucky - eventually, you’re going to need to use the terminal!

So again, the more you learn, the more you’ll get out of using Linux. If you come from Windows, Linux is very different and will require a bit of a learning curve. The layout of the file-system is different, the desktop might be slightly different, there are different shortcut keys, and you’ve got this huge, powerful terminal with a comprehensive set of tools - in stark contrast to the noddy cmd you had in Windows, or its slightly more functional brother PowerShell.

If you come from a Mac background, and you were familiar with the terminal, it’s not too different. If you were not familiar with the terminal on a Mac - you missed out on a trick and will probably have a bit of a learning curve when it comes to the terminal. But the desktop and everything else will probably make sense because you’ll already be familiar with the Unix style layout of the file system.
But you’ll probably need to learn some new shortcut keys.

And if Linux is your first operating system, then you’re at the start of a huge, fascinating journey!

Users switching from other systems seem to forget that they once had to learn the operating system they just moved from. And the fact that another operating system is different to what they’re used to shouldn’t be a surprise either!
 
Thank you JasKinasis - very good reply :)
It amazes me how many come to Linux and want to blame those trying to help them learn for their own mistakes. Or want to blame Linux because their hardware vendors do not support Linux. Linux has come a very long way since the early 1990s. But the OP must be willing to learn a new way of doing things, and that seems to be something some are not willing to do.
Cheers!
 
If by "standardization", you instead mean homogenizing Linux and creating "one Linux distribution to rule them all" - that’s never going to happen. So many different users of Linux have different needs. There are people pulling Linux in so many different directions to fit their particular needs - as is their right.

Homogenizing is the term I was looking for, and up until now, I kept thinking in that kind of mindset, which led to a number of arguments with people on here. Now that I've accepted that mindset is not only wrong, but also limiting, my understanding of Linux has changed.
 
What format do you prefer when installing programs onto your computer, and why?
In this order:
  1. Fedora official RPM
  2. RPM Fusion RPM -- it comes from the community, but several Fedora maintainers are involved.
  3. Official source's RPM (third party)
  4. Untar to ~/opt or /opt (e.g.: Java SDK which I use a lot)
  5. Compile to my ~/.local -- by compiling to my local user directory, I can ensure that I won't break any dependent official package, and I remove the need to use sudo to install. This is done by using ./configure --prefix=~/.local before make && make install (see the sketch after this list).
  6. Flatpak, Snap or AppImage, depending on whether the package is provided by the official source, or on what is the most trustworthy source otherwise.
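To expand on point 5, here's a minimal sketch of that local-prefix build for an Autotools-based project (using $HOME rather than ~ just avoids relying on the shell to expand the tilde inside the option):

# Configure, build and install into ~/.local - no sudo required
./configure --prefix="$HOME/.local"
make
make install

# Make sure locally installed binaries are on the PATH (e.g. add this to ~/.bashrc)
export PATH="$HOME/.local/bin:$PATH"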
 
Have you tried this? It updates AppImages as well.
No, I haven't tried it, since I don't use AppImages. I came across it and thought I would post it here because I thought someone else would find it useful and try it ;)
 
For me, almost anything BUT appimages.
RPMs are great, DEBs are good. Gzipped tar files work too. I have compiled a few home-grown apps.
(I do the DevOps tasks to build in-house RPMs where I work now.)
I don't hate Snaps, but for whatever reason, I rarely have a reason to use them.

AppImages - have you seen the size of these? Yeah, I know they include all the dependencies.
The good news: they pretty much run on any distro.
The bad news: in order to accomplish this, every AppImage bundles all of its dependencies, so if you have 4 or 5 AppImages you likely have the same dependency libraries 4 or 5 times.

Because of this, I know people who have taken extreme measures and built Docker or Podman containers just for the sake of running an app they could only get as an AppImage otherwise.

You laugh, but I'm worse - I tend to compile my own packages. :)
 
AppImages - have you seen the size of these? Yeah, I know they include all the dependencies.
The good news: they pretty much run on any distro.
The bad news: in order to accomplish this, every AppImage bundles all of its dependencies, so if you have 4 or 5 AppImages you likely have the same dependency libraries 4 or 5 times.

You probably won't want to look at Snaps then!

It would be much better if they made these things into a sort of quasi package manager, so that if you have 4 or 5 AppImages, the dependencies would be shared among them. And if you already had the dependencies installed on the real system, it would use them rather than quasi-installing them again.

If I'm understanding how the building of AppImages works, you could slim down some AppImages by building them specifically for your own system. That is a big if, though.
 
