I don’t understand what problem they are meant to solve. If you have a FOSS piece of software, you can install it via the package manager. Or the store, which is just a frontend for the package manager. I see that they are distribution-independent, but the distro maintainers likely already know what’s compatible and what your system needs to install the software. You enjoy that benefit only through the package manager.

If your distro ships broken software because of dependency problems, you don’t need a tool like Flatpak, you need a new distro.

  • donuts@kbin.social · 35 points · 1 year ago

    I like Flatpaks and AppImages for application delivery and here’s why:

    1. Software doesn’t just magically appear in various distros’ repositories. There is a considerable amount of work (time/effort/energy/thought) that goes into including and maintaining any given program in a single distro’s repo, and then very similar work must be done by the maintainers of other independent repositories. To make matters worse, some programs are not straightforward to compile and/or may use customized dependencies. In those cases, package maintainers for each distro have to do even more work and pay close attention to deliver the application as intended, or risk shipping a version that works differently in subtle ways and possibly with rare bugs. (Needing to ship custom versions of deps for a certain program also totally eliminates a lot of the benefits of shared libraries, namely reduced storage space and shared functionality and security fixes.) That’s part of the reality of managing packages, and the fact is that there’s a lot of wasted effort and repeated work that goes into putting this or that application into a distro repository. I have a ton of respect for distro package maintainers, but I would prefer that their talents and energy go toward making the user experience and polish of their distro better, or developing new/better software, rather than wrestling with every new version of every package over and over again multiple times per year.

    2. As a developer it’s very nice to know exactly what is being “shipped” to your users, and that most of your users are running the same code in a very similar environment. In my opinion, it’s simply better for users and developers of a piece of software to have a more direct path, instead of running through a third-party middleman. Developers ship it, users use it; if there’s a bug, users report it; developers fix it, add features, and ship again. It’s simple, it’s effective, and there’s very little reason to add a bunch of extra steps to this process.

    3. The more time I spend using immutable, atomic Linux distros like Silverblue, the more I value a strong separation between system and applications. I want my base system to be solid as a rock, and ideally pretty fucking hard to accidentally break (either on the user end or the distro end). At the same time I also want to be able to use the latest and greatest applications as soon as humanly possible. Well, Silverblue has shown that there’s a viable model for that in the form of an immutable and atomic base system combined with containerized applications and dev environments. What Silverblue does may not be the only way of achieving a separation between system and applications, but I’ve never been more certain that it’s the right direction for creating a more stable and predictable Linux experience without many compromises. I don’t necessarily want to update my whole system to get the newest version of an application, and I certainly don’t want my system to break due to dependency hell in the process.

    4. The advantages of the old way of distributing applications on Linux are way overblown compared to the advantages of Flatpak. Do Flatpaks take up more drive space than traditionally packaged apps? Maybe, I don’t even know. But even if they do, who the hell cares? Linux systems and applications are mostly pretty tiny, and a 1TB NVMe SSD is like $50 these days. Does using shared libraries create less potential for security flaws going unfixed? Possibly, but again, sometimes it just isn’t possible or practical for applications to share libraries; Flatpaks can technically share libraries too, and the containerized nature of Flatpaks means that security vulnerabilities in specific applications are mitigated somewhat. I’m not a security guy, but I’d guess that Flatpaks are generally pretty safe.

    Well, that’s all I can think of right now. I really like Flatpaks and to some extent AppImages too. I still think that most “system-level” stuff is fine to do with traditional packaging (or something like ostree), but for “application-level” stuff, I think Flatpaks are the current king. They’re very up-to-date, sandboxed, often packaged by the developers themselves, consistent across many distros, save distro maintainers effort that could be better used elsewhere, easy for users to update, integrate with software centers, are very very unlikely to cause your system to break, and so on.

    It would be really hard for me to want to switch back to a traditional distro using only repo packages.

    • koorogi@kbin.social · 12 points · 1 year ago

      I disagree with so much of this.

      You might not care about the extra disk space, network bandwidth, and install time required by having each application package up duplicate copies of all the libraries they depend on, but I sure do. Memory use is also higher, because having separate copies of common libraries means that each copy needs to be loaded into memory separately, and that memory can’t be shared across multiple processes. I also trust my distribution to be on top of security updates much more than I trust every random application developer shipping Flatpaks.
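
      To make the memory-sharing point concrete, here’s a minimal C sketch (Linux-only; nothing here is Flatpak-specific, and the exact library paths vary by system) that prints a process’s own libc mappings from /proc/self/maps. Every dynamically linked process maps the same on-disk libc, so the kernel can share its read-only pages between them:

      ```c
      /* Minimal sketch: list this process's libc mappings.
       * The dynamic loader maps the same libc file into every dynamically
       * linked process, and the kernel shares its read-only pages across
       * them. Per-app bundled copies are mapped from different files and
       * can't be shared this way. */
      #include <stdio.h>
      #include <string.h>

      int main(void) {
          FILE *maps = fopen("/proc/self/maps", "r");
          if (!maps) { perror("fopen"); return 1; }

          char line[512];
          while (fgets(line, sizeof line, maps)) {
              if (strstr(line, "libc"))   /* keep only the libc entries */
                  fputs(line, stdout);
          }
          fclose(maps);
          return 0;
      }
      ```

      Comparing /proc/<pid>/maps for two unrelated processes shows them mapping the same libc file, which is exactly the sharing that duplicated bundled libraries forgo.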

      But tbh, even if you do want each application to bundle its own libraries, there was already a solution for that which has been around forever: static linking. I never understood why we’re now trying to create systems that look like static linking, but using dynamic linking to do it.
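
      As a rough illustration of that comparison (the program below is a trivial stand-in, and building statically assumes your toolchain ships a static libc), the same C source can be built either way:

      ```c
      /* hello.c -- one program, two linking styles:
       *   dynamic: cc hello.c -o hello-dyn        (libc shared system-wide)
       *   static:  cc -static hello.c -o hello-st (libc copied into the
       *            binary, much like a bundled dependency in an app image)
       * `ldd hello-st` reports it isn't a dynamic executable, and `ls -l`
       * shows how much bigger the self-contained binary is. */
      #include <stdio.h>

      int main(void) {
          puts("hello");
          return 0;
      }
      ```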

      I think it’s convenient for developers to be able to know or control what gets shipped to users, but I think the freedom of users to decide what they will run on their own system is much more important.

      I think the idea that it’s not practical for different software to share the same libraries is overblown. Most common libraries are generally very good about maintaining backwards compatibility within a major version, and different major versions can be installed side-by-side (a minimal sketch of how that works follows below). I run Gentoo on my machines, and with the configurability the package manager exposes, I’d wager that no two Gentoo installations are alike, either in the versions of packages installed or in the options those packages are built with. And for a lot of software that tries to vendor its own copies of libraries, Gentoo packages often give the option of forcing them to use the system copy of the library instead. And you know what? It actually works almost all the time. If Gentoo can make it work across the massive variability of their installs, a distribution that offers less configurability should have virtually no problem.
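
      The side-by-side installs mentioned above work through soname versioning. A hedged sketch (libfoo.so.1 and libfoo.so.2 are made-up names, so on a real system the dlopen calls will simply report an error; link with -ldl on older glibc):

      ```c
      /* Sketch of soname-based coexistence: a program built against the
       * old ABI binds to libfoo.so.1, one built against the new ABI binds
       * to libfoo.so.2, and both versioned files can be installed at once.
       * libfoo is hypothetical; expect dlerror() output when running this. */
      #include <dlfcn.h>
      #include <stdio.h>

      int main(void) {
          void *v1 = dlopen("libfoo.so.1", RTLD_NOW);  /* old major version */
          void *v2 = dlopen("libfoo.so.2", RTLD_NOW);  /* new major version */
          printf("v1: %s\n", v1 ? "loaded" : dlerror());
          printf("v2: %s\n", v2 ? "loaded" : dlerror());
          if (v1) dlclose(v1);
          if (v2) dlclose(v2);
          return 0;
      }
      ```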

      You are right that some applications are a pain to package, and that the traditional distribution model does have some duplication of effort. But I don’t think it’s as bad as it’s made out to be. Distributions push a lot of patches upstream, where other distributions will get that work for free. And even for things that aren’t ready to go upstream, there’s still a lot of sharing across distributions. My system runs musl for its C library, instead of the more common glibc. There aren’t that many musl-based distributions out there, and there’s some software that needs to be patched to work – though much less than used to be the case, thanks to the work of the distributions. But it’s pretty common for other musl-based distributions to look at what Alpine or Void have done when packaging software and use it as a starting point.

      In fact, I’d say that the most important role distributions play is when they find and fix bugs and get those fixes upstreamed. Different distributions will be on different versions of libraries at different times, and so will run into different bugs. You could make the argument that by using the software author’s “blessed” version of each library, everybody can have a consistent experience with the software. I would argue that this means that bugs will be found and fixed more slowly. For example, a rolling release distro that’s packaging libraries on the bleeding edge might find and fix bugs that would eventually get hit in the Flatpak version, but might do so far sooner.

      The one thing I’ve heard about Flatpak/Snap/etc that sounds remotely interesting to me is the sandboxing.

      • donuts@kbin.social · 1 point · 1 year ago

        You might not care about the extra disk space

        Disk space is cheap, and the amount of space consumed by non-game applications is minimal. No matter how many times people say it, I still don’t find it to be a compelling argument against Flatpak.

        network bandwidth and install time

        This is a one-time, install-time cost, and even as someone with a bandwidth cap, it has never been a problem for me.

        duplicate copies of all the libraries they depend on

        Again, that’s not always the reality, and Flatpak already has the ability to share runtime platforms across applications when appropriate.

        I’ve been a FOSS user and developer for many years now, and I’ve known real (big) projects that required custom patches to large libraries that were not always suitable to be upstreamed, much to the team’s frustration, I should add. As such, we had to ship some libraries anyway, and even when distributions could make it work with the standard version of said library, users would end up with edge cases and bug regressions that made their experience worse.

        In that case (and this isn’t even their fault), distro package maintainers are doing extra work to try to re-ship an application that’s already easily available on Flatpak and AppImage, shipped directly by the development team and working as intended.

        I think it’s convenient for developers to be able to know or control what gets shipped to users, but I think the freedom of users to decide what they will run on their own system is much more important.

        My ability to install something as a Flatpak doesn’t take away your ability to install a program another way, be it from source or from a repo.

        And for a lot of software that tries to vendor its own copies of libraries, Gentoo packages often give the option of forcing them to use the system copy of the library instead. And you know what? It actually works almost all the time. If Gentoo can make it work across the massive variability of their installs, a distribution that offers less configurability should have virtually no problem.

        Oh, your program might run with a wrong-but-still-compatible version of whatever library, but I also know for a fact that there are popular applications that will not function correctly without a specific, patched version of a library. (I’m being a bit vague here because I don’t want to derail the conversation or go down any application-specific rabbit holes.)

        In the end, maybe you don’t notice or witness any of the bugs or regressions that have been patched out of a given dependency, but they’ll be there. It’s not quite “undefined behavior” because it’ll probably fail in predictable ways, but it’s certainly “unsupported behavior”.

        You are right that some applications are a pain to package, and that the traditional distribution model does have some duplication of effort. But I don’t think it’s as bad as it’s made out to be. Distributions push a lot of patches upstream, where other distributions will get that work for free.

        Sure, but the package maintainers are always a step behind.

        If you and I create an application and we update the dependencies for that application, it’s up to package maintainers to find that and fix it. That could be as simple as adding another library dependency to a source package (or whatever, I’m not a package maintainer myself), but it could be more complicated if the dependency in question is bespoke or customized in ways that make it slightly more complex to build and ship.

        In contrast, the people who make the program will always have an intimate and prior understanding of what deps they’re using, why they’re being used the way they are, what to test for, and how to build and ship them to users.

        In fact, I’d say that the most important role distributions play is when they find and fix bugs and get those fixes upstreamed. Different distributions will be on different versions of libraries at different times, and so will run into different bugs. You could make the argument that by using the software author’s “blessed” version of each library, everybody can have a consistent experience with the software. I would argue that this means that bugs will be found and fixed more slowly. For example, a rolling release distro that’s packaging libraries on the bleeding edge might find and fix bugs that would eventually get hit in the Flatpak version, but might do so far sooner.

        I’m not so sure about this. For some packages I’m sure that this is true. But go take a peek into the bug trackers for programs like Godot, Blender, Krita, or GIMP, and I’d wager that the majority of bug reports are from people who use the software frequently, not from people who package the software and test it a couple of times before moving on to the next one.