| ▲ | senfiaj 20 hours ago |
| This might offend some people, but even Linus Torvalds thinks ABI compatibility in Linux distros is not good enough, and that this is one of the main reasons Linux is not popular on the desktop. https://www.youtube.com/watch?v=5PmHRSeA2c8&t=283s |
|
| ▲ | ori_b 19 hours ago | parent | next [-] |
| To quote a friend; "Glibc is a waste of a perfectly good stable kernel ABI" |
| |
| ▲ | derefr 15 hours ago | parent | next [-] | | Kind of funny to realize, the NT kernel ABI isn’t even all that stable itself; it is just wrapped in a set of very stable userland exposures (Win32, UWP, etc.), and it’s those exposures that Windows executables are relying on. A theoretical Windows PE binary that was 100% statically linked (and so directly contained NT syscalls) wouldn’t be at all portable between different Windows versions. Linux with glibc is the complete opposite; there really does exist old Linux software that static-links in everything down to libc, just interacting with the kernel through syscalls—and it does (almost always) still work to run such software on a modern Linux, even when the software is 10-20 years old. I guess this is why Linux containers are such a thing: you’re taking a dynamically-linked Linux binary and pinning it to a particular entire userland, such that when you run the old software, it calls into the old glibc. Containers work because they ultimately ground out in the same set of stable kernel ABI calls. (Which, now that I think of it, makes me wonder how exactly Windows containers work. I’m guessing each one brings its own NTOSKRNL that gets spun up under Hyper-V if the host kernel ABI doesn’t match the guest?) | | |
| ▲ | easton 13 hours ago | parent | next [-] | | IIRC, Windows containers require that the container be built with a base image that matches the host for it to work at all (like, the exact build of Windows has to match). Guessing that’s how they get a ‘stable ABI’. …actually, looks like it’s a bit looser these days. Version matrix incoming: https://learn.microsoft.com/en-us/virtualization/windowscont... | | |
| ▲ | my123 12 hours ago | parent [-] | | The ABI has been stabilised for backwards compatibility since Windows Server 2022, but is not stable for earlier releases. |
| |
| ▲ | senfiaj 15 hours ago | parent | prev | next [-] | | > Kind of funny to realize, the NT kernel ABI isn’t even all that stable itself This is not a big problem if it's hard/unlikely enough to write code that accidentally relies on raw syscalls. At least MS's dev tooling doesn't provide an easy way to bypass the standard DLLs. > makes me wonder how exactly Windows containers work I guess containers do their syscalls through the standard Windows DLLs like any regular userspace application. If it's a Linux container on Windows, it probably goes through the WSL syscalls, which, I guess, are stable. | |
| ▲ | sedatk 12 hours ago | parent | prev | next [-] | | > NT kernel ABI isn’t even all that stable itself Can you give an example where a breaking change was introduced in NT kernel ABI? | | |
| ▲ | andrewf 9 hours ago | parent | next [-] | | https://j00ru.vexillium.org/syscalls/nt/64/ (One example: hit "Show" on the table header for Win11, then use the form at the top of the page to highlight syscall 8c) | | |
| ▲ | sedatk 8 hours ago | parent [-] | | Changes in syscall numbers aren't necessarily breaking changes, as you're supposed to use ntdll.dll to call the kernel, not make direct syscalls. | |
| |
| ▲ | mrpippy 9 hours ago | parent | prev [-] | | The syscall numbers change with every release: https://j00ru.vexillium.org/syscalls/nt/64/ | | |
| ▲ | sedatk 8 hours ago | parent [-] | | Syscall numbers shouldn't be a problem if you link against ntdll.dll. | | |
| ▲ | immibis an hour ago | parent | next [-] | | So now you're talking about the ntdll.dll ABI instead of the kernel ABI. ntdll.dll is not the kernel. | |
| ▲ | MangoToupe 8 hours ago | parent | prev [-] | | ...isn't that the point of this entire subthread? The kernel itself doesn't provide the stable ABI, userland code that the binary links to does. | | |
| ▲ | sedatk 7 hours ago | parent [-] | | No. On NT, the kernel ABI isn't defined by the syscalls but by NTDLL. Win32 and all the other APIs are wrappers on top of NTDLL, not on syscalls. Syscalls are how NTDLL implements kernel calls behind the scenes; they're an implementation detail. The original point of the thread was about Win32, UWP and other APIs that build a new layer on top of NTDLL. I argue that NT doesn't break its kernel ABI. | |
|
|
|
| |
| ▲ | dist-epoch 13 hours ago | parent | prev | next [-] | | Apparently there are 3 kinds of Windows containers, one using HyperV, and the others sharing the kernel (like Linux containers) https://thomasvanlaere.com/posts/2021/06/exploring-windows-c... | |
| ▲ | Zardoz84 15 hours ago | parent | prev [-] | | Isn't Docker on Windows simply a glorified virtual machine running Linux, aka Windows Subsystem for Linux v2? |
| |
| ▲ | microtonal 16 hours ago | parent | prev | next [-] | | At least glibc uses versioned symbols. Hundreds of other widely-used open source libraries don't. | | |
| ▲ | ok123456 15 hours ago | parent | next [-] | | Versioned glibc symbols are part of the reason that binaries aren't portable across Linux distributions and time. | | |
| ▲ | ben-schaaf 14 hours ago | parent [-] | | Only because people aren't putting in the effort to build their binaries properly. You need to link against the oldest glibc version that has all the symbols you need, and then your binary will actually work everywhere(*). * Except for non-glibc distributions of course. | | |
| ▲ | chrismorgan 18 minutes ago | parent | next [-] | | I don’t understand why this is the case, and would like to understand. If I want only functions f1 and f2 which were introduced in glibc versions v1 and v2, why do I have to build with v2 rather than v3? Shouldn’t the symbols be named something like glibc_v1_f1 and glibc_v2_f2 regardless of whether you’re compiling against glibc v2 or glibc v3? If it is instead something like “compiling against vN uses symbols glibc_vN_f1 and glibc_vN_f2” combined with glibc v3 providing glibc_v1_f1, glibc_v2_f1, glibc_v3_f1, glibc_v2_f2 and glibc_v3_f2… why would it be that way? | |
| ▲ | LegionMammal978 9 hours ago | parent | prev | next [-] | | But to link against an old glibc version, you need to compile on an old distro, on a VM. And you'll have a rough time if some part of the build depends on a tool too new for your VM. It would be infinitely simpler if one could simply 'cross-compile' down to older symbol versions, but the tooling does not make this easy at all. | | |
| ▲ | jhasse 2 hours ago | parent | next [-] | | It's actually doable without an old glibc, as was done by the Autopackage project: https://github.com/DeaDBeeF-Player/apbuild That never took off, though; containers are easier. With distrobox and other tools this is quite easy, too. |
| ▲ | nineteen999 3 hours ago | parent | prev [-] | | Huh? Bullshit. You could totally compile and link in a container. | | |
| ▲ | LeFantome 3 hours ago | parent [-] | | Ok, so you agree with him except where he says “in a VM” because you say you can also do it “in a container”. Of course, you both leave out that you could do it “on real hardware”. But none of this matters. The real point is that you have to compile on an old distro. If he left out “in a VM”, you would have had nothing to correct. | | |
| ▲ | nineteen999 3 hours ago | parent [-] | | I'm not disagreeing that glibc symbol versioning could be better. I raised it because this is probably one of the few valid use cases for containers where they would have a large advantage over a heavyweight VM. But it's like complaining that you might need a VM or container to compile your software for Win16 or Win32s. Nobody is using those anymore. Nor really old Linux distributions. And if they do, they're not really going to complain about having to use a VM or container. As C/C++ programmer, the thing I notice is ... the people who complain about this most loudly are the web dev crowd who don't speak C/C++, when some ancient game doesn't work on their obscure Arch/Gentoo/Ubuntu distribution and they don't know how to fix it. Boo hoo. But they'll happily take a paycheck for writing a bunch of shit Go/Ruby/PHP code that runs on Linux 24/7 without downtime - not because of the quality of their code, but due to the reliability of the platform at _that_ particular task. Go figure. | | |
| ▲ | Rohansi 43 minutes ago | parent [-] | | > But they'll happily take a paycheck for writing a bunch of shit Go/Ruby/PHP code that runs on Linux 24/7 without downtime - not because of the quality of their code, but due to the reliability of the platform at _that_ particular task. But does the lack of a stable ABI have any (negative) effect on the reliability of the platform? |
|
|
|
| |
| ▲ | TUSF 42 minutes ago | parent | prev | next [-] | | > You need to link against the oldest glibc version that has all the symbols you need Or at least the oldest one made before glibc's latest backwards incompatible ABI break. | |
| ▲ | ok123456 14 hours ago | parent | prev | next [-] | | If it requires effort to be correct, that's a bad design. Why doesn't glibc use the version tag to do the appropriate mapping? | |
| ▲ | mikkupikku 12 hours ago | parent [-] | | I think even calling it a "design" is dubious. It's an attribute of these systems that arose out of the circumstance, nobody ever sat down and said it should be this way. Even Torvalds complaining about it doesn't mean it gets fixed, it's not analogous to Steve Jobs complaining about a thing because Torvalds is only in charge of one piece of the puzzle, and the whole image that emerges from all these different groups only loosely collaborating with each other isn't going to be anybody's ideal. In other words, the Linux desktop as a whole is a Bazaar, not Cathedral. |
| |
| ▲ | forrestthewoods 6 hours ago | parent | prev [-] | | > Only because people aren't putting in the effort to build their binaries properly. Because Linux userland is an unmitigated clusterfuck of bad design that makes this really, really hard. GCC/Clang and glibc make it effectively impossible to do on their own. The only ways you can actually do this are: 1. create a userland container from the past
2. use Zig, which moved oceans and mountains to make it somewhat tractable. It's awful. |
|
| |
| ▲ | grishka 11 hours ago | parent | prev | next [-] | | Yeah and nothing ever lets you pick which versions to link to. You're going to get the latest ones and you better enjoy that. I found it out the hard way recently when I just wanted to do a perfectly normal thing of distributing precompiled binaries for my project. Ended up using whatever "Amazon Linux" is because it uses an old enough glibc but has a new enough gcc. | | | |
| ▲ | afishhh 14 hours ago | parent | prev [-] | | > Hundreds of other widely-used open source libraries don't. Correct me if I'm wrong but I don't think versioned symbols are a thing on Windows (i.e. they are non-portable). This is not a problem for glibc but it is very much a problem for a lot of open source libraries (which instead tend to just provide a stable C ABI if they care). | | |
| ▲ | Const-me 11 hours ago | parent [-] | | > versioned symbols are a thing on Windows There are quite a few mechanisms they use for that. The oldest one: call a special API function on startup, like InitCommonControlsEx, and other API functions in the DLL will resolve or behave differently. A similar tactic: require an SDK-defined magic number as a parameter to some initialization functions, with different magic numbers switching symbols from the same library; examples are WSAStartup and MFStartup. Around Win2k they added side-by-side assemblies, or WinSxS: include a special XML manifest in the embedded resources of your EXE and you can request a specific version of a dependent API DLL; the OS now keeps multiple versions internally. Then there are compatibility mechanisms, both OS-builtin and user-controllable (right-click on an EXE or LNK, Compatibility tab); compatibility mode is yet another way to control the versions of DLLs used by an application. Pretty sure there's more that I've forgotten. | |
| ▲ | cesarb 10 hours ago | parent | next [-] | | > There’re quite a few mechanics they use for that. The oldest one, call a special API function on startup [...] Isn't the oldest one... to have the API/ABI version in the name of your DLL? Unlike on Linux which by default uses a flat namespace, on the Windows land imports are nearly always identified by a pair of the DLL name and the symbol name (or ordinal). You can even have multiple C runtimes (MSVCR71.DLL, MSVCR80.DLL, etc) linked together but working independently in the same executable. | |
| ▲ | 9 hours ago | parent | prev [-] | | [deleted] |
|
|
| |
| ▲ | bsimpson 9 hours ago | parent | prev | next [-] | | I only learned about glibc earlier today, when I was trying to figure out why the Nix version of a game crashes on SteamOS unless you unset some environ vars. Turns out that Nix is built against a different version of glibc than SteamOS, and for some reason, that matters. You have to make sure none of Steam's libraries are on the path before the Nix code will run. It seems impractical to expect every piece of software on your computer to be built against a specific version of a specific library, but I guess that's Linux for you. | |
| ▲ | Imustaskforhelp 14 hours ago | parent | prev [-] | | Ask your friend if he would CC0 the quote or similar (not sure if that's possible). I can imagine this being a quote on t-shirts xD Honestly, I might buy a T-shirt with such a quote. I think glibc is such a pain that it's the reason we have so many vastly different package managers, and I feel like non-glibc setups would really simplify the package management approach on Linux. Although that approach feels solved, there are definitely still issues with it, and I think we should all still keep looking for ways to solve the problem. | |
| ▲ | seba_dos1 11 hours ago | parent [-] | | Non-glibc distros (musl, uclibc...) with package managers have been a thing for ages already. | | |
| ▲ | nineteen999 3 hours ago | parent [-] | | And they basically hold under 0.01% of Linux marketshare and are completely shit. |
|
|
|
|
| ▲ | BirAdam 18 hours ago | parent | prev | next [-] |
AppImage, theoretically, solves this problem (or Flatpak, I guess). The issue would really be in getting people to package up dead/abandoned software. |
| |
| ▲ | Imustaskforhelp 16 hours ago | parent | next [-] | | https://zapps.app/ is another interesting thing in this space. AppImages have some issues/restrictions, like not running on an older Linux than the one they were compiled on (so people compile them on the oldest machines they can), plus a few more quirks. AppImages are really good, but zapps are good too. I once tried to build something on top of zapps, but it's a shame the project went down the route of crypto/IPFS or something, and I don't really see any development on it now. It would be interesting if someone added zapps' features to AppImage, or picked up the project and built something similar. | |
| ▲ | bobajeff 15 hours ago | parent | next [-] | | This is really cool. Looks like it has a way for me to use my own dynamic linker and glibc version *. At some point I've got to try this. I think it would be nice to have some tools to turn existing programs into zapps (there are many such tools for making AppImages today). * https://github.com/warptools/ldshim | |
| ▲ | Imustaskforhelp 15 hours ago | parent [-] | | > At some point I've got to try this. I think it would be nice to have some tools to turn existing programs into zapps (there are many such tools for making AppImages today). Looks like you met the right guy, because I have built this tool :) Allow me to show my project, Appseed (https://nanotimestamps.org/appseed): it's a simple fish script which I prototyped (with Claude) some 8-10 months ago to solve exactly this. There's a YouTube video on the website, and the repository is open source on GitHub too. It worked fantastically on a lot of the different binaries I tested, and I had posted it on Hacker News as well, but nobody really responded; perhaps this might change that :p What Appseed does is take a binary and convert it into two folders: one with the dynamic-library part, the other with the binary itself. You can then use something like tar to package it up and run it anywhere. I could of course create a single ELF64 as well, but I wanted to make it more flexible (for more dynamic-library handling, caching, or other ideas), and this kept things simple for me too. ldshim sounds like a really good idea, although I can't quite understand it yet; I'll try to. I would really appreciate it if you could tell me more about ldshim! Perhaps take a look at Appseed too; I think there are some similarities, except I tried to write a fish script that can convert (usually) any dynamic binary into a static one of sorts. I just want more people to take ideas like Appseed or zapps and run with them to make the Linux ecosystem better, man. I just prototyped it with LLMs to see if it was possible, since I don't have much expertise in the area.
So I can only imagine what can be possible if people who have expertise do something about it and this was why I shared it originally/created it I guess. Let me know if you are interested in discussing anything about appseed. My memory's a little rusty about how it worked but I would love to talk about it if I can be of any help :p Have a nice new year man! :p | | |
| ▲ | generichuman 11 hours ago | parent [-] | | Can you build GUI programs with this? I'm thinking anything that would depend on GPU drivers. Anything built with SDL, OpenGL, Vulkan, whatever. | | |
| ▲ | Imustaskforhelp 4 hours ago | parent [-] | | No. In my experimentation I tried to convert OBS to static, and its GUI didn't work. I'm not exactly sure why. I haven't tested SDL, OpenGL, etc., to be honest, so they might not work at the current stage (not sure). It should definitely be possible to make them work, though, because CLI applications work just fine (IO and everything). I'm not really sure what caused my OBS Studio error, but perhaps you can try it, let me know if you need any help, and share the results! |
|
|
| |
| ▲ | freedomben 16 hours ago | parent | prev [-] | | Interesting. I've had a hell of a time building AppImages for my apps that work on Fedora 43. I've found bug reports of people with similar challenges, but it's bizarre because I use plenty of AppImages on F43 that work fine. I wonder if this might be a clue |
| |
| ▲ | scrivanodev 17 hours ago | parent | prev | next [-] | | I can only speak for Flatpak, but I found its packaging workflow and restricted runtime terrible to work with. Lots of undocumented/hard to find behaviour and very painful to integrate with existing package managers (e.g. vcpkg). | | |
| ▲ | yjftsjthsd-h 16 hours ago | parent [-] | | Yeah, flatpak has some good ideas, and they're even mostly well executed, but once you start trying to build your own flatpaks or look under the hood there's a lot of "magic". (Examples: Where do runtimes come from? I couldn't find any docs other than a note that says to not worry about it because you should never ever try to make your own, and I couldn't even figure out the git repos that appear to create the official ones. How do you build software? Well, mostly you plug it into the existing buildsystems and hope that works, though I mostly resorted to `buildsystem: simple` and doing it by hand.) For bonus points, I'm pretty sure 1. flatpaks are actually pretty conceptually simple; the whole base is in /usr and the whole app is in /app and that's it, and 2. the whole thing could have been a thin wrapper over docker/podman like x11docker taken in a slightly different direction. | | |
| ▲ | marcthe12 41 minutes ago | parent | next [-] | | Well, Flatpak was started pre-OCI, but its core is just ostree + bwrap. bwrap does the sandboxing and ostree handles the storage and mounts. There's still a bit more to it, but those two are the equivalent of Docker. bwrap is also used for Steam and some other sandboxing use cases; ostree is the core of Fedora Silverblue. Runtimes are special distros in a way, but since the official ones build pretty much everything from source, the repos tend to be messy, with build scripts for everything. |
| ▲ | seba_dos1 10 hours ago | parent | prev | next [-] | | Not sure what you're talking about, Flatpak runtimes are easy to find and contribute to: https://docs.flatpak.org/en/latest/available-runtimes.html I wasn't directly involved, but the company I worked for has created its own set of runtimes too and I haven't heard any excessive complaints on internal chats, so I don't think it's as arcane as you make it sound either. | |
| ▲ | exceptione 14 hours ago | parent | prev [-] | | You can build your own flatpak by wrapping bwrap, because that is what Flatpak does. Flatpak seems to have some "convenience things" like the various *-SDK packages, but I don't know how much convenience they provide. The Flatpak ecosystem is problematic in that most packages are granted too many rights by default. |
|
| |
| ▲ | born-jre 4 hours ago | parent | prev [-] | | AppImage maybe, but don't say Flatpak, because whenever I update my Arch system Flatpak gets broken, and I have to fix it by updating or reinstalling |
|
|
| ▲ | dralley 19 hours ago | parent | prev | next [-] |
| While true in many respects (still), it's worth pointing out that this take is 12 years old. |
| |
| ▲ | senfiaj 18 hours ago | parent | next [-] | | Maybe it's better now in some distros. Not sure about other distros, but I don't like Ubuntu's Snap packages. Snap packages typically start slower, use more RAM, require sudo privileges to install, and run in an isolated environment only on systems with AppArmor. Snap also tends to slow things down somewhat at boot and shutdown. People report issues like theming mismatches and permissions/file-access friction; Firefox theming complaints are a common example. It's almost like running a Docker container for each application. Flatpaks seem slightly better, but still a band-aid. Just nobody is going to fix the compatibility problems in Linux. | |
| ▲ | josephg 15 hours ago | parent | prev [-] | | I think he still considers this to be the case. He was interviewed on Linus Tech Tips recently, and he bemoaned in passing the terrible application ecosystem on Linux. It makes sense. Every distribution wants to be in charge of what set of libraries is available on its platform, and they all have their own way to manage software. Developing applications on Linux that can be widely used across distributions is way more complex than it needs to be. I can just ship a binary for Windows and macOS; for Linux, you need an RPM and a deb and so on. I use DaVinci Resolve on Linux. The Resolve developers only officially support Rocky Linux, because anything else is too hard. I use it on Linux Mint anyway. The application has no title bar and recording audio doesn’t work properly. Bleh. |
|
|
| ▲ | kwanbix 20 hours ago | parent | prev | next [-] |
| I agree 100% with Linus. I can run a WinXP exe on Win10 or 11 almost every time, but on Linux I often have to chase down versions that still work with the latest Mint or Ubuntu distros. Stuff that worked before just breaks, especially if the app isn’t in the repo. |
| |
| ▲ | SvenL 18 hours ago | parent | next [-] | | Yes, and even the package format situation is a hell of its own. Even on Ubuntu you have multiple package formats, and sometimes there are even multiple app stores (a GNOME one and an Ubuntu-specific one, if I remember correctly) |
| ▲ | Propelloni 15 hours ago | parent | prev | next [-] | | You can also run a WinXP exe on any Linux distribution almost every time. That's the point of the project, and of Linus' quip: the only stable ABI around on MS Windows and Linux is Win32. (BTW, I do not agree with this.) | |
| ▲ | Negitivefrags 15 hours ago | parent [-] | | I think it's not unlikely that we reach a point in a couple of decades where we are all developing Win32 apps while most people are running some form of Linux. We already have an entire platform like that (the Steam Deck), and it's the best Linux development experience around, in my opinion. |
| |
| ▲ | kccqzy 19 hours ago | parent | prev [-] | | That’s actually an intentional nudge to make the software packaged by the distro, which usually implies that they are open source. Who needs ABI compatibility when your software is OSS? You only need API compatibility at that point. | | |
| ▲ | rep_lodsb 18 hours ago | parent | next [-] | | So every Linux distribution should compile and distribute packages for every single piece of open source software in existence, both the very newest stuff that was only released last week, and also everything from 30+ years ago, no matter how obscure. Because almost certainly someone out there will want to use it. And they should be able to, because that is the entire point of free software: user freedom. | | |
| ▲ | rixed 7 hours ago | parent | next [-] | | Those users will either check the source code and compile it themselves, with all the proper options to match their system, or rely on a software distribution to do it for them. People who are complaining would prefer a world of isolated apps downloaded from signed stores, but Linux was born at an optimistic time when the goal was software that cooperates and forms a system, and whose distribution does not depend on a central trusted platform. I do not believe that there is any real technical issue discussed here, just drastically different goals. | |
| ▲ | ogogmad 40 minutes ago | parent [-] | | No. People would prefer the equivalent of double-click `setup.exe`. Were you being serious? |
| |
| ▲ | kwanbix 15 hours ago | parent | prev | next [-] | | I am not an expert on this, but my question is: how does Windows manage to achieve it? Why can't Linux do the same? | |
| ▲ | johnny22 9 hours ago | parent [-] | | because they care about ABI/API stability. | | |
| ▲ | nineteen999 3 hours ago | parent [-] | | And have an ever decreasing market share, in desktop, hypervisor and server space. The API/ABI stability is probably the only thing stemming the customer leakage at all. It's not the be all and end all. | | |
|
| |
| ▲ | kccqzy 14 hours ago | parent | prev | next [-] | | Your tone makes it sound like this is a bad thing. But from a user’s perspective, I do want a distro to package as much software as possible. And it has nothing to do with user freedom. It’s all about being entitled as a user to have the world’s software conveniently packaged. | | |
| ▲ | Rohansi 13 hours ago | parent | next [-] | | Software installed from your package manager is almost certainly provided as a binary already. You could package a .exe file and that should work everywhere WINE is installed. | | |
| ▲ | kccqzy 8 hours ago | parent [-] | | That's not my point. My point is that if executable A depends on library B, and library B does not provide any stable ABI, then the package manager will take care of updating A whenever updating B. Windows has fanatical commitment to ABI stability, so the situation above does not even occur. As a user, all the hard work dealing with ABI breakages on Linux are done by the people managing the software repos, not by the user or by the developer. I'm personally very appreciative of this fact. | | |
| ▲ | Rohansi an hour ago | parent [-] | | Sure, it's better than nothing, but it's certainly not ideal. How much time and energy is being wasted by libraries like that? Wouldn't it be better if library B had a stable ABI or was versioned? Is there any reason it needs to work like this? |
|
| |
| ▲ | grishka 11 hours ago | parent | prev [-] | | What if you want to use a newer or older version of just one package without having to update or downgrade the entire goddamn universe? What if you need to use proprietary software? I've had so much trouble with package managers that I'm not even sure they are a good idea to begin with. | | |
| ▲ | Maskawanian 10 hours ago | parent [-] | | I know you are trying to make a point about complexity, but that is literally what NixOS allows for. | | |
|
| |
| ▲ | realusername 17 hours ago | parent | prev [-] | | Not sure if it's the right solution but it's a description of what happens right now in practice yes. | | |
| ▲ | bruce511 16 hours ago | parent [-] | | It also makes support more or less impossible. Even if we ship as source, even if the user has the skills to build it, even if the makefile supports every version of the kernel, plus all the other material variations, plus who knows how many dependencies, what exactly am I supposed to do when a user reports, "I followed your instructions and it doesn't run"? Linux Desktop fails because it's not 1 thing, it's 100 things, and to get anything to run reliably on 95 of them you need to be extremely competent. Distribution as source fails because there are too many unknown and dependent parts. Distribution as binary containers (Docker et al) is popular because it gives the app a fighting chance, while at the same time being a really ugly hack. | |
| ▲ | tuna74 4 hours ago | parent | next [-] | | Then you only support 1 distro. If anyone wants to use your software on an unsupported distro they can figure out the rest themselves. | |
| ▲ | josephg 15 hours ago | parent | prev [-] | | Yep. But Docker doesn’t help you with desktop apps. And everything becomes so big! I think Rob Pike has the right idea with Go: just statically link everything wherever possible. These days I try to do the same, because so much less can go wrong for users. People don’t seem to mind downloading a 30MB executable, so long as it actually works. |
|
|
| |
| ▲ | johncolanduoni 16 hours ago | parent | prev | next [-] | | Even open-source software has to deal with the moving target that is ABI and API compatibility on Linux. OpenSSL’s API versioning is a nightmare, for example, and it’s the most critical piece of software to dynamically link (and almost everything needs a crypto/SSL library). Stable ABIs for certain critical pieces of independently-updatable software (libc, OpenSSL, etc.) is not even that big of a lift or a hard tradeoff. I’ve never run into any issues with macOS’s libc because it doesn’t version the symbol for fopen like glibc does. It just requires commitment and forethought. | |
| ▲ | SkiFire13 15 hours ago | parent | prev [-] | | Everyone is mentioning ABI, but this is really an API problem, so "you only need API compatibility at that point" is a very big understatement. |
|
|
|
| ▲ | RobotToaster 15 hours ago | parent | prev | next [-] |
| Isn't the kernel responsible for the ABI? |
| |
| ▲ | surajrmal 15 hours ago | parent [-] | | ABI is a far larger concept than the kernel UAPI. Remember that the OS includes a lot of things in userspace as well. Many of these things are not even stable between the various contemporary Linux distros, let alone older versions of them. This might include dbus services, fs layout, window manager integration, and all sorts of other things. | | |
| ▲ | tuna74 4 hours ago | parent [-] | | Yeah, it is almost like that complete OS should be called something else than "Linux". |
|
|
|
| ▲ | CorrectHorseBat 13 hours ago | parent | prev | next [-] |
Android makes a sport of breaking ABI compatibility and it hasn't stopped it from being the most popular mobile OS
| |
| ▲ | malmz an hour ago | parent | next [-] | | What are you even talking about? Android is by far the most popular mobile OS worldwide. It's only in the US where iPhones are dominant. | | | |
| ▲ | pjmlp 12 hours ago | parent | prev | next [-] | | The reason being the Jetpack libraries, which abstract away which Android version is being used. | |
| ▲ | 10 hours ago | parent | prev | next [-] | | [deleted] | |
| ▲ | izacus 12 hours ago | parent | prev [-] | | That's outright not true though. |
|
|
| ▲ | ogogmad 16 hours ago | parent | prev | next [-] |
This might be why OpenBSD looks attractive to some. Its kernel and all the different applications are fully integrated with each other -- no distros! It also tries to be simple, I believe, which makes it more secure and overall less buggy. To be honest, I think OSes are boring, and should have been that way since maybe 1995. The basic notions -- multi-processing, context switching, tree-like file systems, multiple users, access privileges -- haven't changed since 1970, and the more modern GUI stuff hasn't changed since at least the early '90s. Some design elements, like tree-like file systems, WIMP GUIs, per-user privileges, and the fuzziness of what an "operating system" even is and what its role is, are perhaps even arbitrary, but can serve as a mature foundation for better-conceived ideas -- the way ZFS (a very well-engineered implementation of tree-like data storage, a design that's been standard since the '60s) can serve as a foundation for Postgres (which implements a better-conceived relational design). I'm wondering why OSS - which, according to one of its acolytes, makes all bugs shallow - couldn't make its flagship OS more stable and boring. Instead it's produced an anarchy of packaging systems, breaking upgrades and updates, an unstable glibc, desktop environments that are different and changing seemingly for the sake of it, sound that's kept breaking, power management iffiness, etc.
|
| |
| ▲ | rep_lodsb 9 hours ago | parent | next [-] | | > tree-like file systems, multiple users, access privileges, Why should everything pretend to be a 1970s minicomputer shared by multiple users connected via teletypes? If there's one good idea in Unix-like systems that should be preserved, IMHO it's independent processes, possibly written in different languages, communicating with each other through file handles. These processes should be isolated from each other, and from access to arbitrary files and devices. But there should be a single privileged process, the "shell" (whether command line, TUI, or GUI), that is responsible for coordinating it all by launching other processes and passing them handles to files/pipes, under control of the user. Could be done by typing file names, or selecting from a drop-down list, or by drag-and-drop. Other program arguments should be defined in some standard format so that e.g. a text-based shell could auto-complete them like in VMS, and a graphical one could build a dialog box from the definition. I don't want to fiddle with permissions or user accounts, ever. It's my computer, and it should do what I tell it to, whether that's opening a text document in my home directory, or writing a disk image to the USB stick I just plugged in. Or even passing full control of some device to a VM running another operating system that has the appropriate drivers installed. But it should all be controlled by the user. Normal programs of course shouldn't be able to open "/dev/sdb", but neither should they be able to open "/home/foo/bar.txt". Outside of the program's own private directory, the only way to access anything should be via handles passed from the launching process, or some other standard protocol. And get rid of "everything is text". For a computer, parsing text is like a human reading a book over the phone, with an illiterate person on the other end who can only describe the shape of each letter one by one. Every system-level language should support structs, and those are like telepathy in comparison. But no, that's scaaaary, hackers will overflow your buffers to turn your computer into a bomb and blow you to kingdom come! Yeah, not like there's ever been any vulnerability in text parsers, right? Making sure every special shell character is properly escaped is so easy! Sed and awk are the ideal way to manipulate structured data! | | |
| ▲ | bitwize 6 hours ago | parent [-] | | Indeed. AmigaOS was the pinnacle of personal computing OS design. Everything since has been a regression. Fite me. | | |
| |
| ▲ | josephg 15 hours ago | parent | prev | next [-] | | I like FreeBSD for the same reason. The whole system is sane and coherent. Illumos is the same. I wish either of those systems had the same hardware & software support. I’d swap my desktop over in a heartbeat if I could. | |
| ▲ | bitwize 6 hours ago | parent | prev [-] | | OpenBSD—all the BSDs really—have an even more unstable ABI than Linux. The syscall interface, in particular, is subject to change at any time. Statically linked binaries for one Linux version will generally Just Work with any subsequent version; this is not the case for BSD! There's a lot to like about BSD, and many reasons to prefer OpenBSD to Linux, but ABI backward-compatibility is not one of them! One of Linux's main problems is that it's difficult to supply and link versions of library dependencies local to a program. Janky workarounds such as containerization, AppImage, etc. have been developed to combat this. But in the Windows world, applications literally ship, and link against, the libc they were built with (msvcrt, now ucrt I guess). |
|
|
| ▲ | fragmede 14 hours ago | parent | prev | next [-] |
What's interesting to think about is Conway's law and monorepos and the Linux kernel and userland. If it were all just one big repo, then making breaking changes wouldn't actually break anything: the userland callers would be updated in the same commit. The whole ifconfig > ip debacle is an example of where one giant monorepo would have changed how things happened. |
|
| ▲ | duped 20 hours ago | parent | prev [-] |
| It's really just glibc |
| |
| ▲ | qcnguy 19 hours ago | parent | next [-] | | It's really just not. GTK is on its fourth major version. Wayland broke backwards compatibility with tons of apps. | | |
| ▲ | prmoustache 11 hours ago | parent | next [-] | | Multiple versions of GTK or Qt can coexist on the same system. GTK2 is still packaged on most distros; I think, for example, GIMP only switched to GTK3 last year or so. | |
| ▲ | dadoum 17 hours ago | parent | prev | next [-] | | GTK's update schedule is very slow, and you can run multiple major versions of GTK on the same computer, so that's not the right argument. When people say GTK backwards compatibility is bad, they are referring in particular to its breaking changes between minor versions. It was common for themes and apps to break (or work differently) between minor versions of GTK+ 3, as deprecations were sometimes accompanied by the breaking of the deprecated code. (Anyway, before Wayland support became important, people stuck to GTK+ 2, which was simple, stable, and still supported at the time; and everyone had it installed on their computer alongside GTK+ 3.) Breaking between major versions is annoying (2 to 3, 3 to 4), but for the most part it's renaming work and some slight API modifications, reminiscent of the Python 2 to 3 switch, and it has only happened twice since 2000. | |
| ▲ | JoshTriplett 16 hours ago | parent | prev [-] | | The difference is that you can statically link GTK+, and it'll work. You can't statically link glibc, if you want to be able to resolve hostnames or users, because of NSS modules. | | |
| |
| ▲ | amelius 19 hours ago | parent | prev [-] | | Can't we just freeze glibc, at least from an API version perspective? | | |
| ▲ | johncolanduoni 16 hours ago | parent | next [-] | | We definitely can, because almost every other POSIX libc doesn’t have symbol versioning (or MSVC-style multi-version support). It’s not like the behavior of “open” changes radically all the time, and you need to know exactly what source symbol it linked against. It’s really just an artifact of decisions from decades ago, and the cure is way worse than the disease. | |
| ▲ | duped 16 hours ago | parent | prev | next [-] | | The problem is not the APIs, it's symbol versions. You will routinely get loader errors when running software compiled against a newer glibc than what a system provides, even if the caller does not use any "new" APIs. glibc-based toolchains are ultimately missing a GLIBC_MIN_DEPLOYMENT_TARGET definition that gets passed to the linker so it knows which minimum version of glibc your software supports, similar to how Apple's toolchain lets you target older macOS from a newer toolchain. | | |
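To see the "deployment target" a binary implicitly encodes, you can dump the glibc version tags it references; the highest one is the newest glibc feature level it demands, and systems with an older glibc will refuse to load it. A sketch using binutils, with /bin/ls as a stand-in for any dynamically linked binary:

```shell
# List every glibc symbol version the binary references, oldest first.
# A machine whose glibc predates the last line fails at load time with
# a "version `GLIBC_x.y' not found" error, exactly as described above.
objdump -T /bin/ls | grep -o 'GLIBC_[0-9.]*' | sort -u -V
```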
| ▲ | jhasse 2 hours ago | parent | next [-] | | That's exactly what apgcc from Autopackage provided (20 years ago). https://github.com/DeaDBeeF-Player/apbuild But compiling in a container is easier and also solves other problems. | |
| ▲ | amelius 15 hours ago | parent | prev | next [-] | | Yes, so that's why freezing the glibc symbol versions would help. If everybody uses the same version, you cannot get conflicts (at least after it has rippled through and everybody is on the same version). The downside is that we can't add anything new to glibc, but I'd say given all the trouble it produces, that's worth accepting. We can still add bugfixes and security fixes to glibc, we just don't change the APIs of the symbols. | | |
| ▲ | uecker 13 hours ago | parent [-] | | It should not be necessary to freeze it. glibc is already extremely backwards compatible. The problem is people distributing programs that request the newest version even though they do not really require it, and this then fails on systems having an older version. At least this is my understanding. The actual practical problem is not glibc but the constant GUI / desktop API changes. |
| |
| ▲ | Y_Y 15 hours ago | parent | prev [-] | | In principle you can patch your binary to accept the old local version, though I don't remember ever getting it to work right. Anyway, for the brave or foolhardy, here's the gist: patchelf --set-interpreter /lib64/ld-linux-x86-64.so.2 "$APP"
patchelf --set-rpath /lib "$APP"
| | |
| ▲ | follower an hour ago | parent | next [-] | | > [...] brave or foolhardy, [...] Heed the above warning as down this rpath madness surely lies! Exhibit A: https://gitlab.com/RancidBacon/notes_public/-/blob/main/note... Exhibit B: https://gitlab.com/RancidBacon/notes_public/-/blob/main/note... Exhibit C: https://gitlab.com/RancidBacon/notes_public/-/blob/main/note... Oh, sure, rpath/runpath shenanigans will work in some situations but then you'll be tempted to make such shenanigans work in all situations and then the madness will get you... To save everyone a click here are the first two bullet points from Exhibit A: * If an executable has `RPATH` (a.k.a. `DT_RPATH`) set but a shared library that is a (direct or indirect(?)) dependency of that executable has `RUNPATH` (a.k.a. `DT_RUNPATH`) set then the executable's `RPATH` is ignored! * This means a shared library dependency can "force" loading of an incompatible [(for the executable)] dependency version in certain situations. [...] Further nuances regarding LD_LIBRARY_PATH can be found in Exhibit B but I can feel the madness clawing at me again so will stop here. :) | |
| ▲ | btdmaster 11 hours ago | parent | prev [-] | | Yes, you can do this -- thanks for mentioning it; I was interested and checked how you would go about it. 1. Delete the shared symbol versioning as per https://stackoverflow.com/a/73388939 (patchelf --clear-symbol-version exp mybinary) 2. Replace libc.so with a fake library that exports the right version symbol, using a version script, e.g. version.map:
GLIBC_2.29 {
    global:
        *;
};
Build it against an empty fake_libc.c: `gcc -shared -fPIC -Wl,--version-script=version.map,-soname,libc.so.6 -o libc.so.6 fake_libc.c` 3. Hope that you can still point the symbols back to the real libc (either by writing a giant pile of dlsym C code, or some other way; I'm unclear on this part). Ideally glibc would stop checking the version if it's not actually marked as needed by any symbol; not sure why it doesn't (technically it's the same thing normally, so performance?).
|
| |
| ▲ | boredatoms 19 hours ago | parent | prev [-] | | Or just pre-install all the versions on each distro and pick the right one at load-time |
|
|