akdev1l 7 hours ago

People always say this to shit on glibc, meanwhile those guys bend over backwards to provide strong API compatibility. It rubs me the wrong way.

What glibc does not provide is forward compatibility. An application built with glibc 2.12 will not necessarily work with any older version.

Such an application could be rebuilt to work with an older glibc, as the API is stable. The ABI is not, which is why the application would need to be rebuilt.

glibc does not provide ABI compatibility in that direction because, from their perspective, the software should be rebuilt for newer/older versions as needed. Maintaining a stable ABI mostly helps proprietary software where the source is not available for recompilation. Naturally the GNU folks building glibc don't care about that use case much.

I guess you didn’t mention glibc in your comment but I already typed this out

kelnos 6 hours ago | parent | next [-]

> What glibc does not provide is forward compatibility. An application built with glibc 2.12 will not necessarily work with any older version.

Is this correct? I think you perhaps have it backward? If I compile something against the glibc on my system (Debian testing), it may fail to run on older Debian releases that have older glibc versions. But I don't see why an app built against glibc 2.12 wouldn't run on Debian testing. glibc actually does a good job of using symbol versioning, and IIRC they haven't removed any public functions, so I don't see why this wouldn't work.
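You can see the symbol versioning at work on any glibc-based Linux system; `/bin/ls` here is just an arbitrary dynamically linked binary:

```shell
# List the versioned glibc symbols a binary references. The dynamic
# loader refuses to start the binary only if a required version node
# (e.g. GLIBC_2.34) is newer than what the installed libc provides.
objdump -T /bin/ls | grep GLIBC_ | head -n 5
```

Since glibc keeps old version nodes around indefinitely, a binary requiring only old nodes keeps loading on new glibc.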

More at issue would be the availability of other dependencies. If that old binary compiled against glibc 2.12 was also linked with, say, OpenSSL 0.9.7, I'd have to go out and build a copy of that myself, as Debian no longer provides it, and OpenSSL 3.x is not ABI-compatible.

> glibc does not provide ABI compatibility because from their perspective the software should be rebuilt for newer/older versions as needed.

If true (I don't think it is), that is a hard showstopper for most companies that want to develop for Linux. And I wouldn't blame them.

jdpage 3 hours ago | parent | next [-]

I don't know what the official policy is, but glibc uses versioned symbols and certainly provides enough ABI backward-compatibility that the Python package ecosystem is able to define a "manylinux" target for prebuilt binaries (against an older version of glibc, natch) that continues to work even as glibc is updated.
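The tag resolution can be sketched in a few lines: a `manylinux_X_Y` wheel declares it needs glibc >= X.Y, and pip compares that against the interpreter's runtime libc (the `wheel_is_compatible` helper below is hypothetical, just illustrating the comparison; `libc_ver()` returns empty strings on non-glibc systems):

```python
import platform

# Report which libc the interpreter is actually running on.
libc, version = platform.libc_ver()
print(libc, version)

def wheel_is_compatible(required: tuple) -> bool:
    """Rough sketch of the manylinux glibc check (hypothetical helper)."""
    if libc != "glibc":
        return False
    have = tuple(int(part) for part in version.split("."))
    return have >= required

# manylinux2014 corresponds to glibc 2.17.
print(wheel_is_compatible((2, 17)))
```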

akdev1l 5 hours ago | parent | prev [-]

Sorry, I am not sure if 2.12 is a recent release or an older one; I made that number up.

If the application is built against 2.12, it may link against symbols versioned 2.12 and so may not work against 2.11; the opposite (building against 2.11 and running on 2.12) will work.

>If true (I don't think it is), that is a hard showstopper for most companies that want to develop for Linux.

Not really a showstopper: vendors just do what vendors do and bundle all their dependencies in, similar to Windows when you use anything outside of the Win32 API.

The only problem with this approach is that glibc cannot have multiple versions running at once. We have “fixed” this with process namespaces and hence containers/flatpak where you can bundle everything including your own glibc.

Naturally the downside is that each app bundles their own libraries.

em-bee 4 hours ago | parent [-]

> The only problem with this approach is that glibc cannot have multiple versions running at once

that's not correct. libraries have versions for a reason. the only thing preventing the installation of multiple glibc versions is the package manager or the package versioning.

this makes building against an older version of glibc non-trivial, because there isn't a ready-made package that you can just install. the workarounds take effort:

https://stackoverflow.com/questions/2856438/how-can-i-link-t...

the problem for companies developing on linux is that it is not trivial

akdev1l 3 hours ago | parent | next [-]

glibc must match the dynamic linker, so you would need a separate linker, and binaries usually have a hardcoded path to the system linker, which you need to binary-patch (https://stackoverflow.com/questions/847179/multiple-glibc-li...)

So in practice you can only have one linker and one glibc (unless you use a chroot or containers, at which point just build your stuff on Ubuntu 12.04 or whatever environment)

seba_dos1 4 hours ago | parent | prev [-]

You compile in a container/chroot with the userspace you target. Done.
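A minimal sketch of that approach (the image tag and build command are illustrative; Ubuntu 12.04 shipped glibc 2.15):

```dockerfile
# Building inside an old userspace makes the binary link against that
# release's glibc, so it runs anywhere with glibc >= 2.15 installed.
FROM ubuntu:12.04
RUN apt-get update && apt-get install -y build-essential
WORKDIR /src
COPY . .
RUN make
```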

In the context of games, that will likely be Steam Runtime.

em-bee 3 hours ago | parent [-]

it's not that simple. you want to be able to use a modern toolchain (compilers that support the latest standards) but build a binary that runs on older systems.

the only way to achieve that is to get the older libraries installed on a newer system, or you could try backporting the new toolchain to the older system. but that's a lot harder.

seba_dos1 2 hours ago | parent [-]

It may be hard-ish, sometimes. Sometimes it's a breeze. And sometimes you can just use host's toolchain with container's sysroot and proceed as if you were cross-compiling. Most of the time it's not a big deal.

Levitating 3 hours ago | parent | prev | next [-]

I personally believe we should just compile games statically. Problem solved, right?

krastanov 6 hours ago | parent | prev | next [-]

I am sorry, I did not mean to imply anyone else is doing something poorly. I believe glibc's (and the rest of the ecosystem of libraries that are probably more limiting) policies and principled stance are quite correct and overall "good for humanity". But as you mentioned, they are inconvenient for a gamer that just wants to run an executable from 10 years ago (for which the source was lost when the game studio was bought).

em-bee 4 hours ago | parent [-]

that 10 year old binary should run, unless it links against a library that no longer exists.

for example here is a 20 year old binary of the game mirrormagic that runs just fine on my modern fedora machine:

    ~/Downloads/mirrormagic-2.0.2> ldd mirrormagic
        linux-gate.so.1 (0xf7f38000)
        libX11.so.6 => /lib/libX11.so.6 (0xf7db5000)
        libm.so.6 => /lib/libm.so.6 (0xf7cd0000)
        libc.so.6 => /lib/libc.so.6 (0xf7ad5000)
        libxcb.so.1 => /lib/libxcb.so.1 (0xf7aa9000)
        /lib/ld-linux.so.2 (0xf7f3b000)
        libXau.so.6 => /lib/libXau.so.6 (0xf7aa4000)
    ~/Downloads/mirrormagic-2.0.2> ls -la mirrormagic
    -rwxr-xr-x. 1 em-bee em-bee 203633 Jun  7  2003 mirrormagic
ok, there are some issues: the sound is not working, and the resolution does not scale. but there are no issues with linked libraries.

charcircuit 7 hours ago | parent | prev [-]

No other operating system works like this. Supporting older versions of an OS or runtime with a current compiler toolchain is a standard expectation of developers.

akdev1l 7 hours ago | parent | next [-]

Plenty of operating systems work like this. Just not highly commercial ones because proprietary software is the norm on those.

From a bit of research, it looks like FreeBSD, for example, only guarantees a stable ABI within a major version, and I imagine if you build something for FreeBSD 14 it won't work on 13.

Stable ABI literally only benefits software where the user doesn’t have the source. Any operating system which assumes you have the source will not prioritize it.

(Edit: actually, thinking harder, macOS/iOS is much worse on binary compatibility; for example, Intel binaries will eventually stop working entirely due to the Apple Silicon transition. Apple just hits developers with a stick until they rebuild their apps.)

kelnos 6 hours ago | parent | next [-]

Yes, and this is a great reason why FreeBSD isn't a popular gaming platform, or for proprietary software in general. I'm not saying this is a bad thing, but... that's why.

> Stable ABI literally only benefits software where the user doesn’t have the source.

It also benefits people who don't want to have to do busywork every time the OS updates.

toast0 6 hours ago | parent [-]

FreeBSD isn't too bad, you can build/install compat packages back to FreeBSD 4.x, and I'd expect things to largely work. At previous jobs we would mostly build our software for the oldest FreeBSD version we ran and distribute it to hosts running newer FreeBSD releases and outside some exceptional cases, it would work. But you'd have to either only use base libraries, or be careful about distribution of the libraries you depend on. You can't really use anything from ports, unless you do the same build on oldest and distribute plan.

At Yahoo, we'd build on 4.3-4.8, and run on 4.x - 8.x. At WhatsApp, I think I remember mostly building on 8.x and 9.x, for 8.x - 11.x. The only thing that I remember causing major problems was extending the bitmask for CPU pinning; there were a couple updates where old software + old kernel CPU pinning would work, and old software + new kernel CPU pinning failed; eventually upstream made that better as long as you don't run old software on a system with more cores than fit in the bitmask. I'm sure there were a few other issues, but I don't remember them ...

charcircuit 5 hours ago | parent | prev [-]

You can still run x86 binaries on new MacBooks; they don't stop working entirely. Using Wine I can even run x86 Windows binaries.

akdev1l 5 hours ago | parent [-]

They announced Rosetta 2 will be deprecated and eventually removed (macOS 28?).

By that point they will have hit developers with the stick enough to get them to port to aarch64.

(Arguably this could be a special case, though, since it is due to an architecture transition.)

thescriptkiddie 7 hours ago | parent | prev [-]

what about mac os?

kelnos 6 hours ago | parent | next [-]

macOS doesn't require developers to rebuild apps with each major OS release, as long as they link with system libraries and don't try to (for example) directly make syscalls.

Apple may require rebuilds at some point for their Mac Store (or whatever they call it), but it's not required from a technical perspective.

The one exception here is CPU architecture changes, and even then, Apple has provided seamless emulation/translation layers that they keep around for quite a few years before dropping support.

charcircuit 5 hours ago | parent | prev [-]

The latest Xcode supports targeting back to macOS 11. This covers >99% of Macs, which is acceptable for most developers.

https://developer.apple.com/support/xcode/