drowsspa 9 hours ago

I find it amazing how much the mess that building C/C++ code has been for so many decades seems to have influenced the direction technology, the economy, and even politics have taken.

Really, what would the world look like if this problem had been properly solved? Would the centralization and monetization of the Internet have followed the same path? Would Windows be so dominant? Would social media have evolved to the current status? Would we have had a chance to fight against the technofeudalism we're headed for?

AshamedCaptain 5 hours ago | parent | next [-]

What I find amazing is why people continuously claim glibc is the problem here. I have a commercial software binary from 1996 that _still works_ to this day. It even links with X11, and works under Xwayland.

The trick? It's not statically linked, but dynamically linked. And it doesn't link with anything other than glibc, X11 ... and bdb.

At this point I think people just do not know how binary compatibility works at all. Or they refer to a different problem that I am not familiar with.

markus92 5 hours ago | parent | next [-]

We (a small HPC system) just upgraded our OS from RHEL 7 to RHEL 9. Most user apps are dynamically linked, too.

You wouldn't believe how many old binaries broke. Lots of ABI bumps: libpng, ncurses, heck, even stuff like readline and libtiff all changed just enough for linker errors to occur.

Ironically, all the statically compiled stuff was fine. Small things that, as you mention, only link to glibc and X11 were fine too. Funnily enough, grabbing some old .so files from the RHEL 7 install and dumping them into LD_LIBRARY_PATH also worked better than expected.

But yeah, now that I'm writing this out, glibc was never the problem in terms of forward compatibility. Now running stuff compiled on modern Ubuntu or RHEL 10 on the older OS, now that's a whole different story...

AshamedCaptain 5 hours ago | parent [-]

> Funnily enough, grabbing some old .so files from the RHEL 7 install and dumping them into LD_LIBRARY_PATH also worked better than expected.

Why "better than expected"? I can run the entire userspace from Debian Etch on a kernel built two days ago... some kernel settings need to be changed (because of the old glibc! but it's not glibc's fault: it's the kernel who broke things), but it works.

> Now running stuff compiled on modern Ubuntu or RHEL 10 on the older OS, now that's a whole different story...

But this is a different problem, and no one makes promises here (not the kernel, not musl). So all the talk of statically linking with musl to get that type of compatibility is bullshit (at some point, you're going to hit a syscall/instruction/whatever that the newer musl uses and the older kernel/hardware does not support).
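
A minimal sketch of that failure mode, with getrandom(2) (added in Linux 3.17) standing in for whatever newer syscall a newer musl might issue first:

    /* enosys_probe.c -- sketch: issue getrandom(2) as a raw syscall, the
       way a newer libc would, to show the ENOSYS failure a newer static
       binary hits on a pre-3.17 kernel. */
    #define _GNU_SOURCE
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void) {
        char buf[16];
        /* Raw syscall on purpose: no libc wrapper, no fallback path. */
        long r = syscall(SYS_getrandom, buf, sizeof buf, 0);
        if (r < 0 && errno == ENOSYS)
            puts("kernel too old: getrandom(2) not implemented (ENOSYS)");
        else if (r < 0)
            printf("getrandom failed: %s\n", strerror(errno));
        else
            printf("getrandom works: got %ld bytes\n", r);
        return 0;
    }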

marcosdumay 4 hours ago | parent | prev | next [-]

The problem with modern libc (newer than ~2004; I have no idea what that 1996 one is doing) isn't that old software stops working. It's that you can't compile software on your up-to-date desktop and have it run on your "security updates only" server. Or on your clients' "couple of years out of date" computers.

And that doesn't require using newer functionality.

AshamedCaptain 4 hours ago | parent [-]

But this is not "backwards compatibility". No one promises the type of "forward compatibility" you are asking for. Even win32 only does it exceptionally... maybe today you can still build a win10 binary with a win11 toolchain, but you certainly cannot build a win98 binary with it.

And this has nothing to do with 1996, or 2004 glibc at all. In fact, glibc makes this otherwise impossible task actually possible: you can force linking against older symbols, though that solves only a fraction of what you're trying to achieve. Statically linking / musl does not solve this either. At some point musl is going to use a newer syscall, or some other newer feature, and you're broken again.
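
A minimal sketch of that forced-older-symbol trick (the version strings below are the x86-64 ones; list the ones your libc actually exports with objdump -T):

    /* old_memcpy.c -- sketch of forcing the link against an older glibc
       symbol version so the binary also loads on older systems.
       memcpy@GLIBC_2.2.5 is the x86-64 pre-2.14 version; adjust per
       "objdump -T /lib/x86_64-linux-gnu/libc.so.6 | grep memcpy". */
    #include <stdio.h>
    #include <string.h>

    /* Bind references to the old version instead of memcpy@GLIBC_2.14. */
    __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

    int main(void) {
        char dst[6];
        memcpy(dst, "hello", sizeof dst);
        puts(dst);
        return 0;
    }

    /* Build with: gcc -fno-builtin-memcpy old_memcpy.c
       (-fno-builtin-memcpy keeps gcc from inlining the call away) */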

Also, what is so hard about building your software on your "security updates only" server? Or in a chroot of it, at least? As I was saying below, I have a Debian 2006-ish chroot for this purpose...

marcosdumay 4 hours ago | parent [-]

Windows dlls are forward compatible in that sense. If you use the Linux kernel directly, it is forward compatible in that sense. And, of course, there is no issue at all with statically linked code.

The problem is with Linux dynamic linking, and the idea that you must not statically link the glibc code. And you can circumvent it by freezing your glibc abstraction interface, so that if you need to add another function, you do so by making another library entirely. But I don't know if musl does that.
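
For what it's worth, glibc's actual mechanism for this is symbol versioning within one library rather than a new library per change. A rough sketch of the idea (library, symbols, and version tags all made up):

    /* libdemo.c -- rough sketch of glibc-style symbol versioning.
       Old binaries keep resolving foo@DEMO_1.0; new links get the
       DEMO_2.0 default. */
    int foo_v1(int x)        { return x + 1; }   /* frozen original ABI    */
    int foo_v2(int x, int y) { return x + y; }   /* newer, incompatible ABI */

    __asm__(".symver foo_v1, foo@DEMO_1.0");   /* old version, kept forever */
    __asm__(".symver foo_v2, foo@@DEMO_2.0");  /* @@ marks the default      */

    /* Build with a version script naming both tags, e.g. demo.map:
         DEMO_1.0 { global: foo; local: *; };
         DEMO_2.0 { global: foo; } DEMO_1.0;
       gcc -shared -fPIC libdemo.c -o libdemo.so -Wl,--version-script=demo.map */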

AshamedCaptain 3 hours ago | parent [-]

> Windows dlls are forward compatible in that sense.

If you want to go to that level, ELF is also forward compatible in that sense.

This is completely irrelevant, because what the developer is going to see is that the binaries they build on XP SP3 no longer work on XP SP2 because of a link error: the _statically linked_ runtime is going to call symbols that are not in XP SP2 DLLs (e.g. the DecodePointer debacle).

> If you use the Linux kernel directly, it is forward compatible in that sense.

Or not, because there will be a note in the ELF headers with the minimum kernel version required, which is going to be set to a recent version even if you do not use any newer feature (unless you play with the toolchain). (PE has a similar field too, leading to the "not a valid win32 executable" messages.)
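
That field is easy to check: readelf -n shows it as "OS: Linux, ABI: X.Y.Z". Or, as a quick-and-dirty sketch (64-bit little-endian ELF only, minimal hardening), dig the same GNU ABI-tag note out yourself:

    /* abitag.c -- sketch: print the kernel-version floor stored in a
       binary's GNU ABI-tag note (same info as "readelf -n"). */
    #include <elf.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(int argc, char **argv) {
        if (argc != 2) { fprintf(stderr, "usage: %s <elf-file>\n", argv[0]); return 1; }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        Elf64_Ehdr eh;
        if (fread(&eh, sizeof eh, 1, f) != 1 || memcmp(eh.e_ident, ELFMAG, SELFMAG)) {
            fprintf(stderr, "not an ELF file\n"); return 1;
        }

        for (int i = 0; i < eh.e_phnum; i++) {
            Elf64_Phdr ph;
            fseek(f, (long)(eh.e_phoff + (Elf64_Off)i * eh.e_phentsize), SEEK_SET);
            if (fread(&ph, sizeof ph, 1, f) != 1 || ph.p_type != PT_NOTE)
                continue;

            unsigned char *buf = malloc(ph.p_filesz);
            fseek(f, (long)ph.p_offset, SEEK_SET);
            if (fread(buf, 1, ph.p_filesz, f) != ph.p_filesz) { free(buf); continue; }

            /* Walk the notes: header, 4-byte-padded name, 4-byte-padded desc. */
            for (size_t off = 0; off + sizeof(Elf64_Nhdr) <= ph.p_filesz; ) {
                Elf64_Nhdr *n = (Elf64_Nhdr *)(buf + off);
                char *name = (char *)(n + 1);
                Elf64_Word *desc = (Elf64_Word *)(name + ((n->n_namesz + 3) & ~3u));
                if (n->n_type == NT_GNU_ABI_TAG && n->n_namesz == 4 &&
                    memcmp(name, "GNU", 4) == 0 && n->n_descsz >= 16)
                    /* desc[0] = OS (0 = Linux); desc[1..3] = minimum kernel */
                    printf("minimum kernel: %u.%u.%u\n", desc[1], desc[2], desc[3]);
                off += sizeof *n + ((n->n_namesz + 3) & ~3u) + ((n->n_descsz + 3) & ~3u);
            }
            free(buf);
        }
        fclose(f);
        return 0;
    }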

> And, of course, there is no issue at all with statically linked code.

I would say statically linked code is precisely the root of all these problems.

In addition to bringing problems of its own. E.g. games that dynamically link with SDL can be patched to use any other SDL version, including one with bugfixes for X support, audio, etc. Games that statically link with SDL? Sorry...

> And you can circumvent it by freezing your glibc abstraction interface, so that if you need to add another function, you do so by making another library entirely. But I don't know if musl does that.

Funnily enough, I think that is exactly the same as the solution I'm proposing for this conundrum: just (dynamically) link with the older glibc! Voila: your binary now works with glibc from 1996 and glibc from 2026.

Frankly, glibc is already the project with the best binary compatibility on the entire Linux desktop, if not the only one with a binary compatibility story at all. The kernel is _not_ better in this regard (e.g. /dev/dsp).

marcosdumay 3 hours ago | parent [-]

If you use only features available on the older version, for sure, you can compile your software on Win-7 and have it run on Win-2000. Without following any special procedure.

I know, I've done that.

> just (dynamically) link with the older glibc!

Except that the older glibc is unmaintained and very hard to get a hold of and use. If you solve that, yeah, it's the same.

AshamedCaptain 2 hours ago | parent [-]

> If you use only features available on the older version, for sure, you can compile your software on Win-7 and have it run on Win-2000. Without following any special procedure.

No, you can't. When you use a 7-era toolchain (e.g. VS 2012), it sets the minimum client version in the PE header to Vista, not XP, much less 2k.

If you use VC++6 on 7, then yes, you can; but that's not really that different from me using a Debian Etch chroot to build.

Even within the XP era this happens, since there are VS versions that target XP _SP2_ and produce binaries that are not compatible with XP _SP1_. That's the "DecodePointer" debacle I was mentioning. _Even_ if you do not use any "SP2" feature (as few as they are), the runtime (the statically linked part, not MSVCRT) is going to call DecodePointer, so even the smallest hello world will catastrophically fail on older win32 versions.

Just Google around for hundreds of confused developers.

> Except that the older glibc is unmaintained and very hard to get a hold of and use.

"unmaintained" is another way of saying "frozen" or "security updates only" I guess. But ... hard to get a hold of ? You are literally running it on your the "security updates only" server that you wanted to target in the first place!

CSm1n an hour ago | parent [-]

> No, you can't. When you use a 7-era toolchain (e.g. VS 2012), it sets the minimum client version in the PE header to Vista, not XP, much less 2k.

Yes, you can! There are even multiple Windows 10 era toolchains that officially support XP. VS 2017 was the last release that could build XP binaries.

AshamedCaptain an hour ago | parent [-]

"Without following any special procedure". I know you can install older toolchains and then build using those, but I can do as much on any platform (e.g. by using a chroot). The default on VS2012 is Vista-only binaries.

tonymet 2 hours ago | parent | prev [-]

Can you write up a blog post on how this works? Because both as a publisher and as a user, broken binaries are much more the norm.

joshmarinacci 9 hours ago | parent | prev [-]

How does this technical issue affect the economy and politics? In what way would the world be different just because we used a better linker?

m463 3 hours ago | parent | next [-]

Well, you could just look at things from an interoperability and standards viewpoint.

Lots of tech companies and organizations have created artificial barriers to entry.

For example, most people own a computer (their phone) that they cannot control. It will play media under the control of other organizations.

The whole top-to-bottom infrastructure of DRM was put in place by Hollywood, and is now used by every other program to control/restrict what people do.

nacozarina 5 hours ago | parent | prev [-]

existential crisis: so hot right now