Arech 3 days ago

It's super annoying how SW vendors forcefully deprecate good-enough hardware.

Genuinely hate that, as Mozilla has deprived me of Firefox's translation feature because of it.

crote 3 days ago | parent | next [-]

The problem is that your "good enough" is someone else's "woefully inadequate", and sticking to the old feature sets is going to make the software horribly inefficient - or just plain unusable.

I'm sure there's someone out there who believes their 8086 is still "good enough", so should we restrict all software to the features supported by an 8086: 16-bit computations only, 1 MB of memory, no multithreading, no SIMD, no floats, no isolation between OS and user processes? That would obviously be ludicrous.

At a certain point it just doesn't make any sense to support hardware that old anymore. When it is cheaper to upgrade than to keep running the old stuff, and only a handful of people are sticking with the ancient hardware for nostalgic reasons, should that tiny group really be holding back basically your entire user base?

Arech 3 days ago | parent [-]

Ah, c'mon, spare me these strawman arguments. Good enough is good enough. If F-Droid wasn't worried about that, you definitely have no reason to worry about it on their behalf.

"A tiny group is holding back everyone" is another silly strawman argument - all decent packaging/installation systems support providing different binaries for different architectures. It's just a matter of compiling just another binary and putting it into a package. Nobody is being hold back by anyone, you just can't make a more silly argument than that...

bluGill 3 days ago | parent [-]

But it isn't good enough. SIMD provides measurable improvements to some people's code; to those people, what we had before isn't good enough. Sure, for the majority SIMD provides no noticeable benefit, so what we had before is good enough for them, but that isn't everybody.

johnklos 2 days ago | parent [-]

Are you SURE that nobody has figured out how to have code that uses SIMD if you have it, and not use it if you don't?

Your suggestion falls flat on its face when you look at software where performance REALLY matters: ffmpeg. Guess what? It'll use SIMD, but can compile and run just fine without.

I don't understand people who make things up when it comes to telling others why something shouldn't be done. What's it to you?

pabs3 2 days ago | parent | next [-]

It definitely is; you can even do it automatically with SIMDe and runtime function selection.

https://wiki.debian.org/InstructionSelection
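
For the runtime-selection half, a minimal sketch (assuming GCC or Clang on x86-64; the function names are made up, and this uses plain intrinsics rather than SIMDe):

    #include <immintrin.h>
    #include <stddef.h>
    #include <stdint.h>

    /* AVX2 variant: the target attribute enables AVX2 codegen for just this function. */
    __attribute__((target("avx2")))
    static int64_t sum_avx2(const int32_t *v, size_t n) {
        __m256i acc = _mm256_setzero_si256();
        size_t i = 0;
        for (; i + 8 <= n; i += 8)
            acc = _mm256_add_epi32(acc, _mm256_loadu_si256((const __m256i *)(v + i)));
        int32_t lanes[8];
        _mm256_storeu_si256((__m256i *)lanes, acc);
        int64_t s = 0;
        for (int k = 0; k < 8; k++) s += lanes[k];
        for (; i < n; i++) s += v[i];   /* leftover elements */
        return s;
    }

    /* Portable fallback, runs on any x86-64 CPU. */
    static int64_t sum_scalar(const int32_t *v, size_t n) {
        int64_t s = 0;
        for (size_t i = 0; i < n; i++) s += v[i];
        return s;
    }

    int64_t sum(const int32_t *v, size_t n) {
        /* Check the CPU at runtime; real code would cache the result. */
        return __builtin_cpu_supports("avx2") ? sum_avx2(v, n) : sum_scalar(v, n);
    }

One binary runs everywhere and still uses AVX2 where it exists; an older machine just takes the slow path.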

wtallis 2 days ago | parent | prev [-]

ffmpeg is a bad example, because it's the kind of project that has lots of infrastructure for incorporating hand-optimized routines written in inline assembly or SIMD intrinsics, plus runtime detection to dispatch to the different optimized code paths. That's not something you get for free in any C/C++ code base; function multiversioning needs to be explicitly configured per function. By contrast, simply compiling with a newer instruction set baseline lets the compiler's autovectorizer use newer instructions whenever and wherever it finds an opportunity.
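
For what it's worth, the per-function mechanism looks roughly like this (GCC, and recent Clang, on x86-64; illustrative only, not ffmpeg's actual approach):

    #include <stddef.h>

    /* The compiler emits an AVX2 clone, an SSE4.2 clone and a default clone,
       and an ifunc resolver picks one at load time based on the running CPU. */
    __attribute__((target_clones("avx2", "sse4.2", "default")))
    void scale(float *v, size_t n, float k) {
        for (size_t i = 0; i < n; i++)   /* plain loop; each clone is autovectorized for its target */
            v[i] *= k;
    }

You still have to opt in per function (or compile whole files with different flags and write your own dispatcher), which is exactly why it doesn't come for free across an arbitrary code base.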

sparkie 3 days ago | parent | prev | next [-]

OTOH, if software wants to take advantage of modern features, it becomes hell to maintain if you need flags for every possible feature reported by CPUID. It's also unreasonable to expect maintainers to package dozens of builds, most of which are unlikely ever to be used.

There are some guidelines[1][2] for developers to follow to get a reasonable set of features, where they only need to manage ~4 variants. In this proposal the lowest set of features includes SSE4.1, which covers basically any x86_64 CPU from the past 15 years. In theory we could use a modern CPU to compile the 4 variants and ship them all in a FatELF, so we'd only need to distribute one set of binaries (a toy runtime check for picking a variant is sketched after the links below). This would be completely impractical if we had to support every possible CPU's distinct features, and the binaries would be huge.

[1]:https://lists.llvm.org/pipermail/llvm-dev/2020-July/143289.h...

[2]:https://en.wikipedia.org/wiki/X86-64#Microarchitecture_level...
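
For illustration, a toy check that a launcher or dispatcher could use to decide which of the ~4 variants to run (this assumes GCC 12 or newer, which accepts the microarchitecture level names as arguments to __builtin_cpu_supports):

    #include <stdio.h>

    int main(void) {
        /* Check from highest to lowest x86-64 microarchitecture level. */
        if (__builtin_cpu_supports("x86-64-v4"))      puts("pick the x86-64-v4 build");
        else if (__builtin_cpu_supports("x86-64-v3")) puts("pick the x86-64-v3 build");
        else if (__builtin_cpu_supports("x86-64-v2")) puts("pick the x86-64-v2 build");
        else                                          puts("pick the baseline x86-64 build");
        return 0;
    }

In practice glibc's loader can do the equivalent selection for shared libraries on its own via the glibc-hwcaps directories, which is what the proposal linked above is about.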

Arech 3 days ago | parent [-]

In most cases (and this was the case with Mozilla that I referred to) it's only a matter of compiling code that already has all the necessary support. They are using an upstream component that works perfectly fine on my architecture. They just decided to drop it, because they could.

sparkie 3 days ago | parent [-]

It's not only your own software, but also its dependencies. The link above is for glibc, and it specifically addresses incompatibility issues between different software. Unless you are going to compile your own glibc (for example, doing Linux From Scratch), you're going to depend on features shipped by someone else. In this case that means either baseline, with no SIMD support at all, or level A, which includes SSE4.1. It makes no sense for developers to keep maintaining software for 20 year old CPUs when they can't test it.

johnklos 2 days ago | parent | next [-]

> It makes no sense for developers to keep maintaining software for 20 year old CPUs when they can't test it.

This is horribly inaccurate. You can compile software for 20 year old CPUs and run that software on a modern CPU. You can run that software inside of qemu.

FYI, there are plenty of methods of selecting code at run time, too.

If we take what you're saying at face value, then we should give up on portable software, because nobody can possibly test code on all those non-x86 and/or non-modern processors. A bit ridiculous, don't you think?

sparkie 2 days ago | parent [-]

> You can compile software for 20 year old CPUs and run that software on a modern CPU.

That's testing it on the new CPU, not the old one.

> You can run that software inside of qemu.

Sure you can. Go ahead. Why should the maintainer be expected to do that?

> A bit ridiculous, don't you think?

Not at all. It's ridiculous to expect a software developer to attach any significance to compatibility with obsolete platforms. I'm not saying we shouldn't try. x86 has good backward compatibility; if it still works, that's good.

But if I implement an algorithm in AVX2, should I also be expected to implement a slower version of the same algorithm using SSE3 so that a 20 year old machine can run my software?

You can always run an old version of the software, and you can always do the work yourself to backport it. It's not my job as a software developer to be concerned about ancient hardware unless someone pays me specifically for that.

Would you expect Microsoft to ship Windows 12 with baseline compatibility? I don't know whether they would, but I'm pretty certain that if you tried running it on a 2005 CPU it would be pretty much non-functional, as performance would be dire. I doubt it would even get that far anyway, due to UEFI requirements which wouldn't be met on a machine running such a CPU.

yjftsjthsd-h 3 days ago | parent | prev [-]

> Unless you are going to compile your own glibc (for example, doing Linux From Scratch),

It's not that hard to use gentoo.

RealStickman_ 2 days ago | parent | prev [-]

The F-Droid builds have been slow for years, and with how old their servers apparently are, that isn't even surprising in retrospect.