phkahler 7 days ago

>> successfully brought x86 to the big.LITTLE core arrangement.

Really? I thought they said using E-cores would be better than hyperthreading. AMD has doubled down on hyperthreading, putting a second decoder in each core that doesn't directly benefit single-thread perf. So Intel's 24 cores are now competitive with (actually losing to) 16 Zen 5 cores. And that's without using AVX-512, which Arrow Lake doesn't even support.

I was never a fan of big.little for desktop or even laptops.

hinkley 6 days ago | parent | next [-]

In nearly every generation of Intel chip where I needed to care about whether hyperthreading was a net positive, it either proved to be a net reduction in throughput or a single-digit improvement with greatly increased jitter. Even if you manage to get more instructions per cycle with it on, the variability causes grief for systems you have or want telemetry on. I kind of wonder why they keep trying.

I don’t know AMD well enough to say whether it works better for them.

adrian_b 6 days ago | parent [-]

Whether SMT a.k.a. hyperthreading is useful or not depends greatly on the application.

There is one important application for which SMT is almost always beneficial: the compilation of a big software project, where hundreds or thousands of files are compiled concurrently. Depending on the CPU, the project build time without SMT is usually at least 20% greater than with SMT, and for some CPUs up to 30% greater.

For applications that spend much of their time executing carefully optimized loops, SMT is usually detrimental. For instance, on a Zen 3 CPU, running multithreaded GeekBench 6 with SMT disabled improves the benchmark results by a few percent.
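
If you want to check which regime your own workload falls into, here is a minimal sketch (Linux + GCC assumed; the 8/16 core counts and the FP kernel are placeholders, swap in something closer to your real work): run one copy of a kernel per physical core, then one per logical (SMT) core, and compare throughput.

    /* smt_scaling.c - sketch: compare throughput with one thread per
     * physical core vs one per logical (SMT) core.
     * Build: gcc -O2 -pthread smt_scaling.c -o smt_scaling */
    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    #define ITERS 400000000ULL

    static void *kernel(void *arg)
    {
        /* Eight independent multiply-add chains: meant to keep a core's FP
         * units mostly busy from a single thread, so an SMT sibling has
         * little spare execution bandwidth. Swap in something resembling
         * your real workload for meaningful numbers. */
        double a[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
        for (uint64_t i = 0; i < ITERS; i++)
            for (int j = 0; j < 8; j++)
                a[j] = a[j] * 1.000000001 + 1e-12;

        double sum = 0;
        for (int j = 0; j < 8; j++)
            sum += a[j];
        *(double *)arg = sum;       /* keep the result live */
        return NULL;
    }

    static double run(int nthreads)
    {
        pthread_t tid[256];          /* supports up to 256 threads */
        double sink[256];
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < nthreads; i++)
            pthread_create(&tid[i], NULL, kernel, &sink[i]);
        for (int i = 0; i < nthreads; i++)
            pthread_join(tid[i], NULL);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }

    int main(void)
    {
        int phys = 8, logical = 16;  /* placeholders: set for your CPU */
        double t_phys = run(phys);
        double t_log  = run(logical);

        /* Throughput = kernels completed per second. */
        printf("%d threads: %.2f s (%.2f kernels/s)\n", phys, t_phys, phys / t_phys);
        printf("%d threads: %.2f s (%.2f kernels/s)\n", logical, t_log, logical / t_log);
        printf("SMT throughput change: %+.1f%%\n",
               100.0 * ((logical / t_log) / (phys / t_phys) - 1.0));
        return 0;
    }

A stall-heavy task like a compiler will usually show a gain from the second run; a kernel that already saturates its core's execution units will show little or none.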

astrange 6 days ago | parent | prev [-]

It's working well for Mac laptops, although I'd rather people call it "asymmetric multiprocessing" than "big.LITTLE". Why is it written like that anyway?

(Wikipedia seems to want me to call it "heterogeneous computing", but that doesn't make sense - surely that term should mean running on CPU+GPU at the same time, or multiple different ISAs.)

Of course, it might've worked fine if they used symmetric CPU cores as well. Hard to tell.

jlokier 6 days ago | parent | next [-]

> Why is it written like that anyway?

Because it's an Arm trademark, and that's how they want it written:

https://www.arm.com/company/policies/trademarks/arm-trademar...

> (Wikipedia seems to want me to call it "heterogeneous computing", but that doesn't make sense - surely that term should mean running on CPU+GPU at the same time, or multiple different ISAs.)

According to Wikipedia, it means running with different architectures, which doesn't necessarily mean instruction set architectures.

They do actually have different ISAs though. On both Apple Silicon and x86, some vector instructions are only available on the performance cores, so some tasks can only run on the performance cores. The issue is alluded to on Wikipedia:

> In practice, a big.LITTLE system can be surprisingly inflexible. [...] Another is that the CPUs no longer have equivalent abilities, and matching the right software task to the right CPU becomes more difficult. Most of these problems are being solved by making the electronics and software more flexible.
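
For what it's worth, on Linux/x86 you can see what each core reports by pinning to it and running CPUID yourself. A rough sketch (GCC-specific; note that, as far as I know, shipping hybrid Intel parts disable AVX-512 on every core, so in practice they all answer the same):

    /* percore_avx512.c - sketch: pin to each online CPU in turn and ask CPUID
     * whether AVX-512F is reported (leaf 7, subleaf 0, EBX bit 16).
     * Linux + GCC only. Build: gcc -O2 percore_avx512.c -o percore_avx512 */
    #define _GNU_SOURCE
    #include <cpuid.h>
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long ncpu = sysconf(_SC_NPROCESSORS_ONLN);

        for (long cpu = 0; cpu < ncpu; cpu++) {
            cpu_set_t set;
            CPU_ZERO(&set);
            CPU_SET(cpu, &set);
            if (sched_setaffinity(0, sizeof(set), &set) != 0)
                continue;                    /* core offline or not allowed */

            unsigned eax, ebx, ecx, edx;
            if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
                continue;

            printf("cpu %2ld: AVX-512F %s\n", cpu,
                   (ebx & (1u << 16)) ? "reported" : "not reported");
        }
        return 0;
    }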

astrange 6 days ago | parent [-]

> They do actually have different ISAs though. On both Apple Silicon and x86, some vector instructions are only available on the performance cores, so some tasks can only run on the performance cores.

No, there's no difference on Apple Silicon. You'll never need to know which kind of core you're running on. (Except of course that some of them are slower.)

hinkley 6 days ago | parent | prev [-]

I thought big.LITTLE was an Arm thing. The little Rockchip chips I have running Armbian use it.