Fundamental flaws of SIMD ISAs (2021) (bitsnbites.eu)
122 points by fanf2 19 hours ago | 92 comments
TinkersW 17 hours ago | parent | next [-]

I write a lot of SIMD and I don't really agree with this..

Flaw 1: fixed width

I prefer fixed width as it makes the code simpler to write, and the size is known at compile time so we know the size of our structures. Swizzle algorithms are also customized based on the size.

Flaw 2: pipelining

No CPU I care about is in-order, so this is mostly irrelevant, and even scalar instructions are pipelined.

Flaw 3: tail handling

I code with SIMD as the target, and have special containers that pad memory to SIMD width, no need to mask or run a scalar loop. I copy the last valid value into the remaining slots so it doesn't cause any branch divergence.
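
A minimal sketch of that padded-container idea (hypothetical names, assuming 8-float / 256-bit registers):

    #include <vector>
    #include <cstddef>

    // Hypothetical padded container: storage is rounded up to a multiple of
    // the SIMD width and the padding is filled with the last valid value, so
    // the final full-width load reads valid data and every lane takes the
    // same path (no tail loop, no masking).
    constexpr std::size_t kLanes = 8;  // assumed: 256-bit vectors of float

    struct PaddedArray {
        std::vector<float> data;
        std::size_t n = 0;  // logical length

        explicit PaddedArray(const std::vector<float>& src)
            : data(src), n(src.size()) {
            std::size_t padded = (n + kLanes - 1) / kLanes * kLanes;
            data.resize(padded, data.empty() ? 0.0f : data.back());
        }
    };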

PaulHoule 15 hours ago | parent | next [-]

In AVX-512 we have a platform that rewards the assembly language programmer like few platforms have since the 6502. I see people doing really clever things that are specific to the system, and on one level it is really cool, but on another level it means SIMD is the domain of the specialist. Intel puts out press releases about the really great features they have for the national labs and for Facebook, whereas the rest of us are 5-10 years behind the curve for SIMD adoption because the juice isn't worth the squeeze.

Just before libraries for training neural nets on GPUs became available I worked on a product that had a SIMD based neural network trainer that was written in hand-coded assembly. We were a generation behind in our AVX instructions so we gave up half of the performance we could have got, but that was the least of the challenges we had to overcome to get the product in front of customers. [1]

My software-centric view of Intel's problems is that they've been spending their customers' and shareholders' money to put features in chips that are fused off or might as well be fused off because they aren't widely supported in the industry. And they didn't see this as a problem, and neither did their enablers in the computing media and software industry. Just for example, Apple used to ship the MKL libraries, which were like a turbocharger for matrix math, back when they were using Intel chips. For whatever reason, Microsoft did not do this with Windows and neither did most Linux distributions, so "the rest of us" are stuck with a fraction of the performance that we paid for.

AMD did the right thing in introducing double pumped AVX-512 because at least assembly language wizards have some place where their code runs and the industry gets closer to the place where we can count on using an instruction set defined 12 years ago.

[1] If I'd been tasked with updating it to the next generation I would have written a compiler (if I take that many derivatives by hand I'll get one wrong). My boss would have ordered me not to, I would have done it anyway and not checked it in.

ack_complete 8 hours ago | parent | next [-]

AVX-512 also has a lot of wonderful facilities for autovectorization, but I suspect its initial downclocking effects plus getting yanked out of Alder Lake killed a lot of the momentum in improving compiler and library usage of it.

Even the Steam Hardware Survey, which is skewed toward upper end hardware, only shows 16% availability of baseline AVX-512, compared to 94% for AVX2.

adgjlsfhk1 8 hours ago | parent [-]

It will be interesting to see what happens now that AMD is shipping good AVX-512. It really just makes Intel seem incompetent (especially since they're theoretically bringing AVX-512 back next year anyway).

ack_complete 7 hours ago | parent [-]

No proof, but I suspect that AMD's AVX-512 support played a part in Intel dumping AVX10/256 and changing plans back to shipping a full 512-bit consumer implementation again (we'll see when they actually ship it).

The downside is that AMD also increased the latency of all formerly cheap integer vector ops. This removes one of the main advantages against NEON, which historically has had richer operations but worse latencies. That's one thing I hope Intel doesn't follow.

Also interesting is that Intel's E-core architecture is improving dramatically compared to the P-core, even surpassing it in some cases. For instance, Skymont finally has no penalty for denormals, a long standing Intel weakness. Would not be surprising to see the E-core architecture take over at some point.

adgjlsfhk1 3 hours ago | parent [-]

> For instance, Skymont finally has no penalty for denormals, a long standing Intel weakness.

yeah, that's crazy to me. Intel has been so completely dysfunctional for the last 15 years. I feel like you couldn't have a clearer sign of "we have 2 completely separate teams that are competing with each other and aren't allowed to/don't want to talk to each other". It's just such a clear sign that the chicken is running around headless.

bee_rider 12 hours ago | parent | prev [-]

It is kind of a bummer that MKL isn’t open sourced, as that would make inclusion in Linux easier. It is already free-as-in-beer, but of course that doesn’t solve everything.

Baffling that MS didn’t use it. They have a pretty close relationship…

Agree that they are sort of going after hard-to-use niche features nowadays. But I think it is just that the real thing we want—single threaded performance for branchy code—is, like, incredibly difficult to improve nowadays.

PaulHoule 11 hours ago | parent [-]

At the very least you can decode UTF-8 really quickly with AVX-512

https://lemire.me/blog/2023/08/12/transcoding-utf-8-strings-...

and web browsers at the very least spent a lot of cycles on decoding HTML and Javascript which is UTF-8 encoded. It turns out AVX-512 is good at a lot of things you wouldn't think SIMD would be good at. Intel's got the problem that people don't want to buy new computers because they don't see much benefit from buying a new computer, but a new computer doesn't have the benefit it could have because of lagging software support, and the software support lags because there aren't enough new computers to justify the work to do the software support. Intel deserves blame for a few things, one of which is that they have dragged their feet at getting really innovative features into their products while turning people off with various empty slogans.

They really do have a new instruction set that targets plain ordinary single threaded branchy code

https://www.intel.com/content/www/us/en/developer/articles/t...

they'll probably be out of business before you can use it.

immibis an hour ago | parent | next [-]

If you pay attention this isn't a UTF-8 decoder. It might be some other encoding, or a complete misunderstanding of how UTF-8 works, or an AI hallucination. It also doesn't talk about how to handle the variable number of output bytes or the possibility of a continuation sequence split between input chunks.

gatane 9 hours ago | parent | prev [-]

In the end, it doesn't even matter, javascript frameworks are already big enough to slow down your pc.

Unless said optimization on parsing runs at the very core of JS.

saagarjha 2 hours ago | parent [-]

It'll speed up first load times.

cjbgkagh 17 hours ago | parent | prev | next [-]

I have similar thoughts,

I don't understand the push for variable width SIMD. Possibly due to ignorance, but I see it as an abstraction that can be specialized for different hardware, so tradeoffs similar to those between low-level and high-level languages apply. Since I already have to be aware of hardware-level concepts such as 256-bit shuffles not working across 128-bit lanes, and different instructions having very different performance characteristics on different CPUs, I'm already knee deep in hardware specifics. While in general I like abstractions, I've largely given up waiting for a 'sufficiently advanced compiler' that would properly auto-vectorize my code; I think AGI is more likely to happen sooner. At a guess, SIMD code could work on GPUs, but GPU code has different memory access costs so the code there would also be completely different.

So my view is either create a much better higher level SIMD abstraction model with a sufficiently advanced compiler that knows all the tricks or let me work closely at the hardware level.

As an outsider who doesn't really know what is going on it does worry me a bit that it appears that WASM is pushing for variable width SIMDs instead of supporting ISAs generally supported by CPUs. I guess it's a portability vs performance tradeoff - I worry that it may be difficult to make variable as performant as fixed width and would prefer to deal with portability by having alternative branches at code level.

  >> Finally, any software that wants to use the new instruction set needs to be rewritten (or at least recompiled). What is worse, software developers often have to target several SIMD generations, and add mechanisms to their programs that dynamically select the optimal code paths depending on which SIMD generation is supported.
Why not marry the two and have variable width SIMD as one of the ISA options and if in the future variable width SIMD become more performant then it would just be another branch to dynamically select.
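
For reference, a minimal sketch of that dynamic selection in C++, using GCC/Clang's __builtin_cpu_supports; the per-ISA kernel names are hypothetical:

    #include <cstddef>

    // Hypothetical per-ISA kernels, each built with the matching target
    // attribute or in separate translation units.
    void sum_avx512(const float* src, float* dst, std::size_t n);
    void sum_avx2(const float* src, float* dst, std::size_t n);
    void sum_scalar(const float* src, float* dst, std::size_t n);

    // Probe the CPU once at startup and pick a code path.
    auto select_kernel() {
        __builtin_cpu_init();
        if (__builtin_cpu_supports("avx512f")) return &sum_avx512;
        if (__builtin_cpu_supports("avx2"))    return &sum_avx2;
        return &sum_scalar;
    }
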
kevingadd 14 hours ago | parent [-]

Part of the motive behind variable width SIMD in WASM is that there's intentionally-ish no mechanism to do feature detection at runtime in WASM. The whole module has to be valid on your target, you can't include a handful of invalid functions and conditionally execute them if the target supports 256-wide or 512-wide SIMD. If you want to adapt you have to ship entire modules for each set of supported feature flags and select the correct module at startup after probing what the target supports.

So variable width SIMD solves this by making any module using it valid regardless of whether the target supports 512-bit vectors, and the VM 'just' has to solve the problem of generating good code.

Personally I think this is a terrible way to do things and there should have just been a feature detection system, but the horse fled the barn on that one like a decade ago.

__abadams__ 11 hours ago | parent [-]

It would be very easy to support 512-bit vectors everywhere, and just emulate them on most systems with a small number of smaller vectors. It's easy for a compiler to generate good code for this. Clang does it well if you use its built-in vector types (which can be any length). Variable-length vectors, on the other hand, are a very challenging problem for compiler devs. You tend to get worse code out than if you just statically picked a size, even if it's not the native size.
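
A short sketch of what that looks like with the Clang/GCC vector extension (the type and function names are made up; 512 bits of float declared regardless of the target's native width):

    // 16 floats = 512 bits, declared regardless of what the target supports;
    // on an SSE/AVX2-only machine the compiler splits the arithmetic into
    // several narrower vector instructions.
    typedef float f32x16 __attribute__((vector_size(64)));

    f32x16 fma16(f32x16 a, f32x16 x, f32x16 y) {
        return a * x + y;  // element-wise, lowered to 2x256-bit or 4x128-bit ops
    }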

codedokode 3 hours ago | parent | next [-]

Variable length vectors seem to be made for closed-source manually-written assembly (nobody wants to unroll the loop manually and nobody will rewrite it for new register width).

jandrewrogers 11 hours ago | parent | prev [-]

The risk of 512-bit vectors everywhere is that many algorithms will spill the registers pretty badly if implemented in e.g. 128-bit vectors under the hood. In such cases you may be better off with a completely different algorithm implementation.

derf_ 12 hours ago | parent | prev | next [-]

> I code with SIMD as the target, and have special containers that pad memory to SIMD width...

I think this may be domain-specific. I help maintain several open-source audio libraries, and wind up being the one to review the patches when people contribute SIMD for some specific ISA, and I think without exception they always get the tail handling wrong. Due to other interactions it cannot always be avoided by padding. It can roughly double the complexity of the code [0], and requires a disproportionate amount of thinking time vs. the time the code spends running, but if you don't spend that thinking time you can get OOB reads or writes, and thus CVEs. Masked loads/stores are an improvement, but not universally available. I don't have a lot of concrete suggestions.

I also work with a lot of image/video SIMD, and this is just not a problem, because most operations happen on fixed block sizes, and padding buffers is easy and routine.

I agree I would have picked other things for the other two in my own top-3 list.

[0] Here is a fun one, which actually performs worst when len is a multiple of 8 (which it almost always is), and has 59 lines of code for tail handling vs. 33 lines for the main loop: https://gitlab.xiph.org/xiph/opus/-/blob/main/celt/arm/celt_...

codedokode 3 hours ago | parent | next [-]

I think that SIMD code should not be written by hand but rather in a high-level language, so that dealing with the tail becomes a compiler's problem and not a programmer's. Or do people still prefer to write assembly by hand? It seems to be so, judging by the code you link.

What I wanted is to write code in a more high-level language like this. For example, to compute a scalar product of a and b you write:

    1..n | a[$1] * b[$1] | sum
Or maybe this:

    x = sum for i in 1 .. n: a[i] * b[i]
And the code gets automatically compiled into SIMD instructions for every existing architecture (and for large arrays, into a multi-thread computation).
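
A rough present-day equivalent is a plain reduction loop with an OpenMP SIMD hint, which compilers can already turn into SIMD code for the target ISA (a sketch; assumes -fopenmp-simd or equivalent):

    #include <cstddef>

    // Scalar product written as an ordinary reduction loop; the pragma tells
    // the compiler the loop is safe to vectorize, and it picks SSE/AVX/NEON/
    // RVV code depending on the target.
    float dot(const float* a, const float* b, std::size_t n) {
        float sum = 0.0f;
        #pragma omp simd reduction(+:sum)
        for (std::size_t i = 0; i < n; ++i)
            sum += a[i] * b[i];
        return sum;
    }
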
ack_complete 6 hours ago | parent | prev | next [-]

It depends on how integrated your SIMD strategy is into the overall technical design. Tail handling is much easier if you can afford SIMD-friendly padding so a full vector load/store is possible even if you have to manually mask. That avoids a lot of the hassle of breaking down memory accesses just to avoid a page fault or setting off the memory checker.

Beyond that -- unit testing. I don't see enough of it for vectorized routines. SIMD widths are small enough that you can usually just test all possible offsets right up against a guard page and brute force verify that no overruns occur.
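
A sketch of that guard-page setup on POSIX (my_simd_routine is a hypothetical routine under test; assumes 4 KiB pages):

    #include <sys/mman.h>
    #include <cassert>
    #include <cstddef>
    #include <cstdint>

    void my_simd_routine(const float* src, float* dst, std::size_t n);  // hypothetical

    void test_against_guard_page() {
        const std::size_t page = 4096;
        // One writable page followed by an inaccessible guard page.
        auto* base = static_cast<uint8_t*>(mmap(nullptr, 2 * page,
            PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0));
        assert(base != MAP_FAILED);
        mprotect(base + page, page, PROT_NONE);

        float out[64];
        // Place the input so it ends flush against the guard page for every
        // length; any over-read past the last element faults immediately.
        for (std::size_t n = 1; n <= 64; ++n) {
            float* src = reinterpret_cast<float*>(base + page) - n;
            for (std::size_t i = 0; i < n; ++i) src[i] = static_cast<float>(i);
            my_simd_routine(src, out, n);
        }
        munmap(base, 2 * page);
    }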

jandrewrogers 11 hours ago | parent | prev [-]

> Masked loads/stores are an improvement, but not universally available.

Traditionally we’ve worked around this with pretty idiomatic hacks that efficiently implement “masked load” functionality in SIMD ISAs that don’t have them. We could probably be better about not making people write this themselves every time.

aengelke 16 hours ago | parent | prev | next [-]

I agree; and the article seems to have also quite a few technical flaws:

- Register width: we somewhat maxed out at 512 bits, with Intel going back to 256 bits for non-server CPUs. I don't see larger widths on the horizon (even if SVE theoretically supports up to 2048 bits, I don't know any implementation with ~~>256~~ >512 bits). Larger bit widths are not beneficial for most applications and the few applications that are (e.g., some HPC codes) are nowadays served by GPUs.

- The post mentions available opcode space: while opcode space is limited, a reasonably well-designed ISA (e.g., AArch64) has enough holes for extensions. Adding new instructions doesn't require ABI changes, and while adding new registers requires some kernel changes, this is well understood at this point.

- "What is worse, software developers often have to target several SIMD generations" -- no way around this, though, unless auto-vectorization becomes substantially better. Adjusting the register width is not the big problem when porting code, making better use of available instructions is.

- "The packed SIMD paradigm is that there is a 1:1 mapping between the register width and the execution unit width" -- no. E.g., AMD's Zen 4 does double pumping, and AVX was IIRC originally designed to support this as well (although Intel went directly for 256-bit units).

- "At the same time many SIMD operations are pipelined and require several clock cycles to complete" -- well, they are pipelined, but many SIMD instructions have the same latency as their scalar counterpart.

- "Consequently, loops have to be unrolled in order to avoid stalls and keep the pipeline busy." -- loop unroll has several benefits, mostly to reduce the overhead of the loop and to avoid data dependencies between loop iterations. Larger basic blocks are better for hardware as every branch, even if predicted correctly, has a small penalty. "Loop unrolling also increases register pressure" -- it does, but code that really requires >32 registers is extremely rare, so a good instruction scheduler in the compiler can avoid spilling.

In my experience, dynamic vector sizes make code slower, because they inhibit optimizations. E.g., spilling a dynamically sized vector is like a dynamic stack allocation with a dynamic offset. I don't think SVE delivered any large benefits, both in terms of performance (there's not much hardware with SVE to begin with...) and compiler support. RISC-V pushes further into this direction, we'll see how this turns out.

camel-cdr 15 hours ago | parent | next [-]

> we somewhat maxed out at 512 bits

Which still means you have to write your code at least thrice, which is two times more than with a variable length SIMD ISA.

Also there are processors with larger vector length, e.g. 1024-bit: Andes AX45MPV, SiFive X380, 2048-bit: Akeana 1200, 16384-bit: NEC SX-Aurora, Ara, EPI

> no way around this

You rarely need to rewrite SIMD code to take advantage of new extensions, unless somebody decides to create a new one with a larger SIMD width. This mostly happens when very specialized instructions are added.

> In my experience, dynamic vector sizes make code slower, because they inhibit optimizations.

Do you have more examples of this?

I don't see spilling as much of a problem, because you want to avoid it regardless, and codegen for dynamic vector sizes is pretty good in my experience.

> I don't think SVE delivered any large benefits

Well, all Arm CPUs except for the A64FX were built to execute NEON as fast as possible. X86 CPUs aren't built to execute MMX or SSE or the latest, even AVX, as fast as possible.

Anyway, I know of one comparison between NEON and SVE: https://solidpixel.github.io/astcenc_meets_sve

> Performance was a lot better than I expected, giving between 14 and 63% uplift. Larger block sizes benefitted the most, as we get higher utilization of the wider vectors and fewer idle lanes.

> I found the scale of the uplift somewhat surprising as Neoverse V1 allows 4-wide NEON issue, or 2-wide SVE issue, so in terms of data-width the two should work out very similar.

aengelke 15 hours ago | parent | next [-]

> Also there are processors with larger vector length

How do these fare in terms of absolute performance? The NEC TSUBASA is not a CPU.

> Do you have more examples of this?

I ported some numeric simulation kernel to the A64Fx some time ago, fixing the vector width gave a 2x improvement. Compilers probably/hopefully have gotten better in the mean time and I haven't redone the experiments since then, but I'd be surprised if this changed drastically. Spilling is sometimes unavoidable, e.g. due to function calls.

> Anyway, I know of one comparison between NEON and SVE: https://solidpixel.github.io/astcenc_meets_sve

I was specifically referring to dynamic vector sizes. This experiment uses sizes fixed at compile-time, from the article:

> For the astcenc implementation of SVE I decided to implement a fixed-width 256-bit implementation, where the vector length is known at compile time.

camel-cdr 14 hours ago | parent [-]

> How do these fare in terms of absolute performance? The NEC TSUBASA is not a CPU.

The NEC is an attached accelerator, but IIRC it can run an OS in host mode. It's hard to tell how the others perform, because most don't have hardware available yet or only they and partner companies have access. It's also hard to compare, because they don't target the desktop market.

> I ported some numeric simulation kernel to the A64Fx some time ago, fixing the vector width gave a 2x improvement.

Oh, wow. Was this autovectorized or handwritten intrinsics/assembly?

Any chance it's of a small enough scope that I could try to recreate it?

> I was specifically referring to dynamic vector sizes.

Ah, sorry, yes you are correct. It still shows that supporting VLA mechanisms in an ISA doesn't mean it's slower for fixed-size usage.

I'm not aware of any proper VLA vs VLS comparisons. I benchmarked a VLA vs VLS mandelbrot implementation once where there was no performance difference, but that's a too simple example.

codedokode 2 hours ago | parent | prev | next [-]

> Which still means you have to write your code at least thrice, which is two times more than with a variable length SIMD ISA.

This is a wrong approach. You should be writing your code in a high-level language like this:

    x = sum i for 1..n: a[i] * b[i]
And let the compiler write the assembly for every existing architecture (including multi-threaded version of a loop).

I don't understand what the advantage is of writing the SIMD code manually. At least have an LLM write it if you don't like my imaginary high-level vector language.

vardump 15 hours ago | parent | prev [-]

> Which still means you have to write your code at least thrice, which is two times more than with a variable length SIMD ISA.

256 and 512 bits are the only reasonable widths. 256 bit AVX2 is what, 13 or 14 years old now.

adgjlsfhk1 15 hours ago | parent [-]

No. Because Intel is full of absolute idiots, Intel Atom didn't support AVX1 until Gracemont. Tremont is missing AVX1, AVX2, FMA, and basically the rest of x86-64-v3, and shipped in CPUs as recently as 2021 (Jasper Lake).

ack_complete 8 hours ago | parent | next [-]

Intel also shipped a bunch of Pentium-branded CPUs that have AVX disabled, leading to oddities like a Kaby Lake based CPU that doesn't have AVX, and even worse, also shipped a few CPUs that have AVX2 but not BMI2:

https://sourceware.org/bugzilla/show_bug.cgi?id=29611

https://developercommunity.visualstudio.com/t/Crash-in-Windo...

vardump 14 hours ago | parent | prev [-]

Oh damn. I've dropped SSE ages ago and no one complained. I guess the customer base didn't use those chips...

bjourne 9 hours ago | parent | prev | next [-]

> "Loop unrolling also increases register pressure" -- it does, but code that really requires >32 registers is extremely rare, so a good instruction scheduler in the compiler can avoid spilling.

No, it actually is super common in hpc code. If you unroll a loop N times you need N times as many registers. For normal memory-bound code I agree with you, but most hpc kernels will exploit as much of the register file as they can for blocking/tiling.

xphos 7 hours ago | parent | prev | next [-]

I think the variable length stuff does solve encoding issues, and RISC-V takes big strides with the ideas around chaining and the vl/lmul/vtype registers.

I think they would benefit from having 4 vtype registers, though. It's wasted scalar space, but how often do you actually rotate between 4 different vector types in main loop bodies? The answer is pretty rarely, and you'd greatly reduce the swapping between vtypes. I think they needed to find 1 more bit, but it's tough; the encoding space isn't that large for RVV, which is a perk for sure.

Can't wait to see more implementations of RVV to actually test some of its ideas.

dzaima 6 hours ago | parent [-]

If you had two extra bits in the instruction encoding, I think it'd make much more sense to encode element width directly in instructions, leaving LMUL multiplier & agnosticness settings in vsetvl; only things that'd suffer then would be if you need tail-undisturbed for one instr (don't think that's particularly common) and fancy things that reinterpret the vector between different element widths (very uncommon).

Will be interesting to see if longer encodings for RVV with encoded vtype or whatever ever materialize.

deaddodo 15 hours ago | parent | prev | next [-]

> - Register width: we somewhat maxed out at 512 bits, with Intel going back to 256 bits for non-server CPUs. I don't see larger widths on the horizon (even if SVE theoretically supports up to 2048 bits, I don't know any implementation with >256 bits). Larger bit widths are not beneficial for most applications and the few applications that are (e.g., some HPC codes) are nowadays served by GPUs.

Just to address this, it's pretty evident why scalar values have stabilized at 64-bit and vectors at ~512 (though there are larger implementations). Tell someone they only have 256 values to work with and they immediately see the limit, it's why old 8-bit code wasted so much time shuffling carries to compute larger values. Tell them you have 65536 values and it alleviates a large set of that problem, but you're still going to hit limits frequently. Now you have up to 4294967296 values and the limits are realistically only going to be hit in computational realms, so bump it up to 18446744073709551615. Now even most commodity computational limits are alleviated and the compiler will handle the data shuffling for larger ones.

There was naturally going to be a point where there was enough static computational power on integers that it didn't make sense to continue widening them (at least, not at the previous rate). The same goes for vectorization, but in even more niche and specific fields.

cherryteastain 15 hours ago | parent | prev [-]

Fujitsu A64FX used in the Fugaku supercomputer uses SVE with 512 bit width

aengelke 15 hours ago | parent [-]

Thanks, I misremembered. However, the microarchitecture is a bit "weird" (really HPC-targeted), with very long latencies (e.g., ADD (vector) 4 cycles, FADD (vector) 9 cycles). I remember that it was much slower than older x86 CPUs for non-SIMD code, and even for SIMD code, it took quite a bit of effort to get reasonable performance through instruction-level parallelism due to the long latencies and the very limited out-of-order capacities (in particular the just 2x20 reservation station entries for FP).

camel-cdr 16 hours ago | parent | prev | next [-]

> I prefer fixed width

Do you have examples for problems that are easier to solve in fixed-width SIMD?

I maintain that most problems can be solved in a vector-length-agnostic manner. Even if it's slightly more tricky, it's certainly easier than restructuring all of your memory allocations to add padding and implementing three versions for all the differently sized SIMD extensions your target may support. And you can always fall back to using a variable-width SIMD ISA in a fixed-width way, when necessary.

jcranmer 15 hours ago | parent | next [-]

There's a category of autovectorization known as Superword-Level Parallelism (SLP) which effectively scavenges an entire basic block for individual instruction sequences that might be squeezed together into a SIMD instruction. This kind of vectorization doesn't work well with vector-length-agnostic ISAs, because you generally can't scavenge more than a few elements anyways, and inducing any sort of dynamic vector length is more likely to slow your code down as a result (since you can't do constant folding).

There's other kinds of interesting things you can do with vectors that aren't improved by dynamic-length vectors. Something like abseil's hash table, which uses vector code to efficiently manage the occupancy bitmap. Dynamic vector length doesn't help that much in that case, particularly because the vector length you can parallelize over is itself intrinsically low (if you're scanning dozens of elements to find an empty slot, something is wrong). Vector swizzling is harder to do dynamically, and in general, at high vector factors, difficult to do generically in hardware, which means going to larger vectors (even before considering dynamic sizes), vectorization is trickier if you have to do a lot of swizzling.

In general, vector-length-agnostic is really only good for SIMT-like codes, which you can express the vector body as more or less independent f(index) for some knowable-before-you-execute-the-loop range of indices. Stuff like DAXPY or BLAS in general. Move away from this model, and that agnosticism becomes overhead that doesn't pay for itself. (Now granted, this kind of model is a large fraction of parallelizable code, but it's far from all of it).

camel-cdr 15 hours ago | parent | next [-]

The SLP vectorizer is a good point, but I think it's, in comparison with x86, more a problem of the float and vector register files not being shared (in SVE and RVV). You don't need to reconfigure the vector length; just use it at the full width.

> Something like abseil's hash table

If I remember this correctly, the abseil lookup does scale with vector length, as long as you use the native data path width. (albeit with small gains) There is a problem with vector length agnostic handling of abseil, which is the iterator API. With a different API, or compilers that could eliminate redundant predicated load/stores, this would be easier.

> good for SIMT-like codes

Certainly, but I've also seen/written a lot of vector length agnostic code using shuffles, which don't fit into the SIMT paradigm, which means that the scope is larger than just SIMT.

---

As a general comparison, take AVX10/128, AVX10/256 and AVX10/512, overlap their instruction encodings, remove the few instructions that don't make sense anymore, and add a cheap instruction to query the vector length. (probably also instructions like vid and viota, for easier shuffle synthesization) Now you have a variable-length SIMD ISA that feels familiar.

The above is basically what SVE is.

jonstewart 11 hours ago | parent | prev [-]

A number of the cool string processing SIMD techniques depend a _lot_ on register widths and instruction performance characteristics. There’s a fair argument to be made that x64 could be made more consistent/legible for these use cases, but this isn’t matmul—whether you have 128, 256, or 512 bits matters hugely and you may want entirely different algorithms that are contingent on this.

jandrewrogers 15 hours ago | parent | prev | next [-]

I also prefer fixed width. At least in C++, all of the padding, alignment, etc is automagically codegen-ed for the register type in my use cases, so the overhead is approximately zero. All the complexity and cost is in specializing for the capabilities of the underlying SIMD ISA, not the width.

The benefit of fixed width is that optimal data structure and algorithm design on various microarchitectures is dependent on explicitly knowing the register width. SIMD widths aren’t perfectly substitutable in practice; there is more at play than stride size. You can also do things like explicitly combine separate logic streams in a single SIMD instruction based on knowing the word layout. Compilers don’t do this work in 2025.

The argument for vector width agnostic code seems predicated on the proverbial “sufficiently advanced compiler”. I will likely retire from the industry before such a compiler actually exists. Like fusion power, it has been ten years away my entire life.

camel-cdr 14 hours ago | parent [-]

> The argument for vector width agnostic code seems predicated on the proverbial “sufficiently advanced compiler”.

A SIMD ISA having a fixed size or not is orthogonal to autovectorization. E.g. I've seen a bunch of cases where things get autovectorized for RVV but not for AVX512. The reason isn't fixed vs variable, but rather the supported instructions themselves.

There are two things I'd like from a "sufficiently advanced compiler”, which are sizeless struct support and redundant predicated load/store elimination. Those don't fundamentally add new capabilities, but makes working with/integrating into existing APIs easier.

> All the complexity and cost is in specializing for the capabilities of the underlying SIMD ISA, not the width.

Wow, it almost sounds like you could take basically the same code and run it with different vector lengths.

> The benefit of fixed width is that optimal data structure and algorithm design on various microarchitectures is dependent on explicitly knowing the register width

Optimal to what degree? Like sure, fixed-width SIMD can always turn your pointer increments from a register add to an immediate add, so it's always more "optimal", but that sort of thing doesn't matter.

The only difference you usually encounter when writing variable instead of fixed size code is that you have to synthesize your shuffles outside the loop. This usually just takes a few instructions, but loading a constant is certainly easier.

jandrewrogers 10 hours ago | parent [-]

The interplay of SIMD width and microarchitecture is more important for performance engineering than you seem to be assuming. Those codegen decisions are made at layer above anything being talked about here and they operate on explicit awareness of things like register size.

It isn’t “same instruction but wider or narrower” or anything that can be trivially autovectorized, it is “different algorithm design”. Compilers are not yet rewriting data structures and algorithms based on microarchitecture.

I write a lot of SIMD code, mostly for database engines, little of which is trivial “processing a vector of data types” style code. AVX512 in particular is strong enough of an ISA that it is used in all kinds of contexts that we traditionally wouldn’t think of as a good for SIMD. You can build all kinds of neat quasi-scalar idioms with it and people do.

synthos 2 hours ago | parent [-]

> Compilers are not yet rewriting data structures and algorithms based on microarchitecture.

Afaik mojo allows for this with the autotuning capability and metaprogramming

pkhuong 15 hours ago | parent | prev [-]

> Do you have examples for problems that are easier to solve in fixed-width SIMD?

Regular expression matching and encryption come to mind.

camel-cdr 14 hours ago | parent [-]

> Regular expression matching

That's probably true. Last time I looked at it, it seemed like parts of vectorscan could be vectorized VLA, but from my, very limited, understanding of the main matching algorithm, it does seem to require specialization on vector length.

It should be possible to do VLA in some capacity, but it would probably be slower and it's too much work to test.

> encryption

From the things I've looked at, it's mixed.

E.g. chacha20 and poly1305 vectorize well in a VLA scheme: https://camel-cdr.github.io/rvv-bench-results/bpi_f3/chacha2..., https://camel-cdr.github.io/rvv-bench-results/bpi_f3/poly130...

Keccak on the other hand was optimized for fast execution on scalar ISAs with 32 GPRs. This is hard to vectorize in general, because GPR "moves" are free and liberally applied.

Another example where it's probably worth specializing is quicksort, specifically the leaf part.

I've written a VLA version, which uses bitonic sort to sort within vector registers. I wasn't able to meaningfully compare it against a fixed size implementation, because vqsort was super slow when I tried to compile it for RVV.

tonetegeatinst 15 hours ago | parent | prev [-]

AFAIK about every modern CPU uses an out-of-order von Neumann architecture. The only people who don't are the handful of researchers and people who work on government research into non-von Neumann designs.

luyu_wu 13 hours ago | parent [-]

Low power RISC cores (both ARM and RISC-V) are typically in-order actually!

But any core I can think of as 'high-performance' is OOO.

whaleofatw2022 13 hours ago | parent [-]

MIPS as well as Alpha, AFAIR. And technically Itanium. OTOH it seems to me a bit like a niche for any performance advantages...

pezezin 5 hours ago | parent | next [-]

I would not call MIPS, Alpha, or Itanium "high-performance" in 2025...

mattst88 9 hours ago | parent | prev [-]

Alpha 21264 is out-of-order.

codedokode 3 hours ago | parent | prev | next [-]

I think that packed SIMD is better in almost every aspect and Vector SIMD is worse.

With vector SIMD you don't know the register size beforehand and therefore have to maintain and increment counters, adding extra unnecessary instructions and reducing total performance. With packed SIMD you can issue several loads immediately without dependencies, and if you look at code examples, you can see that the x86 code is denser and uses a sequence of unrolled SIMD instructions without any extra instructions, which is more efficient. RISC-V, meanwhile, has 4 SIMD instructions and 4 instructions dealing with counters per loop iteration, i.e. it wastes 50% of its issue bandwidth, and you cannot load the next block until you increment the counter.

The article mentions that you have to recompile packed SIMD code when a new architecture comes out. Is that really a problem? Open source software is recompiled every week anyway. You should just describe your operations in a high level language that gets compiled to assembly for all supported architectures.

So as a conclusion, it seems that Vector SIMD is optimized for manually-written assembly and closed-source software while Packed SIMD is made for open-source software and compilers and is more efficient. Why RISC-V community prefers Vector architecture, I don't understand.

LoganDark 2 hours ago | parent [-]

This comment sort of reminds me of how Transmeta CPUs relied on the compiler to precompute everything like pipelining. It wasn't done by the hardware.

codedokode 2 hours ago | parent [-]

Makes sense - writing or updating software is easier than designing or updating hardware. To illustrate: anyone can write software but not everyone has access to chip manufacturing fabs.

xphos 7 hours ago | parent | prev | next [-]

Personally, I think load and increment address register in a single instruction is extremely valuable here. It's not quite the RISC model, but I think it is actually pretty significant in avoiding a von Neumann bottleneck with SIMD (the irony in this statement).

I found that with RISC-V, a lot of the custom SIMD cores I've written for simply cannot issue instructions fast enough. Or when they do, it's in quick bursts, and then the increments and loop controls leave the engine idling for more than you'd like.

Better dual issue helps, but when you have a separate vector queue you are sending things to, it's not that much to add increments into vloads and vstores.

codedokode 2 hours ago | parent [-]

> load and increment address register in a single instruction is extremely valuable here

I think this is not important anymore because modern architectures allow adding an offset to a register value, so you can write something like this (using weird RISC-V syntax for the addition):

    ld r2, 0(r1)
    ld r3, 4(r1)
    ld r4, 8(r1)
These operations can be executed in parallel, while with auto-incrementing you cannot do that.
sweetjuly 16 hours ago | parent | prev | next [-]

Loop unrolling isn't really done because of pipelining but rather to amortize the cost of looping. Any modern out-of-order core will (on the happy path) schedule the operations identically whether you did one copy per loop or four. The only difference is the number of branches.

Remnant44 14 hours ago | parent | next [-]

These days, I strongly believe that loop unrolling is a pessimization, especially with SIMD code.

Scalar code should be unrolled by the compiler to the SIMD word width to expose potential parallelism. But other than that, correctly predicted branches are free, and so is loop instruction overhead on modern wide-dispatch processors. For example, even running a maximally efficient AVX512 kernel on a zen5 machine that dispatches 4 EUs and some load/stores and calculates 2048 bits in the vector units every cycle, you still have a ton of dispatch capacity to handle the loop overhead in the scalar units.

The cost of unrolling is decreased code density and reduced effectiveness of the instruction / uOp cache. I wish Clang in particular would stop unrolling the dang vector loops.

bobmcnamara 10 hours ago | parent | next [-]

> The cost of unrolling is decreased code density and reduced effectiveness of the instruction / uOp cache.

There are some cases where useful code density goes up.

Ex: unroll the Goertzel algorithm by an even number, and suddenly the entire delay line overhead evaporates.

adgjlsfhk1 13 hours ago | parent | prev | next [-]

The part that's really weird is that on modern CPUs predicted branches are free iff they're sufficiently rare (<1 out of 8 instructions or so). But if you have too many, you will be bottlenecked on the branch since you aren't allowed to speculate past a 2nd (3rd on zen5 without hyperthreading?) branch.

dzaima 13 hours ago | parent [-]

The limiting thing isn't necessarily speculating, but more just the number of branches per cycle, i.e. number of non-contiguous locations the processor has to query from L1 / uop cache (and which the branch predictor has to determine the location of). You get that limit with unconditional branches too.

dzaima 13 hours ago | parent | prev [-]

Intel still shares ports between vector and scalar on P-cores; a scalar multiply in the loop will definitely fight with a vector port, and the bits of pointer bumps and branch and whatnot can fill up the 1 or 2 scalar-only ports. And maybe there are some minor power savings from wasting resources on the scalar overhead. Still, clang does unroll way too much.

Remnant44 12 hours ago | parent [-]

My understanding is that they've changed this for Lion Cove and all future P cores, moving to much more of a Zen-like setup with separate schedulers and ports for vector and scalar ops.

dzaima 11 hours ago | parent [-]

Oh, true, mistook it for an E-core while clicking through diagrams due to the port spam... Still, that's a 2024 microarchitecture, it'll be like a decade before it's reasonable to ignore older ones.

codedokode 2 hours ago | parent | prev [-]

> Any modern out-of-order core will (on the happy path) schedule the operations identically whether you did one copy per loop or four.

I cannot agree, because in an unrolled loop you have fewer counter-increment instructions.

pornel 17 hours ago | parent | prev | next [-]

There are alternative universes where these wouldn't be a problem.

For example, if we didn't settle on executing compiled machine code exactly as-is, and had an instruction-updating pass (less involved than a full VM byte code compilation), then we could adjust SIMD width for existing binaries instead of waiting decades for a new baseline or multiversioning faff.

Another interesting alternative is SIMT. Instead of having a handful of special-case instructions combined with heavyweight software-switched threads, we could have had every instruction SIMDified. It requires structuring programs differently, but getting max performance out of current CPUs already requires SIMD + multicore + predictable branching, so we're doing it anyway, just in a roundabout way.

aengelke 15 hours ago | parent | next [-]

> if we didn't settle on executing compiled machine code exactly as-is, and had an instruction-updating pass (less involved than a full VM byte code compilation)

Apple tried something like this: they collected the LLVM bitcode of apps so that they could recompile and even port to a different architecture. To my knowledge, this was done exactly once (watchOS armv7->AArch64) and deprecated afterwards. Retargeting at this level is inherently difficult (different ABIs, target-specific instructions, intrinsics, etc.). For the same target with a larger feature set, the problems are smaller, but so are the gains -- better SIMD usage would only come from the auto-vectorizer and a better instruction selector that uses different instructions. The expectable gains, however, are low for typical applications and for math-heavy programs, using optimized libraries or simply recompiling is easier.

WebAssembly is a higher-level, more portable bytecode, but performance levels are quite a bit behind natively compiled code.

LegionMammal978 16 hours ago | parent | prev | next [-]

> Another interesting alternative is SIMT. Instead of having a handful of special-case instructions combined with heavyweight software-switched threads, we could have had every instruction SIMDified. It requires structuring programs differently, but getting max performance out of current CPUs already requires SIMD + multicore + predictable branching, so we're doing it anyway, just in a roundabout way.

Is that not where we're already going with the GPGPU trend? The big catch with GPU programming is that many useful routines are irreducibly very branchy (or at least, to an extent that removing branches slows them down unacceptably), and every divergent branch throws out a huge chunk of the GPU's performance. So you retain a traditional CPU to run all your branchy code, but you run into memory-bandwidth woes between the CPU and GPU.

It's generally the exception instead of the rule when you have a big block of data elements upfront that can all be handled uniformly with no branching. These usually have to do with graphics, physical simulation, etc., which is why the SIMT model was popularized by GPUs.

winwang 11 hours ago | parent [-]

Fun fact which I'm 50%(?) sure of: a single branch divergence for integer instructions on current nvidia GPUs won't hurt perf, because there are only 16 int32 lanes anyway.

almostgotcaught 5 hours ago | parent | prev [-]

> There are alternative universes where these wouldn't be a problem

Do people that say these things have literally any experience of merit?

> For example, if we didn't settle on executing compiled machine code exactly as-is, and had a instruction-updating pass

You do understand that at the end of the day, hardware is hard (fixed) and software is soft (malleable) right? There will be always be friction at some boundary - it doesn't matter where you hide the rigidity of a literal rock, you eventually reach a point where you cannot reconfigure something that you would like to. And also the parts of that rock that are useful are extremely expensive (so no one is adding instruction-updating pass silicon just because it would be convenient). That's just physics - the rock is very small but fully baked.

> we could have had every instruction SIMDified

Tell me you don't program GPUs without telling me. Not only is SIMT a literal lie today (cf warp level primitives), there is absolutely no reason to SIMDify all instructions (and you better be a wise user of your scalar registers and scalar instructions if you want fast GPU code).

I wish people would just realize there's no grand paradigm shift that's coming that will save them from the difficult work of actually learning how the device works in order to be able to use it efficiently.

pkhuong 18 hours ago | parent | prev | next [-]

There's more to SIMD than BLAS. https://branchfree.org/2024/06/09/a-draft-taxonomy-of-simd-u... .

camel-cdr 16 hours ago | parent [-]

BLAS, specifically gemm, is one of the rare things where you naturally need to specialize on vector register width.

Most problems don't require this: E.g. your basic parallelizable math stuff, unicode conversion, base64 de/encode, json parsing, set intersection, quicksort*, bigint, run length encoding, chacha20, ...

And if you run into a problem that benefits from knowing the SIMD width, then just specialize on it. You can totally use variable-length SIMD ISA's in a fixed-length way when required. But most of the time it isn't required, and you have code that easily scales between vector lengths.

*quicksort: most time is spent partitioning, which is vector length agnostic, you can handle the leafs in a vector length agnostic way, but you'll get more efficient code if you specialize (idk how big the impact is, in vreg bitonic sort is quite efficient).

dragontamer 13 hours ago | parent | prev | next [-]

1. Not a problem for GPUs. Nvidia and AMD are both 32-wide or 1024-bit wide hard coded. AMD can swap to 64-wide mode for backwards compatibility to GCN. 1024-bit or 2048-bit seem to be the right values. Too wide and you get branch divergence issues, so it doesn't seem to make sense to go bigger.

In contrast, the systems that have flexible widths have never taken off. It's seemingly much harder to design a programming language for a flexible width SIMD.

2. Not a problem for GPUs. It should be noted that kernels allocate custom amounts of registers: one kernel may use 56 registers, while another kernel might use 200 registers. All GPUs will run these two kernels simultaneously (256+ registers per CU or SM is commonly supported, so both 200+56 registers kernels can run together).

3. Not a problem for GPUs or really any SIMD in most cases. Tail handling is O(1) problem in general and not a significant contributor to code length, size, or benchmarks.

Overall utilization issues are certainly a concern. But in my experience this is caused by branching most often. (Branching in GPUs is very inefficient and forces very low utilization).

dzaima 13 hours ago | parent [-]

Tail handling is not significant for loops with tons of iterations, but there are a ton of real-world situations where you might have a loop take only like 5 iterations or something (even at like 100 iterations, with a loop processing 8 elements at a time (i.e. 256-bit vectors, 32-bit elements), that's 12 vectorized iterations plus up to 7 scalar ones, which is still quite significant. At 1000 iterations you could still have the scalar tail be a couple percent; and still doubling the L1/uop-cache space the loop takes).

It's absolutely a significant contributor to code size (..in scenarios where vectorized code in general is a significant contributor to code size, which admittedly is only very-specialized software).

dragontamer 9 hours ago | parent [-]

Note that AVX512 has per-lane execution masks, so I'm not fully convinced that tail handling should even be a thing anymore.

If(my lane is beyond the buffer) then (exec flag off, do not store my lane).

Which in practice should be a simple vcompress instruction (AVX512 register) and maybe a move afterwards??? I admit that I'm not an AVX512 expert but it doesn't seem all that difficult with vcompress instructions + execmask.

dzaima 8 hours ago | parent [-]

It takes like 4 instrs to compute the mask from an arbitrary length (AVX-512 doesn't have any instruction for this so you need to do `bzhi(-1, min(left,vl))` and move that to a mask register) so you still would likely want to avoid it in the hot loop.

Doing the tail separately but with masking SIMD is an improvement over a scalar loop perf-wise (..perhaps outside of the case of 1 or 2 elements, which is a realistic situation for a bunch of loops too), but it'll still add a double-digit percentage to code size over just a plain SIMD loop without tail handling.

And this doesn't help pre-AVX-512, and AVX-512 isn't particularly widespread (AVX2 does have masked load/store with 32-/64-bit granularity, but not 8-/16-bit, and the instrs that do exist are rather slow on AMD (e.g. unconditional 12 cycles/instr throughput for masked-storing 8 32-bit elements); SSE has none, and ARM NEON doesn't have any either (and ARM SVE isn't widespread either, incl. not supported on apple silicon))

(don't need vcompress, plain masked load/store instrs do exist in AVX-512 and are sufficient)
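
For illustration, a sketch of the masked-tail shape being discussed, assuming AVX-512F plus BMI2 (add1 is just a made-up kernel):

    #include <immintrin.h>
    #include <cstddef>
    #include <cstdint>

    // Full-width body plus one masked iteration for the tail: bzhi builds a
    // mask covering the remaining n - i elements, and the masked load/store
    // keep the accesses inside the buffer.
    void add1(float* x, std::size_t n) {
        const __m512 one = _mm512_set1_ps(1.0f);
        std::size_t i = 0;
        for (; i + 16 <= n; i += 16)
            _mm512_storeu_ps(x + i, _mm512_add_ps(_mm512_loadu_ps(x + i), one));
        if (i < n) {
            __mmask16 m = (__mmask16)_bzhi_u32(0xFFFFu, (uint32_t)(n - i));
            __m512 v = _mm512_maskz_loadu_ps(m, x + i);
            _mm512_mask_storeu_ps(x + i, m, _mm512_add_ps(v, one));
        }
    }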

dragontamer 8 hours ago | parent [-]

> It takes like 2 instrs to compute the mask from a length (AVX-512 doesn't have any instruction for this so you need to do a bzhi in GPR and move that to a mask register) so you still would likely want to avoid it in the hot loop.

Keep a register with the values IdxAdjustment = [0, 1, 2, 3, 4, 5, 6, 7].

ExecutionMask = (Broadcast(CurIdx) + IdxAdjustment) < Length

Keep looping while(any(vector) < Length), which is as simple as "while(exec_mask != 0)".

I'm not seeing this take up any "extra" instructions at all. You needed the while() loop after all. It costs +1 Vector Register (IdxAdjustment) and a kMask by my count.

> And this doesn't help pre-AVX-512, and AVX-512 isn't particularly widespread

AVX512 is over 10 years old now. And the premier SIMD execution instruction set is CUDA / NVidia, not AVX512.

AVX512 is now available on all AMD CPUs and has been for the last two generations. It is also available on a select number of Intel CPUs. There is also AMD RDNA, Intel Xe ISAs that could be targeted.

> instrs that do exist are rather slow on AMD (e.g. unconditional 12 cycles/instr throughput for masked-storing 8 32-bit elements);

Okay, I can see that possibly being an issue then.

EDIT: AMD Zen5 Optimization Manual states Latency1 and throughput 2-per-clocktick, while Intel's Skylake documentation of https://www.intel.com/content/www/us/en/docs/intrinsics-guid... states Latency5 Throughput 1-per-clock-tick.

AMD Zen5 seems to include support for vmovdqu8 (it's in the optimization guide .xlsx sheet with latencies/throughputs, also as 1-latency / 4-throughput).

I'm not sure if the "mask" register changes the instruction. I'll do some research to see if I can verify your claim (I don't have my Zen5 computer built yet... but its soon).

dzaima 7 hours ago | parent [-]

That's two instrs - bumping the indices, and doing the comparison. You still want scalar pointer/index bumping for contiguous loads/stores (using gathers/scatters for those would be stupid and slow), and that gets you the end check for free via fused cmp+jcc.

And those two instrs are vector instrs, i.e. competing with execution units for the actual thing you want to compute, whereas scalar instrs have at least some independent units that allow avoiding desiring infinite unrolling.

And if your loop is processing 32-bit (or, worse, smaller) elements, those indices, if done as 64-bit, as most code will do, will cost even more.

AVX-512 might be 10 years old, but Intel's latest (!) cores still don't support it on hardware with E-cores, so still a decade away from being able to just assume it exists. Another thread on this post mentioned that Intel has shipped hardware without AVX/AVX2/FMA as late as 2021 even.

> Okay, I can see that possibly being an issue then.

To be clear, that's only the AVX2 instrs; AVX-512 masked loads/stores are fast (..yes, even on Zen 4 where the AVX-512 masked loads/stores are fast, the AVX2 ones that do an equivalent amount of work (albeit taking the mask in a different register class) are slow). uops.info: https://uops.info/table.html?search=maskmovd%20m256&cb_lat=o...

Intel also has AVX-512 masked 512-bit 8-bit-elt stores at half the throughput of unmasked for some reason (not 256-bit or ≥16-bit-elt though; presumably culprit being the mask having 64 elts): https://uops.info/table.html?search=movdqu8%20m512&cb_lat=on...

And masked loads use some execution ports on both Intel and AMD, eating away from throughput of the main operation. All in all just not implemented for being able to needlessly use masked loads/stores in hot loops.

dragontamer 7 hours ago | parent [-]

Gotcha. Makes sense. Thanks for the discussion!

Overall, I agree that AVX and Neon have their warts and performance issues. But they're like 15+ years old now and designed well before GPU Compute was possible.

> using gathers/scatters for those would be stupid and slow

This is where CPUs are really bad. GPUs will coalesce gather/scatters thanks to __shared__ memory (with human assistance of course).

But also the simplest of load/store patterns are auto-detected and coalesced. So a GPU programmer doesn't have to worry about SIMD lane load/store (called vgather in AVX512) being slower. It's all optimized to hell and back.

Having a full lane-to-lane crossbar and supporting high performance memory access patterns needs to be a priority moving forward.

bob1029 15 hours ago | parent | prev | next [-]

> Since the register size is fixed there is no way to scale the ISA to new levels of hardware parallelism without adding new instructions and registers.

I look at SIMD as the same idea as any other aspect of the x86 instruction set. If you are directly interacting with it, you should probably have a good reason to be.

I primarily interact with these primitives via types like Vector<T> in .NET's System.Numerics namespace. With the appropriate level of abstraction, you no longer have to worry about how wide the underlying architecture is, or if it even supports SIMD at all.

I'd prefer to let someone who is paid a very fat salary by a F100 spend their full time job worrying about how to emit SIMD instructions for my program source.

lauriewired 13 hours ago | parent | prev | next [-]

The three “flaws” that this post lists are exactly what the industry has been moving away from for the last decade.

Arm’s SVE and RISC-V’s vector extension are both vector-length-agnostic. RISC-V’s implementation is particularly nice: you only have to compile for one code path (unlike AVX, with the need for fat-binary else/if trees).
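
For illustration, a vector-length-agnostic loop written with SVE ACLE intrinsics; the same binary runs on any SVE width (a sketch, with add_f32 as a made-up kernel):

    #include <arm_sve.h>
    #include <stdint.h>

    // c[i] = a[i] + b[i]; svcntw() is the per-vector element count (unknown
    // at compile time) and the while-lt predicate covers the tail, so one
    // code path serves 128-bit through 2048-bit implementations.
    void add_f32(const float *a, const float *b, float *c, int64_t n) {
        for (int64_t i = 0; i < n; i += svcntw()) {
            svbool_t pg = svwhilelt_b32_s64(i, n);
            svfloat32_t va = svld1_f32(pg, a + i);
            svfloat32_t vb = svld1_f32(pg, b + i);
            svst1_f32(pg, c + i, svadd_f32_x(pg, va, vb));
        }
    }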

freeone3000 17 hours ago | parent | prev | next [-]

x86 SIMD suffers from register aliasing. xmm0 is actually the low-half of ymm0, so you need to explicitly tell the processor what your input type is to properly handle overflow and signing. Actual vectorized instructions don’t have this problem but you also can’t change it now.

dang 7 hours ago | parent | prev | next [-]

Related:

Three Fundamental Flaws of SIMD - https://news.ycombinator.com/item?id=28114934 - Aug 2021 (20 comments)

gitroom 17 hours ago | parent | prev | next [-]

Oh man, totally get the pain with compilers and SIMD tricks - the struggle's so real. Ever feel like keeping low level control is the only way stuff actually runs as smooth as you want, or am I just too stubborn to give abstractions a real shot?

convolvatron 18 hours ago | parent | prev | next [-]

i would certainly add lack of reductions ('horizontal' operations) and a more generalized model of communication to the list.

adgjlsfhk1 14 hours ago | parent [-]

The tricky part with reductions is that they are somewhat inherently slow since they often need to be done pairwise and a pairwise reduction over 16 elements will naturally have pretty limited parallelism.

convolvatron 14 hours ago | parent [-]

kinda? this is sort of a direct result of the 'vectors are just sliced registers' model. if i do a pairwise operation and divide my domain by 2 at each step, is the resulting vector sparse or dense? if its dense then I only really top out when i'm in the last log2slice steps.

sweetjuly 6 hours ago | parent [-]

Yes, but this is not cheap for hardware. CPU designers love SIMD because it lets them just slap down ALUs in parallel and get 32x performance boosts. Reductions, however, are not entirely parallel and instead have a relatively huge gate depth.

For example, suppose an add operation has a latency of one unit in some silicon process. To add reduce a 32 element vector, you'll have a five deep tree, which means your operation has a latency of five units. You can pipeline this, but you can't solve the fact that this operation has a 5x higher latency than the non-reduce operations.

adgjlsfhk1 3 hours ago | parent [-]

for some, you can cheat a little (e.g. partial sum reduction), but then you don't get to share IP with the rest of your ALUs. I do really want to see what an optimal 32-wide reduction circuit looks like. For integers, you pretty clearly can do much better than pairwise reduction. float sounds tricky

timewizard 14 hours ago | parent | prev [-]

> Another problem is that each new SIMD generation requires new instruction opcodes and encodings.

It requires new opcodes. It does not strictly require new encodings. Several new encodings are legacy compatible and can encode previous generations vector instructions.

> so the architecture must provide enough SIMD registers to avoid register spilling.

Or the architecture allows memory operands. The great joy of basic x86 encoding is that you don't actually need to put things in registers to operate on them.

> Usually you also need extra control logic before the loop. For instance if the array length is less than the SIMD register width, the main SIMD loop should be skipped.

What do you want? No control overhead or the speed enabled by SIMD? This isn't a flaw. This is a necessary price to achieve the efficiency you do in the main loop.

camel-cdr 14 hours ago | parent [-]

> The great joy of basic x86 encoding is that you don't actually need to put things in registers to operate on them.

That's just spilling with fewer steps. The executed uops should be the same.

timewizard 13 hours ago | parent [-]

> That's just spilling with fewer steps.

Another way to say this is it's "more efficient."

> The executed uops should be the same.

And "more densely coded."

camel-cdr 12 hours ago | parent [-]

hm, I was wondering how the density compares with x86 having more complex encodings in general.

vaddps zmm1,zmm0,ZMMWORD PTR [r14]

takes six bytes to encode:

62 d1 7c 48 58 0e

In SVE and RVV a load+add takes 8 bytes to encode.