▲ | jcranmer a day ago |
There's a category of autovectorization known as Superword-Level Parallelism (SLP) which effectively scavenges an entire basic block for individual instruction sequences that might be squeezed together into a SIMD instruction. This kind of vectorization doesn't work well with vector-length-agnostic ISAs, because you generally can't scavenge more than a few elements anyway, and inducing any sort of dynamic vector length is more likely to slow your code down as a result (since you can't do constant folding).

There are other kinds of interesting things you can do with vectors that aren't improved by dynamic-length vectors. Something like abseil's hash table, which uses vector code to efficiently manage the occupancy bitmap. Dynamic vector length doesn't help much there, particularly because the vector length you can parallelize over is itself intrinsically low (if you're scanning dozens of elements to find an empty slot, something is wrong).

Vector swizzling is harder to do dynamically, and at high vector factors it's difficult to do generically in hardware. That means that even before considering dynamic sizes, vectorization gets trickier as vectors grow if you have to do a lot of swizzling.

In general, vector-length-agnostic is really only good for SIMT-like code, in which you can express the vector body as a more or less independent f(index) over some range of indices knowable before the loop executes. Stuff like DAXPY, or BLAS in general. Move away from this model, and that agnosticism becomes overhead that doesn't pay for itself. (Now granted, this kind of model is a large fraction of parallelizable code, but it's far from all of it.)
▲ | camel-cdr a day ago | parent |
The SLP vectorizer is a good point, but I think that's, in comparison with x86, more a problem of the float and vector register files not being shared (in SVE and RVV). You don't need to reconfigure the vector length; just use it at the full width.

> Something like abseil's hash table

If I remember correctly, the abseil lookup does scale with vector length, as long as you use the native data path width (albeit with small gains). There is a problem with vector-length-agnostic handling of abseil, which is the iterator API. With a different API, or compilers that could eliminate redundant predicated loads/stores, this would be easier.

> good for SIMT-like codes

Certainly, but I've also seen/written a lot of vector-length-agnostic code using shuffles, which doesn't fit into the SIMT paradigm, so the scope is larger than just SIMT.

---

As a general comparison: take AVX10/128, AVX10/256 and AVX10/512, overlap their instruction encodings, remove the few instructions that no longer make sense, and add a cheap instruction to query the vector length (probably also instructions like vid and viota, for easier shuffle synthesis). Now you have a variable-length SIMD ISA that feels familiar. The above is basically what SVE is.
▲ | jonstewart 18 hours ago | parent |
A number of the cool string-processing SIMD techniques depend a _lot_ on register widths and instruction performance characteristics. There's a fair argument to be made that x64 could be made more consistent/legible for these use cases, but this isn't matmul: whether you have 128, 256, or 512 bits matters hugely, and you may want entirely different algorithms contingent on that.