saltcured 5 days ago

It's all very circular if you try to avoid the architecture-specific details of individual hardware designs. A SIMD "lane" is roughly equivalent to an ALU (arithmetic logic unit) in a conventional CPU design. Conceptually, it processes one primitive operation such as add, multiply, or FMA (fused multiply-add) at a time on scalar values.
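To make that lane==ALU picture concrete, here is a minimal sketch using x86 AVX/FMA intrinsics (my choice of example, not something implied above; any SIMD ISA would do). One 256-bit instruction performs the same fused multiply-add in all eight 32-bit float lanes at once:

    /* One 256-bit register = 8 float lanes; one instruction = 8 scalar FMAs.
     * Compile with e.g. gcc -O2 -mavx2 -mfma on x86. */
    #include <immintrin.h>
    #include <stdio.h>

    int main(void) {
        float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        float b[8] = {10, 10, 10, 10, 10, 10, 10, 10};
        float c[8] = {0.5f, 0.5f, 0.5f, 0.5f, 0.5f, 0.5f, 0.5f, 0.5f};
        float out[8];

        __m256 va = _mm256_loadu_ps(a);
        __m256 vb = _mm256_loadu_ps(b);
        __m256 vc = _mm256_loadu_ps(c);

        /* out[i] = a[i] * b[i] + c[i] for all 8 lanes in one go */
        _mm256_storeu_ps(out, _mm256_fmadd_ps(va, vb, vc));

        for (int i = 0; i < 8; i++)
            printf("%g ", out[i]);
        printf("\n");
        return 0;
    }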

Each such scalar operation is on a fixed-width primitive number, which is where we get into the questions of what numeric types the hardware supports. E.g. we used to worry about 32- vs 64-bit support in GPUs, and now everyone is worrying about smaller widths. Some image processing tasks benefit from 8- or 16-bit values. Lately, people are dipping into heavily quantized models that can benefit from even narrower values. The narrower values mean a smaller memory footprint, but also generally mean that you can do more parallel operations with "similar" amounts of logic, since each ALU processes fewer bits.
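A hedged sketch of that width/lane-count trade-off, again picking 256-bit AVX2 registers as an arbitrary example: the same register is either eight 32-bit lanes or thirty-two 8-bit lanes, so one instruction does 4x as many adds when the elements are 4x narrower.

    /* Same 256-bit register, two different lane widths.
     * Compile with e.g. gcc -O2 -mavx2. */
    #include <immintrin.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        int32_t w32[8];
        int8_t  w8[32];

        /* 8 parallel 32-bit adds in one instruction */
        _mm256_storeu_si256((__m256i *)w32,
            _mm256_add_epi32(_mm256_set1_epi32(1000), _mm256_set1_epi32(234)));

        /* 32 parallel 8-bit adds in one instruction of the same width */
        _mm256_storeu_si256((__m256i *)w8,
            _mm256_add_epi8(_mm256_set1_epi8(100), _mm256_set1_epi8(23)));

        printf("32-bit lanes: 8 results, e.g. %d\n", w32[0]);  /* 1234 */
        printf(" 8-bit lanes: 32 results, e.g. %d\n", w8[0]);  /* 123 */
        return 0;
    }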

Where this lane==ALU analogy stumbles is when you get into all the details of how these ALUs are ganged together or, in fact, repartitioned on the fly. E.g. the lanes in a SIMD group share some control signals and are not truly independent computation streams. Different memory architectures and superscalar designs also blur the ability to count computational throughput, as the number of operations that can retire per cycle becomes very task-dependent due to memory or port contention inside these beasts.
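As a back-of-envelope illustration of that task dependence (every number below is invented for the sketch, not taken from any real chip): the same core looks compute-bound or memory-bound depending on how many bytes the task has to move per flop.

    /* All numbers are hypothetical, for illustration only. */
    #include <stdio.h>

    int main(void) {
        double ghz            = 3.0;  /* hypothetical clock */
        double fma_ports      = 2.0;  /* hypothetical FMA-capable issue ports */
        double lanes          = 8.0;  /* 32-bit lanes per port */
        double flops_per_fma  = 2.0;  /* one FMA counted as mul + add */
        double mem_gbps       = 50.0; /* hypothetical DRAM bandwidth */
        double bytes_per_flop = 8.0;  /* a streaming task touching 2 floats per flop */

        double compute = ghz * 1e9 * fma_ports * lanes * flops_per_fma;
        double memory  = mem_gbps * 1e9 / bytes_per_flop;

        printf("compute-bound ceiling: %.1f GFLOP/s\n", compute / 1e9);
        printf("memory-bound ceiling:  %.1f GFLOP/s\n", memory / 1e9);
        /* Achievable throughput is roughly min(compute, memory), and it shifts
         * as soon as bytes_per_flop changes, which is the task dependence above. */
        return 0;
    }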

And if a system can reconfigure the lane width, it may effectively change a wide ALU into N logically smaller ALUs that reuse most of the same gates. Or, it might redirect some tasks to a completely different set of narrower hardware lanes that are otherwise idle. The dynamic ALU splitting was the conventional story around desktop SIMD, but I think it is less true in modern designs. AFAICT, modern designs seem more likely to have some dedicated chip regions that go idle when they are not processing specific widths.
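For a software analogue of the "split a wide ALU into narrower lanes" idea, the old SWAR trick treats one 64-bit add as eight 8-bit adds by masking off the bit that would carry between lanes. Hardware does this in gates rather than masks, so this is only a sketch of the partitioning concept, not how any particular chip works.

    #include <stdint.h>
    #include <stdio.h>

    /* Add eight packed 8-bit lanes held in a single 64-bit word. */
    static uint64_t add_packed_u8(uint64_t a, uint64_t b) {
        const uint64_t H = 0x8080808080808080ULL;  /* high bit of each lane */
        /* Add the low 7 bits of each lane (cannot carry across lanes),
         * then fold the lanes' high bits back in with XOR. */
        return ((a & ~H) + (b & ~H)) ^ ((a ^ b) & H);
    }

    int main(void) {
        uint64_t a = 0x0102030405060708ULL;
        uint64_t b = 0x10FF101010101010ULL;  /* one lane wraps mod 256 */
        printf("%016llx\n", (unsigned long long)add_packed_u8(a, b));
        return 0;
    }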