The name, yes, but going by the name is a bad idea, as the V in AVX also stands for Vector.
BTW, you'll be disappointed if you think of the P extension as something like SSE/AVX. The target for it is way lower power/perf, like a stripped-down MMX.
My point was about the underlying hardware implementation, specifically:
> "As shown in Figure 1-3, array processors scale performance spatially by replicating processing elements, while vector processors scale performance temporally by streaming data through pipelined functional units"
This applies to the hardware implementation, not the ISA, which the text does not make clear.
You can implement AVX-512 with a smaller data path than the register width and "scale performance temporally by streaming data through pipelined functional units". Zen4 is a simple example of this, but there is nothing stopping you from implementing AVX-512 on top of heavily temporally pipelined 64-bit-wide execution units.
Similarly, you can implement RVV with a smaller data path than VLEN, but you can also implement it as a bog-standard SIMD processor.
The only thing that slightly complicates the comparison is LMUL, but it is fundamentally equivalent to unrolling.
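To make the "LMUL is just unrolling" point concrete, here is a minimal sketch using the RVV C intrinsics (assuming a toolchain that ships `<riscv_vector.h>`; the function name is just illustrative): a single LMUL=4 add is one architectural instruction over a group of four vector registers, which is the same work a 4x-unrolled LMUL=1 loop body would issue.

```c
#include <riscv_vector.h>
#include <stddef.h>
#include <stdint.h>

/* Processes one strip of at most 4*VLEN/32 elements with a single
 * LMUL=4 add over a group of four vector registers. */
void add_one_strip(int32_t *dst, const int32_t *a, const int32_t *b, size_t n) {
    size_t vl = __riscv_vsetvl_e32m4(n);           /* vl <= 4 * (VLEN / 32) */
    vint32m4_t va = __riscv_vle32_v_i32m4(a, vl);
    vint32m4_t vb = __riscv_vle32_v_i32m4(b, vl);
    /* One architectural vadd.vv over a 4-register group; a core is free to
     * execute it as four back-to-back LMUL=1 adds, i.e. an unrolled body. */
    __riscv_vse32_v_i32m4(dst, __riscv_vadd_vv_i32m4(va, vb, vl), vl);
}
```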
Imo, the only substantive difference between Vector and SIMD ISAs is the existence of a vl-based predication mechanism.
Whether an ISA has a fixed register width or lets you write vector-length-agnostic code is an independent dimension of the ISA design. E.g. the Cray-1 was without a doubt a Vector processor, but the vector registers on all compatible platforms had the exact same length. It did, however, have the mentioned vl-based predication mechanism.
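For reference, this is roughly what vl-based predication looks like from the software side (again a sketch with the RVV C intrinsics, assuming `<riscv_vector.h>`; the function name is illustrative): each iteration asks vsetvl for a vl no larger than the remaining element count, so the tail is handled by predication rather than a scalar epilogue, and the same code runs unchanged whether the implementation's vector length is fixed or scalable.

```c
#include <riscv_vector.h>
#include <stddef.h>
#include <stdint.h>

/* dst[i] = a[i] + b[i], stripmined via vl. No scalar tail loop needed:
 * on the last iteration vsetvl simply returns a smaller vl. */
void vec_add(int32_t *dst, const int32_t *a, const int32_t *b, size_t n) {
    while (n > 0) {
        size_t vl = __riscv_vsetvl_e32m1(n);       /* hardware picks vl <= n */
        vint32m1_t va = __riscv_vle32_v_i32m1(a, vl);
        vint32m1_t vb = __riscv_vle32_v_i32m1(b, vl);
        __riscv_vse32_v_i32m1(dst, __riscv_vadd_vv_i32m1(va, vb, vl), vl);
        a += vl; b += vl; dst += vl; n -= vl;
    }
}
```

A fixed-length machine in the Cray-1 mould runs this loop just as well as a scalable one; it is the vl mechanism doing the work, not the scalability.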
You could take AVX10/128, AVX10/256 and AVX10/512, overlap their instruction encodings, and end up with a scalable SIMD ISA, for which you can write vector-length-agnostic code, but that doesn't make it a Vector ISA any more than it was before.