tliltocatl · 2 hours ago
Not really. Itanium was the result of some people at Intel being obsessed with LINPACK benchmarks and forgetting everything else. It was terrible at random memory access, and hence at everything that isn't floating-point number-crunching. The compiler can't hide memory access latency because that latency is fundamentally unpredictable. VLIW does work magic for floating-point latency (which is predictable), but:

- As transistors got smaller, FP performance increased while memory latency stayed the same (or even got worse).

- If you are doing a lot of floating point, you are probably doing array processing, so you might as well go for a GPU (or at least SIMD).

- Low instruction density is bad for the I-cache. Yes, RISC fans, density matters! And VLIW is an absolute disaster in that regard. Again, this is less visible in number-crunching workloads, where the processor executes relatively small loops many times over.
fjjfnrnr · an hour ago · parent
Naive question: shouldn't VLIW be beneficial for memory access, since each instruction does quite a lot of work, thus giving the memory time to fetch the next instruction?