textlapse 2 hours ago
Warp divergence is expensive: essentially the hardware issues "don't run this lane" masking to maintain SIMT. GPUs are still not practically Turing-complete in the sense that there are strict restrictions on loops/goto/IO/waiting (there are a bunch of band-aids to make it pretend it's not a restricted programming model). So I am not sure retrofitting a Ferrari to cosplay as an Amazon delivery van is useful other than as a tech showcase? Good tech showcase though :)
zozbot234 an hour ago
I think you're conflating GPU 'threads' and 'warps'. GPU 'threads' are SIMD lanes that all run the exact same instructions and control flow (only with different masking/predication), whereas GPU warps are hardware-level threads that run on a single compute unit. The "don't run code" cost shows up when lanes *within* a warp diverge; running different code on different warps carries no such penalty, unlike GPU threads.
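The within-warp cost both comments are describing can be sketched with a toy model. This is a hypothetical CPU-side illustration, not real GPU code: each "lane" of a warp steps through every branch path that any lane takes, and an active-mask decides which lanes commit results. The function name `run_warp`, the branch condition, and the instruction counter are all made up for illustration.

```python
def run_warp(values):
    """Toy SIMT model: each lane computes x*2 if x is even, else x+1.

    Returns (per-lane results, number of branch paths the warp had to
    issue). Real GPUs implement this masking in hardware; this is only
    a sketch of the control-flow cost, not of any actual architecture.
    """
    issued = 0
    results = [None] * len(values)
    mask = [x % 2 == 0 for x in values]  # which lanes take the "even" path

    if any(mask):            # at least one lane takes path A: issue it
        issued += 1
        for i, active in enumerate(mask):
            if active:       # inactive lanes sit idle ("don't run code")
                results[i] = values[i] * 2
    if not all(mask):        # at least one lane takes path B: issue it too
        issued += 1
        for i, active in enumerate(mask):
            if not active:
                results[i] = values[i] + 1
    return results, issued

# Divergent warp: lanes disagree, so BOTH paths are issued even though
# each lane only needs one of them.
print(run_warp([1, 2, 3, 4]))   # ([2, 4, 4, 8], 2)

# Uniform warp: all lanes agree, only one path is issued.
print(run_warp([2, 4, 6]))      # ([4, 8, 12], 1)
```

Two separate warps taking the two different paths would each look like the uniform case here, which is the distinction being drawn above: divergence is paid per warp, not across warps.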
| ||||||||