▲ londons_explore 2 days ago
I suspect it's because current GPU hardware can't efficiently train such low-bit-depth models. You end up needing the activations to use 8 or 16 bits in all the data paths, and you don't get any more throughput per cycle on the multiplications than you would have done with FP32. Custom silicon would solve that, but nobody wants to build custom silicon for a data format that will go out of fashion before the production run is done.
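Rough numpy sketch of what I mean (shapes, scale, and the dequantize step are just illustrative, not anyone's actual pipeline): the weights may be stored in 4 bits, but the arithmetic still happens at a wider precision, so the multiplier throughput is no better than the wider format's.

    import numpy as np

    # Toy 4-bit weights (values in -8..7) and fp16 activations; sizes are arbitrary.
    rng = np.random.default_rng(0)
    w_int4 = rng.integers(-8, 8, size=(256, 256), dtype=np.int8)
    x = rng.standard_normal((64, 256)).astype(np.float16)
    scale = np.float16(0.01)

    # Today the low-bit weights usually get dequantized to fp16 (or int8) before
    # the matmul, so the multiply itself runs at the wide rate -- only memory is saved.
    w_fp16 = w_int4.astype(np.float16) * scale
    y = x @ w_fp16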
▲ zamadatix 2 days ago | parent
The custom CUDA kernel for 4-in-8 seems to have come out better than a naive approach (such as just treating each value as an fp8/int8), and it also lowers memory bandwidth. Custom hardware would certainly improve on that further, but I don't think hardware is what's limiting training to 2-8 billion parameters so much as research convenience while the groundwork for this type of model is still being figured out.
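For what it's worth, the core of a 4-in-8 layout is just nibble packing. A rough numpy sketch (the real kernel is CUDA and surely differs; the helper names here are made up): two 4-bit weights share one byte, so half the bytes cross the memory bus, at the price of an unpack step before the math.

    import numpy as np

    def pack_int4(w):
        # Pack two signed 4-bit values per byte: even indices -> low nibble,
        # odd indices -> high nibble.
        w = (w & 0x0F).astype(np.uint8)
        return w[..., 0::2] | (w[..., 1::2] << 4)

    def unpack_int4(packed):
        # Split the nibbles back out and sign-extend them to int8.
        lo = (packed & 0x0F).astype(np.int8)
        hi = ((packed >> 4) & 0x0F).astype(np.int8)
        out = np.empty(packed.shape[:-1] + (packed.shape[-1] * 2,), dtype=np.int8)
        out[..., 0::2], out[..., 1::2] = lo, hi
        return np.where(out > 7, out - 16, out)

    w = np.random.randint(-8, 8, size=(4, 8), dtype=np.int8)
    packed = pack_int4(w)                       # half the bytes to move from memory
    assert np.array_equal(unpack_int4(packed), w)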
▲ Havoc 2 days ago | parent
Makes sense. Might be good for memory-throughput-constrained devices though, so I'm hoping it'll pick up.