punnerud 2 hours ago

Could we all get bigger FPGAs and load the model onto it using the same technique?

generuso an hour ago | parent | next

You could [1], but it is not cheap -- a 32GB development board carrying the FPGA used in the article went for about $16K.

[1] https://arxiv.org/abs/2401.03868

wmf an hour ago | parent | prev | next

FPGAs have really low density, so that would be ridiculously inefficient, probably requiring ~100 FPGAs to load the model. You'd be better off with Groq.
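
A rough order-of-magnitude check of that figure (a minimal sketch; every constant is an assumed, illustrative value, not a number from the article). If the weights have to live entirely in on-chip SRAM, Groq-style, the chip count comes straight out of weight bytes divided by SRAM per chip:

    # All constants are illustrative assumptions, not measured values.
    params = 8e9             # assume an 8B-parameter model
    bytes_per_param = 0.5    # 4-bit quantized weights
    sram_per_fpga = 50e6     # ~50 MB of on-chip BRAM/URAM in a very large FPGA

    weights_bytes = params * bytes_per_param       # 4 GB of weights
    print(round(weights_bytes / sram_per_fpga))    # -> 80 chips, before activations or KV cache

The count scales linearly with model size, so anything in the tens of billions of parameters lands at ~100 chips or well beyond.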

menaerus an hour ago | parent

Not sure what you're on, but I think that's incorrect. You can use a high-density, HBM-enabled FPGA with (LP)DDR5 and a sufficient number of logic elements to implement the inference. The reason we don't see it in action is most likely that such FPGAs are insanely expensive and nowhere near as available off-the-shelf as GPUs are.
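
To put hedged numbers on the HBM point (the bandwidth and model-size figures below are spec-sheet-style assumptions, not from this thread): single-stream decoding is memory-bandwidth bound, since every generated token streams the full weight set once, so tokens/s is roughly bandwidth divided by model size:

    # Assumed, approximate figures for illustration only.
    weights_gb = 16     # a ~32B model quantized to 4 bits
    hbm_bw = 460        # GB/s for an HBM2-equipped FPGA card
    ddr5_bw = 80        # GB/s for a couple of (LP)DDR5 channels

    print(hbm_bw / weights_gb)     # ~29 tokens/s upper bound with weights in HBM
    print(ddr5_bw / weights_gb)    # ~5 tokens/s with weights spilled to DDR5

On paper, that puts an HBM-equipped FPGA in midrange-GPU territory for decode throughput, consistent with price and availability, not raw capability, being the blocker.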

fercircularbuf 2 hours ago | parent | prev

I thought about this exact question yesterday. If it isn't feasible, I'm curious to know why. It would allow one to upgrade to the next model without fabricating all-new hardware.