| ▲ | mrinterweb 8 hours ago |
| I've been wondering when we will see general-purpose consumer FPGAs, and eventually ASICs, for inference. This reminds me of Bitcoin mining: it started with GPUs, and I think I remember a brief FPGA period that then transitioned to ASICs. My limited understanding of Google's tensor processing unit chips is that they are effectively transformer ASICs. That's likely a wild over-simplification of Google's TPU, but Gemini is proof that GPUs are not needed for inference. I suspect GPU inference will come to an end soon, as it will likely be wildly inefficient compared to purpose-built transformer chips. All those Nvidia GPU-based servers may become obsolete should transformer ASICs become mainstream. GPU Bitcoin mining is just an absolute waste of money (cost of electricity) now, and I believe the same will soon be true for GPU-based inference. The hundreds of billions of dollars being invested in GPU-based inference seem like an extremely risky bet that transformer ASICs won't happen, even though Google has already widely deployed its own TPUs. |
|
| ▲ | fooblaster 7 hours ago | parent | next [-] |
| FPGAs will never rival GPUs or TPUs for inference. The main reason is that GPUs aren't really GPUs anymore: 50% of the die area or more is fixed-function matrix multiplication units and their associated dedicated storage. That just isn't general-purpose hardware anymore, and FPGAs cannot rival it with their configurable DSP slices. They would need dedicated systolic blocks, which they aren't getting. The closest thing is the Versal ML tiles, and those are entire processors, not FPGA blocks. Those have failed because they are essentially impossible to program. |
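As context, a single tensor-core MMA instruction computes a small fixed-size tile of D = A·B + C entirely in hardware. Below is a minimal Python/NumPy sketch of that operation; the 16x16x16 tile shape and fp16-in/fp32-accumulate choice are illustrative assumptions, since the exact shapes and dtypes vary by GPU generation:

    import numpy as np

    # One tensor-core MMA step, in spirit: D = A @ B + C on a small fixed tile.
    # Tile shape and precisions below are illustrative assumptions only.
    M = N = K = 16
    A = np.random.rand(M, K).astype(np.float16)   # fp16 inputs
    B = np.random.rand(K, N).astype(np.float16)
    C = np.zeros((M, N), dtype=np.float32)        # fp32 accumulator
    D = A.astype(np.float32) @ B.astype(np.float32) + C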
| |
| ▲ | fpgaminer 6 hours ago | parent | next [-] | | > FPGAs will never rival GPUs or TPUs for inference. The main reason is that GPUs aren't really GPUs anymore. Yeah. Even for Bitcoin mining, GPUs dominated FPGAs. I created the Bitcoin mining FPGA project(s), and they were only interesting for two reasons: 1) they were far more power efficient, which in the case of mining changes the equation significantly; and 2) GPUs at the time had poor binary math support, which hampered their performance, whereas an FPGA is just one giant binary math machine. | | |
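To make the "giant binary math machine" point concrete, here is a minimal Python sketch (illustrative only) of the 32-bit primitives inside a SHA-256 round, which is essentially all a Bitcoin miner computes. It is nothing but rotates, XORs and ANDs, which map directly onto FPGA lookup tables but were comparatively awkward on GPUs of that era:

    MASK = 0xFFFFFFFF

    def rotr(x, n):
        # 32-bit right rotation
        return ((x >> n) | (x << (32 - n))) & MASK

    def ch(e, f, g):
        # "choose": bits of f where e is 1, bits of g where e is 0
        return ((e & f) ^ (~e & g)) & MASK

    def maj(a, b, c):
        # "majority" of the three inputs at each bit position
        return (a & b) ^ (a & c) ^ (b & c)

    def big_sigma0(a):
        return rotr(a, 2) ^ rotr(a, 13) ^ rotr(a, 22)

    def big_sigma1(e):
        return rotr(e, 6) ^ rotr(e, 11) ^ rotr(e, 25)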
| ▲ | beeflet 6 hours ago | parent [-] | | I have wondered if it is possible to make a mining algorithm FPGA-hard in the same way that RandomX is CPU-hard and memory-hard. Relative to CPUs, the "programming time" (reconfiguration) cost of an FPGA is high. Nice username btw. |
| |
| ▲ | Lerc 6 hours ago | parent | prev | next [-] | | I think quantisation will get to the point where the GPUs that run these models look more like FPGAs than graphics renderers.
If you quantize far enough, things begin to look more like gates than floating-point units. At that level an FPGA wouldn't run your model; it would be your model. | |
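A minimal sketch of that idea, assuming an extreme 1-bit quantization: with weights and activations restricted to +/-1 and packed into machine words, a dot product collapses to XOR plus popcount, i.e. gate-level logic rather than floating-point arithmetic (Python, illustrative only):

    def binary_dot(a_bits, b_bits, n):
        """Dot product of two n-element +/-1 vectors packed as ints
        (bit = 1 means +1, bit = 0 means -1)."""
        mismatches = bin((a_bits ^ b_bits) & ((1 << n) - 1)).count("1")
        return n - 2 * mismatches  # matches contribute +1, mismatches -1

    # example: (+1, +1, -1, +1) . (+1, -1, +1, +1), packed MSB-first
    print(binary_dot(0b1101, 0b1011, 4))  # -> 0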
| ▲ | ithkuil 7 hours ago | parent | prev | next [-] | | Turns out that a lot of interesting computation can be expressed as a matrix multiplication. | | | |
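As one illustration (a bare-bones Python/NumPy sketch, not any particular library's implementation): scaled dot-product attention, the core of a transformer layer, is itself just two matrix multiplications wrapped around a softmax:

    import numpy as np

    def sdpa(Q, K, V):
        # scaled dot-product attention: softmax(Q K^T / sqrt(d)) V
        scores = Q @ K.T / np.sqrt(Q.shape[-1])
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)
        return w @ V

    Q = K = V = np.random.rand(8, 64)  # 8 tokens, 64-dim head (toy sizes)
    print(sdpa(Q, K, V).shape)         # (8, 64)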
| ▲ | dnautics 2 hours ago | parent | prev | next [-] | | I don't think this is correct. For inference, the bottleneck is memory bandwidth, so if you can hook up an FPGA with better memory, it has an outside shot at beating GPUs, at least in the short term. I mean, I worked with FPGAs that outperformed H200s on Llama3-class models quite a while ago. | | |
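A rough back-of-the-envelope sketch of that bandwidth argument (Python, all numbers are approximate assumptions): at batch size 1, every generated token has to stream essentially all of the weights through memory once, so bandwidth rather than FLOPS sets the ceiling:

    # illustrative numbers, not measurements
    hbm_bytes_per_s = 4.8e12      # roughly H200-class HBM bandwidth
    params = 70e9                 # ~70B-parameter model (assumed)
    bytes_per_param = 1           # int8/fp8 weights

    weight_bytes = params * bytes_per_param
    tokens_per_s = hbm_bytes_per_s / weight_bytes
    print(f"upper bound: ~{tokens_per_s:.0f} tokens/s per stream")  # ~69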
| ▲ | fooblaster 2 hours ago | parent [-] | | Show me a single FPGA that can outperform a B200 at matrix multiplication (or even come close) at any usable precision. A B200 can do 10 petaOPS at fp8, theoretically. I do agree memory bandwidth is also a problem for most FPGA setups, but Xilinx ships HBM on some SKUs and they are not competitive at inference as far as I know. |
| |
| ▲ | alanma 6 hours ago | parent | prev [-] | | yup, GBs are so much tensor core nowadays :) |
|
|
| ▲ | liuliu 3 hours ago | parent | prev | next [-] |
| This is a common misunderstanding from industry observers (not industry practitioners). Each generation of (NVIDIA) GPU is an ASIC with a different ISA etc. Bitcoin mining simply was not important enough (last year, only $23B of Bitcoin was mined in total (at $100,000 per coin)). There is ample incentive to implement every useful instruction in the GPU (without worrying about backward compatibility, thanks to PTX). Transformer ASICs won't happen (defined as: no chip with a single instruction for SDPA, other than something broadly marketed as a GPU, will reach annualized sales of more than $3B). Mark my words. I am happy to take a bet on longbets.org with anyone on this for $1000, and my part will go to the PSF. |
| |
|
| ▲ | bee_rider 6 hours ago | parent | prev | next [-] |
| There are also CPU extensions like AVX512-VNNI and AVX512-BF16. Maybe the idea of communicating out to a card that holds your model will eventually go away. Inference is not too memory bandwidth hungry, right? |
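For reference, AVX512-VNNI's VPDPBUSD instruction fuses an int8 dot-product-accumulate into a single operation per 32-bit lane. A minimal Python model of one lane follows (illustrative only; the real instruction operates on 16 lanes at once and wraps on 32-bit overflow):

    def vpdpbusd_lane(acc, a4, b4):
        """Model of one 32-bit lane of VPDPBUSD: four unsigned-byte x
        signed-byte products summed into a 32-bit accumulator."""
        assert len(a4) == len(b4) == 4
        return acc + sum(u * s for u, s in zip(a4, b4))

    # example lane: acc = 0, a = unsigned bytes, b = signed bytes
    print(vpdpbusd_lane(0, [1, 2, 3, 4], [10, -10, 5, -5]))  # -> -15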
|
| ▲ | Narew 7 hours ago | parent | prev | next [-] |
| There were, in the past.
Google had the Coral TPU and Intel the Neural Compute Stick (NCS).
The NCS is from 2018, so it's really outdated now.
Both were mainly oriented toward edge computing, so the FLOPS were not comparable to a desktop computer. |
| |
| ▲ | moffkalast 6 hours ago | parent [-] | | Even for edge computing, neither was really capable of keeping up with the slowest Jetson's GPU, and for not much less power draw. |
|
|
| ▲ | tucnak 7 hours ago | parent | prev [-] |
| It all comes down to memory and fabric bandwidth. For example, the state-of-the-art developer-friendly (PCIe 5.0) FPGA platform is the Alveo V80, which rocks four 200G NICs. Basically, Alveo currently occupies the niche of being the only platform on the market to allow programmable in-network compute. However, what's available in terms of bandwidth lags behind even pathetic platforms like BlueField. Those in the know are aware of the challenges involved in actually saturating it for inference in practical designs. I think Xilinx is super well-positioned here, but without some solid hard IP it's still a far cry from purpose-built silicon. |
| |
| ▲ | mrinterweb 7 hours ago | parent [-] | | As far as I understand, all the purpose-built inference silicon out there is not being sold to competitors and is kept in-house: Google's TPU, Amazon's Inferentia (horrible name), Microsoft's Maia, Meta's MTIA. It seems that custom inference silicon is a huge part of the AI game. I doubt GPU-based inference will remain relevant/competitive for long. | | |
| ▲ | nightshift1 6 hours ago | parent | next [-] | | According to this semianalysis article, the Google/Broadcom TPU are being sold to others like Anthropic. https://newsletter.semianalysis.com/p/tpuv7-google-takes-a-s... | |
| ▲ | nomel 6 hours ago | parent | prev | next [-] | | > It seems that custom inference silicon is a huge part of the AI game. Is there any public info about % inference on custom vs GPU, for these companies? | | |
| ▲ | mrinterweb 6 hours ago | parent [-] | | Gemini is likely the most widely used gen AI model in the world, considering search, Android integration, and countless other integrations into the Google ecosystem. Gemini runs on Google's custom TPU chips. So I would say a large portion of inference is already running on ASICs. https://cloud.google.com/tpu |
| |
| ▲ | almostgotcaught 6 hours ago | parent | prev [-] | | > soon When people say things like this I always wonder if they really think they're smarter than all of the people at Nvidia lolol | | |
| ▲ | mrinterweb 6 hours ago | parent [-] | | "Soon" was the wrong word; I should have said it is already happening. Google Gemini already runs on Google's own TPU chips. Nvidia just dropped $20B to buy the IP for Groq's LPU (custom silicon for inference). $20B says Nvidia sees the writing on the wall for GPU-based inference. https://www.tomshardware.com/tech-industry/semiconductors/nv... | |
| ▲ | almostgotcaught 5 hours ago | parent [-] | | There are so many people on here that are outsiders commenting way out of their depth: > Google Gemini already uses their own TPU chips Google has been using TPUs in prod for like a decade. |
|
|
|
|