| ▲ | ACCount37 5 hours ago |
What uses, exactly? The prototype is silicon with Llama 3.1 8B etched into it, and today's 4B models already outperform it. A five-digit token rate is a major technical flex, but does anyone really need to run a very dumb model at this speed? The only things that come to mind that could reap a benefit are asymmetric exotics like VLA action policies and the voice stages of V2V models. Both of those are a "small fast low-latency model backed by a large smart model" setup (sketched below), and both depend on model-to-model comms, which this doesn't demonstrate. In a way, it's an I/O accelerator rather than an inference engine. At best.
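For concreteness, a minimal sketch of the cascade pattern the comment describes: a fast local model answers immediately and escalates to a large backing model when unsure. Everything here is illustrative, not the real system: the function names, the confidence heuristic, the threshold, and the stub replies are all assumptions.

    # Hypothetical cascade: a fast local model answers and defers to a
    # large remote model when unsure. Names, the confidence heuristic,
    # and the stub replies are illustrative only.

    def small_fast_model(prompt: str) -> tuple[str, float]:
        # Stand-in for the on-chip 8B model: returns (reply, confidence).
        return "stub reply", 0.5

    def large_smart_model(prompt: str) -> str:
        # Stand-in for a big, capable model behind a slow API call.
        return "considered reply"

    def respond(prompt: str, threshold: float = 0.8) -> str:
        reply, confidence = small_fast_model(prompt)
        if confidence >= threshold:
            return reply                  # fast path: local, microseconds
        return large_smart_model(prompt)  # slow path: escalate

    print(respond("turn left or right at the fork?"))

The interesting part is the handoff on the slow path, i.e. the model-to-model comms, which is exactly what the comment says the prototype doesn't demonstrate.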
| ▲ | MITSardine 4 hours ago |
With LLMs this fast, you could imagine using them as any old function in programs.
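A minimal sketch of that idea, assuming a local OpenAI-compatible chat endpoint at localhost:8000; the URL, model name, and prompt are placeholders, not anything the prototype exposes. The point is the shape: at microsecond latency, an LLM call can be wrapped so it reads like any other function.

    import json
    import urllib.request

    # Hypothetical local endpoint; a chip like this would make the round
    # trip cheap enough to treat the call as an ordinary function.
    ENDPOINT = "http://localhost:8000/v1/chat/completions"

    def llm(prompt: str, model: str = "llama-3.1-8b") -> str:
        """Call a local LLM and return its reply, like any other function."""
        payload = json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": 16,
            "temperature": 0,
        }).encode()
        req = urllib.request.Request(
            ENDPOINT, data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            body = json.load(resp)
        return body["choices"][0]["message"]["content"].strip()

    # Used inline, like a library call; only plausible if the round trip
    # costs roughly what a regex or a dict lookup does today.
    label = llm("One word, positive or negative: 'the battery died in an hour'")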
| ▲ | leoedin 4 hours ago |
Even if this first generation isn't useful, the learning and architecture decisions behind it will be. You really can't think of any value in a chip that runs LLMs locally, at high speed, for a tenth of the energy budget and (presumably) significantly lower cost than a GPU? If you look at the history of computing, moving a settled workload from general-purpose hardware onto ASICs is the next step. It seems almost inevitable. Yes, it will always trail the state of the art. But the value will come quickly within a few generations.