johndough 4 hours ago

> So the size of GPT-5.3-Codex-Spark isn't limited by the memory of a single Cerebras chip, but the number of such chips that you can chain together and still hit the 1000 tokens per second target.

Chaining chips does not decrease token throughput. In theory, you could run models of any size on Cerebras chips. See, for example, Groq's (not to be confused with Grok) chips, which have only 230 MB of SRAM each, yet manage to run Kimi K2.
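
For a rough sense of scale, here is a back-of-envelope sketch in Python. The 1T parameter count and 1 byte per weight are assumptions for illustration; 44 GB is the published on-chip SRAM of a Cerebras WSE-3, and 230 MB is the Groq figure above.

    # Back-of-envelope sketch: how many SRAM-only chips it takes to hold a model's
    # weights when the layers are pipelined across chips. All inputs are assumptions.
    import math

    def chips_needed(params_billions: float, bytes_per_param: float, sram_gb_per_chip: float) -> int:
        """Number of chips required to keep every weight resident in on-chip SRAM."""
        weight_bytes = params_billions * 1e9 * bytes_per_param
        sram_bytes = sram_gb_per_chip * 1e9
        return math.ceil(weight_bytes / sram_bytes)

    # A hypothetical 1T-parameter model stored at 1 byte per weight:
    print(chips_needed(1000, 1.0, 44))    # ~23 Cerebras-class chips (44 GB SRAM each)
    print(chips_needed(1000, 1.0, 0.23))  # ~4348 Groq-class chips (230 MB SRAM each)

The chip count grows with model size, but as long as the pipeline keeps every stage busy, the per-token latency through each chip stays the same, so throughput doesn't drop.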

EdNutting 4 hours ago | parent [-]

Only if chip-to-chip communication is as fast as on-chip communication. Which it isn’t.

johndough 3 hours ago | parent | next [-]

Only if chip-to-chip communication is a bottleneck. Which it isn't.

If a layer fits entirely in SRAM (as is probably the case for Cerebras), you only have to communicate the hidden state between chips for each token. The hidden state is very small (7168 floats for DeepSeek-V3.2: https://huggingface.co/deepseek-ai/DeepSeek-V3.2/blob/main/c... ), so it won't be a bottleneck.
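
To put numbers on that (a minimal sketch; bf16 activations are an assumption, the hidden size is the DeepSeek-V3.2 figure, and 1000 tok/s is the target mentioned in the article):

    # Per-token chip-to-chip traffic in a layer-pipelined setup.
    hidden_size = 7168          # floats per hidden state (DeepSeek-V3.2 config)
    bytes_per_value = 2         # bf16 (assumption)
    tokens_per_second = 1000    # the throughput target discussed upthread

    bytes_per_token = hidden_size * bytes_per_value          # ~14 KiB
    bandwidth_needed = bytes_per_token * tokens_per_second   # ~14 MB/s per chip boundary

    print(f"{bytes_per_token / 1024:.1f} KiB per token")
    print(f"{bandwidth_needed / 1e6:.1f} MB/s per chip-to-chip link")

That's roughly 14 MB/s per link, orders of magnitude below what any modern interconnect provides.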

Things get more complicated if a layer does not fit in SRAM, but it still works out fine in the end.

littlestymaar 3 hours ago | parent | prev [-]

It doesn't need to be: during inference there's little data exchange between chips (just a single embedding vector per token).

It's completely different during training, because the backward pass and weight update put a lot of strain on inter-chip communication, but during inference even a x4 PCIe 4.0 link is enough to connect GPUs together without losing speed.
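
A quick sanity check of the PCIe claim (sketch only; the batch size is made up, bf16 is assumed, and the hidden size is borrowed from the DeepSeek-V3.2 figure above):

    # Pipeline-parallel inference ships one hidden-state vector per token per GPU boundary.
    hidden_size = 7168            # e.g. DeepSeek-V3.2 hidden dim
    bytes_per_value = 2           # bf16 (assumption)
    tokens_per_second = 1000
    batch_size = 64               # concurrent sequences (made-up serving load)

    traffic = hidden_size * bytes_per_value * tokens_per_second * batch_size  # bytes/s

    # PCIe 4.0: 16 GT/s per lane, 128b/130b encoding, 4 lanes, per direction.
    pcie4_x4 = 4 * 16e9 * (128 / 130) / 8   # ~7.9 GB/s usable

    print(f"needed: {traffic / 1e9:.2f} GB/s, available: {pcie4_x4 / 1e9:.2f} GB/s")
    # ~0.92 GB/s needed vs ~7.9 GB/s available, so the link is far from saturated.

Even with a fairly large batch, the activations crossing each boundary use a small fraction of a narrow PCIe link, which is why pipeline-parallel inference tolerates slow interconnects that training cannot.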