mft_ 4 hours ago

> So how much internal memory does the latest Cerebras chip have? 44GB. This puts OpenAI in kind of an awkward position. 44GB is enough to fit a small model (~20B params at fp16, ~40B params at int8 quantization), but clearly not enough to fit GPT-5.3-Codex. That’s why they’re offering a brand new model, and why the Spark model has a bit of “small model smell” to it: it’s a smaller distil of the much larger GPT-5.3-Codex model.

This doesn't make sense.

1. Nvidia already sells e.g. the H100 with 80GB memory, so having 44GB isn't an advance, let alone a differentiator.

2. As I suspect anyone who's played with open-weights models will attest, there's no way 5.3-Codex-Spark gets this close to top-level performance, and gets sold this way, while fitting in <44GB. Yes it's weaker, and it's almost certainly a smaller distil, but not ~two orders of magnitude smaller as suggested.
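
For reference, the arithmetic behind the quoted size claim (weights only, ignoring KV cache and runtime overhead; a back-of-envelope sketch, not a statement about the actual model):

    # How many parameters fit in a 44 GB memory budget, counting weights only?
    BUDGET_GB = 44
    bytes_per_param = {"fp16": 2, "int8": 1, "int4": 0.5}

    for fmt, nbytes in bytes_per_param.items():
        max_params_b = BUDGET_GB * 1e9 / nbytes / 1e9  # billions of parameters
        print(f"{fmt}: ~{max_params_b:.0f}B params max")

    # fp16: ~22B, int8: ~44B, int4: ~88B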

EdNutting 4 hours ago | parent | next [-]

You’re mixing up HBM and SRAM - which is an understandable confusion.

NVIDIA chips use HBM (High Bandwidth Memory) which is a form of DRAM - each bit is stored using a capacitor that has to be read and refreshed.

Most chips have caches on them built out of SRAM - a feedback loop of transistors that store each bit.

The big differences are access time, power, and density: SRAM is ~100 times faster than DRAM, but DRAM uses much less power per gigabyte and takes far less silicon area per gigabyte of stored data.

Most processors have a few MB of SRAM as caches. Cerebras is kind of insane in that they’ve built one massive wafer-scale chip with a comparative ocean of SRAM (44GB).

In theory that gives them a big performance advantage over HBM-based chips.

As with any chip design though, it really isn’t that simple.

stingraycharles 4 hours ago | parent | next [-]

So what you’re saying is that Cerebras chips offer 44GB of what is comparable to L1 caches, while NVidia is offering 80GB of what is comparable to “fast DRAM” ?

EdNutting 4 hours ago | parent [-]

Sort of. But SRAM is not all made equal - L1 caches are small because they’re fast, and vice-versa L3 SRAM caches are slow because they’re big.

Addressing a large amount of SRAM requires roughly log(N) levels of logic just to decode the address (a gross approximation). Every lookup has to propagate through that extra logic, hence large = slow.
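
As a rough illustration of that log(N) point, treating the decoder as a simple binary tree of gates (a deliberately crude model, as noted above; the word size is an assumption):

    from math import log2

    WORD_BYTES = 8  # assume 64-bit words

    # ~L1-sized, ~L3-sized, and Cerebras-scale SRAM capacities
    for size_bytes in (64 * 1024, 40 * 1024**2, 44 * 1024**3):
        words = size_bytes // WORD_BYTES
        print(f"{size_bytes / 1024**2:>8.1f} MiB -> ~{log2(words):.0f} gate levels of address decode")

    # Roughly 13 levels for a 64 KiB cache vs ~32 levels at 44 GiB, before
    # even counting the wiring between banks.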

It’s also not one pool of SRAM. It’s thousands of small SRAM groups spread across the chip, with communication pathways in between.

So 44GB of SRAM is a very different architecture from 80GB of (unified) HBM (although even that isn’t really unified, as most chips use multiple external memory interfaces).

HBM is high bandwidth. Whether that’s “fast” or not depends on the trade-off between bandwidth and latency.

So, what I’m saying is this is way more complicated than it seems. But overall, yeah, Cerebras’ technical strategy is “big SRAM means more fast”, and they’ve not yet proven whether that’s technically true or whether it makes economic sense.

SkiFire13 2 hours ago | parent | next [-]

> L1 caches are small because they’re fast

I guess you meant to say they are fast because they are small?

mft_ 2 hours ago | parent | prev [-]

Thanks, TIL.

aurareturn 4 hours ago | parent | prev [-]

It does make sense. Nvidia chips do not promise 1,000+ tokens/s. The 80GB is external HBM, unlike Cerebras’ 44GB internal SRAM.

The whole reason Cerebras can run inference on a model at thousands of tokens per second is that it hosts the entire model in SRAM.
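
A crude way to see why that matters for single-stream decode speed. The bandwidth figures below are ballpark published numbers, not measurements, and the model size is hypothetical:

    # Single-stream decode is roughly memory-bandwidth bound:
    # tokens/s upper bound ~= memory bandwidth / bytes of weights read per token
    MODEL_GB = 40  # hypothetical dense model, e.g. ~40B params at int8

    systems = {
        "HBM-class GPU (~3,000 GB/s)": 3_000,
        "Cerebras on-wafer SRAM (~20,000,000 GB/s claimed)": 20_000_000,
    }
    for name, bw_gb_s in systems.items():
        print(f"{name}: ~{bw_gb_s / MODEL_GB:,.0f} tokens/s upper bound")

Real systems batch and pipeline, so these are only crude upper bounds, but they show why keeping the weights in SRAM makes 1,000+ tokens/s plausible.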

There are two possible scenarios for Codex Spark:

1. OpenAI designed a model to fit within a single wafer’s 44GB.

2. OpenAI designed a model that requires Cerebras to chain multiple wafer-scale chips together, i.e. an 88GB, 132GB, or 176GB model, or more.

Both options require the entire model to fit inside SRAM.

woadwarrior01 3 hours ago | parent [-]

Let's not forget the KV cache, which needs a lot of memory too (although not as much as the model weights) and scales linearly with sequence length.
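
A quick illustration with made-up but plausible dimensions (a GQA-style config; every number here is an assumption for the sake of the arithmetic):

    # KV cache bytes = 2 (K and V) * layers * kv_heads * head_dim * seq_len * bytes_per_element
    def kv_cache_gb(seq_len, layers=48, kv_heads=8, head_dim=128, bytes_per_elem=2):
        return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem / 1e9

    for seq_len in (8_192, 32_768, 131_072):
        print(f"{seq_len:>7} tokens -> ~{kv_cache_gb(seq_len):.1f} GB per sequence")

So at long context even a modest model eats a noticeable slice of a 44GB budget on KV cache alone, multiplied by however many sequences are in flight.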