ColonelPhantom a day ago

GPT-OSS is tailored to be extremely memory efficient. Not only does it natively use the 4.25-bits-per-parameter MXFP4 format, it also uses sliding window attention for half of its layers. It also doesn't have that many layers: only 36 for the 120B version and 24 for the 20B version. (The 120B is also much, much sparser than the 20B.)
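Back-of-the-envelope, the 4.25-bits-per-parameter figure pins down the weight footprint; a quick sketch (illustrative arithmetic only, not measured numbers):

```python
def weight_gb(params_billion: float, bits_per_param: float = 4.25) -> float:
    """Approximate weight memory in GB at MXFP4's ~4.25 bits per parameter."""
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

print(round(weight_gb(120), 1))  # ~63.8 GB for the 120B model
print(round(weight_gb(20), 1))   # ~10.6 GB for the 20B model
```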

I found a Reddit comment claiming the KV cache needs only 36 KiB per token. At that rate, half a million tokens fit in about 18 GB, which is less than one GPU's worth of memory. And three 24 GB GPUs hold the parameters with room to spare (roughly 64 of 72 GB).
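The context-memory arithmetic checks out; a sketch assuming the unverified ~36 KiB/token figure from that Reddit comment:

```python
KIB = 1024

def kv_cache_gib(tokens: int, bytes_per_token: int = 36 * KIB) -> float:
    """KV-cache size in GiB, assuming a fixed per-token footprint."""
    return tokens * bytes_per_token / (1024 ** 3)

print(round(kv_cache_gib(500_000), 1))  # ~17.2 GiB (~18 GB) for half a million tokens
```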