tyfon 10 hours ago

I didn't really understand the performance table until I saw the top ones were 8B models.

But 5 seconds/token is quite slow, yeah. I guess this is for low-RAM machines? I'm pretty sure my 5950X with 128 GB of RAM can run this faster on the CPU, with some layers / prefill on the 3060 GPU I have.

I also see that they claim the process is compute-bound at 2 seconds/token, but that doesn't seem right with a 3090?

tgrowazay 10 hours ago | parent [-]

LLM decode speed is roughly <memory_bandwidth> / <model_size> tok/s, since every weight has to be read from memory for each generated token.

DDR4 tops out at about 27 GB/s

DDR5 can do around 40 GB/s

So for a 70B model at 8-bit quant (~70 GB of weights), you will get around 0.3-0.5 tokens per second from RAM alone.
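
Back-of-the-envelope sketch of that arithmetic in Python, using the rough bandwidth figures above (real sustained bandwidth will be somewhat lower):

    # Decode speed ~= memory bandwidth / bytes read per token,
    # since all the weights are streamed from memory for each token.
    def tokens_per_second(bandwidth_gb_s, params_billions, bytes_per_param=1.0):
        model_gb = params_billions * bytes_per_param  # 8-bit quant ~= 1 byte/param
        return bandwidth_gb_s / model_gb

    print(tokens_per_second(27, 70))  # DDR4: ~0.39 tok/s
    print(tokens_per_second(40, 70))  # DDR5: ~0.57 tok/s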

uf00lme 9 hours ago | parent | next [-]

Channels matter a lot: quad-channel DDR4 is going to beat dual-channel DDR5 most of the time.

wtallis 8 hours ago | parent [-]

Four channels of DDR4-3200 vs two channels of DDR5-6400 (four subchannels) should come out pretty close. I don't see any reason why the DDR4 configuration would be consistently faster; you might have more bank groups on DDR4, but I'm not sure that would outweigh other factors like the topology and bandwidth of the interconnects between the memory controller and the CPU cores.
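
Running the peak numbers (8 bytes per transfer on a 64-bit channel), they come out identical:

    # Peak theoretical bandwidth = transfers/s * 8 bytes per 64-bit channel.
    ddr4_3200 = 3200e6 * 8 / 1e9   # 25.6 GB/s per channel
    ddr5_6400 = 6400e6 * 8 / 1e9   # 51.2 GB/s per channel (2x 32-bit subchannels)
    print(4 * ddr4_3200)  # 102.4 GB/s
    print(2 * ddr5_6400)  # 102.4 GB/s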

someguy2026 9 hours ago | parent | prev | next [-]

DRAM speed is one thing, but you should also account for the data rate of the PCIe bus (and/or VRAM speed). But yes, holding it "lukewarm" in DRAM rather than on NVMe storage is obviously faster.
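
For a rough sense of the ceilings, whichever link the weights cross each token is the bottleneck (round-number bandwidths below are assumptions, not measurements):

    # Weights streamed over the slowest link set the tok/s ceiling.
    links = {
        "NVMe (PCIe 4.0 x4)": 7,   # GB/s, approximate
        "PCIe 3.0 x16": 16,
        "PCIe 4.0 x16": 32,
    }
    model_gb = 70  # 70B params at 8-bit quant
    for name, bw in links.items():
        print(f"{name}: ~{bw / model_gb:.2f} tok/s ceiling")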

vlovich123 10 hours ago | parent | prev | next [-]

Faster than the 0.2 tok/s this approach manages.

zozbot234 9 hours ago | parent | prev | next [-]

Should be active param size, not model size.
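
E.g., reusing the estimate above with a hypothetical MoE model (the 100B/12B split below is made up for illustration):

    # In a MoE, only the active params are read per token, so the
    # denominator shrinks. Hypothetical sizes, 8-bit quant (1 byte/param).
    bandwidth_gb_s = 40   # the DDR5 figure from above
    total_gb = 100        # total param size (hypothetical)
    active_gb = 12        # active param size (hypothetical)
    print(bandwidth_gb_s / total_gb)   # 0.4 tok/s if you wrongly use total size
    print(bandwidth_gb_s / active_gb)  # ~3.3 tok/s using active size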

xaskasdf 8 hours ago | parent | prev [-]

Yeah, actually I'm badly bottlenecked since my mobo only has PCIe 3 :(