| ▲ | randomtoast 11 hours ago |
| 0.2 tok/s is fine for experimentation, but it is not interactive in any meaningful sense. For many use cases, a well-quantized 8B or 13B model that stays resident will simply deliver a better latency-quality tradeoff. |
|
| ▲ | xaskasdf 9 hours ago | parent | next [-] |
| yeah, actually I wanted to see if this was possible at all. I managed to get around 3000 tokens/s on a PS2 with classic transformers, since the Emotion Engine is capable of 32-bit addresses, but it has like 32gb of ram. So I ran into the question of why that was fast when I couldn't get that speed even with small models, and the deal is that the instructions went right from memory to the GPU; that's the main difference with how a regular computer does inference: it has to request the instructions through the CPU every time. As I mentioned too, on professional cards you can avoid these problems naturally, since they have instructions precisely for this, but sadly I don't have 30k bucks to spare on a GPU :( |
| |
| ▲ | derstander 8 hours ago | parent | next [-] | | *32MB of RAM (plus 4MB of video RAM and a little sound and IOP memory). | |
| ▲ | eleventyseven 4 hours ago | parent | prev | next [-] | | > I don't have 30k bucks to spare on a gpu :( Do you have $2/hr to rent an RTX 6000 96GB or $5/hr for B200 180GB on the cloud? | | |
| ▲ | superkuh 4 hours ago | parent [-] | | I'd rather not give money to scalper barons if I can avoid it. Fab capacity is going toward hardware for rental rather than hardware for humans. |
| |
| ▲ | anoncow 6 hours ago | parent | prev [-] | | 3000 tokens per sec on 32 MB of RAM? | | |
| ▲ | fc417fc802 5 hours ago | parent [-] | | fast != practical. You can get lots of tokens per second on the CPU if the entire network fits in L1 cache. Unfortunately the sub-64 kiB model segment isn't looking so hot. But actually ... 3000? Did GP misplace one or two zeros there? |
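For scale, a quick sketch of what "fits in L1" would mean in parameter count, assuming a 64 KiB data cache and counting only the weights (an illustration, not a claim about any real core):

    # How many parameters fit in a 64 KiB L1 data cache at a given precision?
    # Assumes weights are the only thing resident, which is optimistic.
    def params_that_fit(cache_bytes, bytes_per_param):
        return cache_bytes // bytes_per_param

    print(params_that_fit(64 * 1024, 1))  # int8: 65,536 parameters
    print(params_that_fit(64 * 1024, 4))  # fp32: 16,384 parameters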
|
|
|
| ▲ | Wuzado 10 hours ago | parent | prev | next [-] |
| I can imagine a couple of scenarios in which a high-quality, large model would be much preferred over lower-latency models, primarily when quality matters more than speed. |
|
| ▲ | fluoridation 6 hours ago | parent | prev | next [-] |
| That's slower than just running it off CPU+GPU. I can easily hit 1.5 tokens/s on a 7950X+3090 with a 20480-token context. |
|
| ▲ | tyfon 10 hours ago | parent | prev [-] |
| I didn't really understand the performance table until I saw the top ones were 8B models. But 5 seconds/token is quite slow, yeah. I guess this is for low-RAM machines? I'm pretty sure my 5950X with 128 GB of RAM can run this faster on the CPU with some layers / prefill on the 3060 GPU I have. I also see that they claim the process is compute-bound at 2 seconds/token, but that doesn't seem correct with a 3090? |
| |
| ▲ | tgrowazay 10 hours ago | parent [-] | | LLM decode speed is roughly <memory_bandwidth> / <model_size> tok/s. DDR4 tops out at about 27 GB/s; DDR5 can do around 40 GB/s. So for a 70B model at 8-bit quant, you will get around 0.3-0.5 tokens per second using RAM alone. | | |
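A rough back-of-envelope sketch of that estimate, plugging in the approximate bandwidth figures quoted above (illustrative only, not measurements):

    # Decode-speed estimate: each generated token streams all active weights
    # from memory once, so tok/s ~= memory bandwidth / model size.
    def tokens_per_sec(bandwidth_gb_s, params_billions, bytes_per_param):
        model_size_gb = params_billions * bytes_per_param  # 70B at 8-bit ~ 70 GB
        return bandwidth_gb_s / model_size_gb

    print(tokens_per_sec(27, 70, 1))  # DDR4: ~0.39 tok/s
    print(tokens_per_sec(40, 70, 1))  # DDR5: ~0.57 tok/s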
| ▲ | uf00lme 9 hours ago | parent | next [-] | | Channels matter a lot; quad-channel DDR4 is going to beat dual-channel DDR5 most of the time. | | |
| ▲ | wtallis 8 hours ago | parent [-] | | Four channels of DDR4-3200 vs two channels of DDR5-6400 (four subchannels) should come out pretty close. I don't see any reason why the DDR4 configuration would be consistently faster; you might have more bank groups on DDR4, but I'm not sure that would outweigh other factors like the topology and bandwidth of the interconnects between the memory controller and the CPU cores. |
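For reference, a sketch of the theoretical peak figures behind that comparison, assuming an 8-byte-wide bus per DDR4 channel and per DDR5 DIMM (two 32-bit subchannels); real-world throughput will be lower:

    # Peak DRAM bandwidth: transfer rate (MT/s) x bus width (bytes) x channels.
    def peak_gb_s(mt_per_s, bus_bytes, channels):
        return mt_per_s * bus_bytes * channels / 1000

    print(peak_gb_s(3200, 8, 4))  # quad-channel DDR4-3200: ~102.4 GB/s
    print(peak_gb_s(6400, 8, 2))  # dual-channel DDR5-6400: ~102.4 GB/s

On paper the two configurations land in the same place, which is why the outcome comes down to the secondary factors mentioned above.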
| |
| ▲ | someguy2026 9 hours ago | parent | prev | next [-] | | DRAM speed is one thing, but you should also account for the data rate of the PCIe bus (and/or VRAM speed). But yes, holding it "lukewarm" in DRAM rather than on NVMe storage is obviously faster. | |
| ▲ | vlovich123 10 hours ago | parent | prev | next [-] | | Faster than the 0.2tok/s this approach manages | |
| ▲ | zozbot234 9 hours ago | parent | prev | next [-] | | Should be active param size, not model size. | |
| ▲ | xaskasdf 8 hours ago | parent | prev [-] | | yeah, actually, I'm bottlenecked af since my mobo only has PCIe 3 :( |
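For a sense of scale on that bottleneck, a sketch of effective x16 link bandwidth, assuming 128b/130b encoding and ignoring protocol overhead (theoretical ceilings, not measured throughput):

    # Effective x16 PCIe bandwidth: per-lane rate (GT/s) x encoding efficiency
    # x 16 lanes / 8 bits per byte.
    def pcie_x16_gb_s(gt_per_s, encoding=128 / 130):
        return gt_per_s * encoding * 16 / 8

    print(pcie_x16_gb_s(8))   # PCIe 3.0 x16: ~15.8 GB/s
    print(pcie_x16_gb_s(16))  # PCIe 4.0 x16: ~31.5 GB/s

PCIe 3 in particular sits below even the DRAM figures quoted upthread, which is why a gen-3 slot hurts whenever weights have to cross the bus.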
|
|