lostmsu 11 hours ago:
Regular models are very fast if you do batch inference. GPT-OSS 20B gets close to 2k tok/s on a single 3090 at bs=64 (might be misremembering details here).
rahimnathwani 8 hours ago (parent):
Right, but everyone else is talking about latency, not throughput.
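The distinction can be made concrete with back-of-envelope arithmetic. The numbers below are assumptions chosen to match the ~2k tok/s figure above, not measurements: batching multiplies aggregate throughput across all requests, but each individual stream still decodes one token per step, so per-request latency barely improves.

```python
# Toy numbers (assumed, not measured). Decode-step time is roughly
# batch-independent on a GPU until the batch saturates compute.
per_step_s = 0.032   # hypothetical time for one decode step
batch_size = 64

# Each request in the batch still sees only one new token per step.
single_stream_tps = 1 / per_step_s            # what a user experiences
aggregate_tps = batch_size * single_stream_tps  # what the server achieves

print(f"single-stream: {single_stream_tps:.0f} tok/s")
print(f"aggregate at bs={batch_size}: {aggregate_tps:.0f} tok/s")
# → single-stream: 31 tok/s
# → aggregate at bs=64: 2000 tok/s
```

So a 2k tok/s aggregate figure is consistent with each user waiting ~32 ms per token, which is why quoting batch throughput doesn't answer a latency question.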