alfonsodev | 3 hours ago
Good to hear! Do you mind sharing your setup and your tokens/second performance?

lreeves | 2 hours ago
I'm running the unquantized base model on 2x A6000s (Ampere generation, 48 GB each). It runs at about 25 tokens/second.

NitpickLawyer | an hour ago

FYI, they also released FP8 quants, and those should be faster on your setup (we have the same one). As long as you keep the KV cache at 16-bit, FP8 should be close to lossless compared to the 16-bit weights, while leaving room for more context and giving faster inference.
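For concreteness, here is a sketch of how that setup might look with vLLM, one common serving stack (the model name is a placeholder, and the exact flags depend on your vLLM version; none of this is from the thread):

```shell
# Hypothetical vLLM launch: FP8 weight checkpoint, KV cache kept in the
# model's native 16-bit dtype ("auto"), split across two A6000s.
# "some-org/model-FP8" is a placeholder, not a real checkpoint name.
vllm serve some-org/model-FP8 \
  --tensor-parallel-size 2 \
  --kv-cache-dtype auto \
  --max-model-len 32768
```

The key detail is `--kv-cache-dtype auto`, which leaves the cache at the model's 16-bit precision rather than also quantizing it to FP8, matching the "keep KV at 16-bit" advice above.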