GeekyBear 4 days ago |
Now that we know Apple has added tensor units to the GPU cores in the upcoming M5 series of chips, I'd be asking myself whether I shouldn't wait a bit.
t1amat 3 days ago | parent |
This is the right take. You might get decent token generation (2-3x slower than a GPU rig), which is adequate, but prompt processing is more like 50-100x slower. A hardware solution is needed to make long context actually usable on a Mac.
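To see why prompt processing is the bottleneck for long context, here is a rough back-of-envelope sketch of time-to-first-token. All throughput figures are illustrative assumptions for the sake of the arithmetic, not benchmarks of any specific machine:

```python
# Back-of-envelope: why prefill (prompt processing) speed dominates
# long-context latency. All throughput numbers are assumptions, not benchmarks.

def time_to_first_token(prompt_tokens: int, prefill_tps: float) -> float:
    """Seconds spent processing the prompt before the first output token."""
    return prompt_tokens / prefill_tps

prompt = 32_000        # a long-context prompt, in tokens

gpu_prefill = 5_000    # assumed GPU-rig prefill throughput, tokens/sec
mac_prefill = 100      # assumed Mac prefill throughput, ~50x slower

gpu_ttft = time_to_first_token(prompt, gpu_prefill)
mac_ttft = time_to_first_token(prompt, mac_prefill)

print(f"GPU rig: {gpu_ttft:.1f} s to first token")           # 6.4 s
print(f"Mac:     {mac_ttft:.1f} s ({mac_ttft/gpu_ttft:.0f}x slower)")  # 320.0 s
```

Even if decode speed were identical, a 50x gap in prefill turns a few seconds of waiting into several minutes on a 32k-token prompt, which is why dedicated matmul hardware matters here.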