| ▲ | om8 2 days ago |
To have GPU inference, you need a GPU. I have a demo that runs an 8B Llama on any computer with 4 GB of RAM: https://galqiwi.github.io/aqlm-rs/about.html
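For scale, a rough back-of-envelope sketch (the bit widths are illustrative; AQLM-style quantization targets roughly 2 bits per weight, and KV cache and activations are ignored here):

    # Memory needed just to hold 8B weights at different precisions.
    PARAMS = 8e9  # ~8 billion parameters

    for name, bits in [("fp16", 16), ("int8", 8), ("4-bit", 4), ("~2-bit", 2)]:
        gib = PARAMS * bits / 8 / 2**30
        print(f"{name:>7}: ~{gib:.1f} GiB")

    # fp16 ~14.9 GiB, int8 ~7.5 GiB, 4-bit ~3.7 GiB, ~2-bit ~1.9 GiB:
    # only the ~2-bit case leaves headroom for the OS inside 4 GB of RAM.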
|
| ▲ | adastra22 2 days ago | parent [-] |
| Any computer with a display has a GPU. |

| ▲ | om8 a day ago | parent [-] |
Sure, but integrated graphics usually has no dedicated VRAM for LLM inference; it borrows system RAM.
| ▲ | adastra22 a day ago | parent [-] |
Which means inference on the integrated GPU would run at approximately the same speed as the suggested CPU inference engine, just with the compute offloaded.
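A rough way to see why (a sketch with assumed, typical-looking bandwidth figures, not measurements): generating one token requires streaming roughly all of the weights through memory, and an integrated GPU reads them from the same system RAM as the CPU, so the bandwidth bound is the same.

    MODEL_BYTES = 2e9         # ~2 GB of quantized 8B weights
    SYSTEM_RAM_BW = 50e9      # assumed ~50 GB/s dual-channel DDR system RAM
    DISCRETE_VRAM_BW = 900e9  # assumed ~900 GB/s high-end discrete GPU VRAM

    for name, bw in [("CPU / iGPU (shared system RAM)", SYSTEM_RAM_BW),
                     ("discrete GPU (dedicated VRAM)", DISCRETE_VRAM_BW)]:
        # Upper bound: every weight byte is read once per generated token.
        print(f"{name}: <= ~{bw / MODEL_BYTES:.0f} tokens/s")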
|
|