johnisgood 6 days ago
Hardware requirements? "What you need" only includes software requirements.
DrAwdeOccarim 6 days ago
The author says 36 GB of unified RAM in the article. I run an M3 Pro with the same memory and use LM Studio daily with various models, up to the 30B-parameter one listed, and it flies. I can't differentiate my OAI chats from the local ones, aside from more current knowledge, though I have a Puppeteer MCP which works well for web search and site reading.
jszymborski 6 days ago
30B runs at a reasonable speed on my desktop, which has an RTX 2080 (8 GB VRAM) and 32 GB of RAM.
Havoc 6 days ago
A 30B-class model should run on a consumer 24 GB card when quantised, though you'd need a pretty aggressive quant to make room for context. Don't think you'll get the full 256K context, though. So, about 700 bucks for a 3090 on eBay.
| ||||||||
thecolorblue 6 days ago
I am running it on an M1 Max.