redman25 | 6 days ago
You could always run your own server locally if you have a decent GPU. Some of the smaller LLMs are getting pretty good.
theshrike79 | 4 days ago
Also, M-series Macs have an insane price/performance/power-consumption ratio for LLM use cases. Any M-series Mac mini can run a pretty good local model at usable speed, and the high-end models easily compete with dedicated GPUs.
n_ary | 5 days ago
Correct. My dusty Intel NUC is able to run a decent 3B model (thanks to Ollama) with fans spinning, without affecting any other running applications. It is very useful for local hobby projects. Visible lag and freezes begin if I start a 5B+ model locally.
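For anyone curious how simple the local setup is, here's a minimal sketch of querying an Ollama server from Python over its default REST API on port 11434. It assumes `ollama serve` is running and that you've already pulled a small model; the `llama3.2:3b` tag and the prompt are just illustrative choices.

    import json
    import urllib.request

    def ask(prompt: str, model: str = "llama3.2:3b") -> str:
        """Send a single non-streaming generation request to a local Ollama server."""
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",  # Ollama's default local endpoint
            data=json.dumps(
                {"model": model, "prompt": prompt, "stream": False}
            ).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            # The non-streaming response is a single JSON object
            # whose "response" field holds the generated text.
            return json.loads(resp.read())["response"]

    print(ask("Summarize why local LLMs help privacy in one sentence."))

Nothing leaves your machine: the request and the model's weights both stay local, which is the whole privacy argument.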
stunnAR | 4 days ago
Yes, of course. That's been my experience with "ultimate" privacy.