jftuga 4 days ago
I have a MacBook Air M4 with 32 GB. What LM Studio models would you recommend for:

* General Q&A
* Programming, mostly Python and Go

I forget the command now, but I ran one that let macOS allocate maybe 28 GB of RAM to the GPU for use with LLMs.
frontsideair 3 days ago
This is probably the command:
Source: https://github.com/ggml-org/llama.cpp/discussions/15396
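The command itself didn't survive in the comment above, but the linked llama.cpp discussion covers raising the macOS GPU wired-memory limit via `sysctl`. A sketch, assuming the `iogpu.wired_limit_mb` key from that thread and the ~28 GB figure mentioned earlier:

```shell
# Assumption: this is the sysctl tweak from the llama.cpp discussion linked above.
# Raise the macOS GPU wired-memory limit to ~28 GB (28 * 1024 = 28672 MB).
# The setting does not persist across reboots.
sudo sysctl iogpu.wired_limit_mb=28672
```

Leave a few GB for the OS itself; setting the limit too close to total RAM can make the machine unstable.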
DrAwdeOccarim 4 days ago
I adore Qwen3 30B A3B 2507. It's pretty easy to write an MCP server that lets it search the web with a Brave API key. I run it on my MacBook Pro M3 Pro with 36 GB.
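A search tool like that mostly boils down to one HTTP call. A minimal sketch of the helper you'd expose from the MCP server, assuming the Brave Search `web/search` endpoint and an API key in `BRAVE_API_KEY` (the MCP wiring itself, e.g. registering this as a tool, is omitted):

```python
# Sketch of a web-search helper that an MCP server could expose as a tool.
# Assumptions: the Brave Search endpoint and header below match the current
# API, and BRAVE_API_KEY is set in the environment.
import json
import os
import urllib.parse
import urllib.request

BRAVE_ENDPOINT = "https://api.search.brave.com/res/v1/web/search"

def build_request(query: str, count: int = 5) -> urllib.request.Request:
    """Build the Brave Search request (pure; no network I/O)."""
    url = f"{BRAVE_ENDPOINT}?{urllib.parse.urlencode({'q': query, 'count': count})}"
    return urllib.request.Request(
        url,
        headers={
            "Accept": "application/json",
            "X-Subscription-Token": os.environ.get("BRAVE_API_KEY", ""),
        },
    )

def brave_search(query: str, count: int = 5) -> list[dict]:
    """Return [{'title', 'url', 'snippet'}, ...] for the top results."""
    with urllib.request.urlopen(build_request(query, count), timeout=10) as resp:
        data = json.load(resp)
    return [
        {"title": r.get("title"), "url": r.get("url"), "snippet": r.get("description")}
        for r in data.get("web", {}).get("results", [])
    ]
```

The model then calls the tool with a query string and gets back a compact list of titles, URLs, and snippets to ground its answer.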
balder1991 4 days ago
You'll certainly find better answers on /r/LocalLLaMA on Reddit for this.