EagnaIonat a day ago

The more recent LLMs work fine on an M1 Mac. Can't speak for Windows/Linux.

There was even a recent release of Granite4 that runs on a Raspberry Pi.

https://github.com/Jewelzufo/granitepi-4-nano

For my local work I use Ollama. (M4 Max 128GB)

- gpt-oss: 20b or 120b, depending on the complexity of the use case.

- granite4: for speed and lower-complexity tasks (roughly on par with gpt-oss 20b).
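If you're scripting against a local Ollama instance rather than using the CLI, a minimal sketch looks like the following. It assumes the default endpoint (`http://localhost:11434/api/generate`) and that the model has already been pulled with `ollama pull`; the model name is just an example.

```python
import json
import urllib.request

# Default Ollama HTTP endpoint for one-shot generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    # Non-streaming request body for Ollama's /api/generate endpoint
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    # Sends the request to a locally running Ollama server and
    # returns the generated text from the JSON response
    body = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example usage (requires Ollama running and the model pulled):
# print(generate("gpt-oss:20b", "Why is the sky blue?"))
```

Swapping in `granite4` (or any other pulled model tag) is just a change to the `model` argument.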