cyberax 5 hours ago

Hmm... I have a local Ollama setup on Linux+AMD, and it was only slightly more involved than this sample, and only because I wanted to run everything in a container.
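For reference, the containerized AMD setup is roughly this (a sketch based on Ollama's published ROCm image; volume and port names are just examples, and the `/dev/kfd` and `/dev/dri` device passthrough is what ROCm needs on the host):

```shell
# Run the ROCm build of Ollama, passing through the AMD GPU devices.
# --device /dev/kfd and /dev/dri expose the GPU to the container;
# the named volume keeps downloaded models across restarts.
docker run -d \
  --device /dev/kfd \
  --device /dev/dri \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama:rocm

# Then pull and run a model inside the container, e.g.:
docker exec -it ollama ollama run llama3.2
```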

If you mean that you can't just run the largest unquantized models, then that's indeed true.