ronb1964 18 hours ago

I have Ollama installed on my Linux desktop with Alpaca as the frontend, but honestly I haven't done much with it beyond poking around. I also built a local speech-to-text app with Claude Code that runs Whisper offline, so I'm clearly drawn to the idea of keeping AI on-device. I'm curious whether Gemma 4 would be a noticeable step up for someone using a local model for everyday tasks: writing, Q&A, that kind of thing. Is there a practical size recommendation for someone who isn't doing anything exotic and just wants a capable local model that doesn't require a supercomputer? And is there any advantage to hooking all this into Claude somehow, to broaden what it's currently capable of?