grim_io 2 hours ago

What do you mean it's on Ollama and requires an H100? As a proprietary Google model, it runs on their own hardware, not Nvidia's.

KaiserPro 2 hours ago | parent

Sorry, a lack of context:

https://ollama.com/library/gemini-3-pro-preview

You can run it on your own infra. Anthropic and OpenAI are running on Nvidia, and so are Meta (well, supposedly they have custom silicon, but I'm not sure it's capable of running big models) and Mistral.

However, if Google really is running its own inference hardware, then, as you say, the cost structure is different (developing silicon is not cheap...).

simonw 39 minutes ago | parent | next

You can't run Gemini 3 Pro Preview on your own infrastructure. Ollama sells access to cloud-hosted models these days. It's a little weird and confusing.

zozbot234 2 hours ago | parent | prev

That's a cloud-hosted model. It's about using Ollama as an API client (for compatibility with everything else that speaks the Ollama API, including local models), not about running that model on local infra. Google does release open models (called Gemma), but they're not nearly as capable.
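
For what it's worth, a minimal sketch of that distinction using the ollama Python client (pip install ollama). The model names are assumptions: "gemma3" stands in for a locally runnable open-weight Gemma model, the Gemini name is taken from the linked library page, and the cloud model presumably also requires an `ollama signin` first.

    # Minimal sketch: one client, two very different execution paths.
    # Assumes `pip install ollama` and a local Ollama daemon running.
    from ollama import Client

    client = Client(host="http://localhost:11434")  # the local Ollama daemon

    def ask(model: str, prompt: str) -> str:
        # One chat turn. The call is identical whether the model runs
        # locally or in Ollama's cloud -- that's the compatibility point.
        response = client.chat(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.message.content

    # Open-weight Gemma: pulled with `ollama pull gemma3`, runs on your GPU.
    print(ask("gemma3", "Why is the sky blue?"))

    # Gemini 3 Pro Preview (name assumed from the linked page): the same
    # client call, but inference happens on Ollama's servers, not your
    # hardware; an `ollama signin` step is presumably required first.
    print(ask("gemini-3-pro-preview", "Why is the sky blue?"))

The call site is identical either way; only where inference actually happens differs, which is exactly why it's easy to mistake the cloud model for something you're running yourself.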