minimaxir 4 hours ago

Price is unchanged from Gemini 3 Pro: $2/M input, $12/M output. https://ai.google.dev/gemini-api/docs/pricing

Knowledge cutoff is unchanged at Jan 2025. Gemini 3.1 Pro supports "medium" thinking where Gemini 3 did not: https://ai.google.dev/gemini-api/docs/gemini-3
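
For reference, requesting the new level looks roughly like this with the google-genai Python SDK; a minimal sketch, assuming ThinkingConfig exposes the thinking_level field described in the linked docs, and the model id is a guess:

    from google import genai
    from google.genai import types

    client = genai.Client()  # reads GEMINI_API_KEY from the environment

    # "medium" sits between the existing "low" and "high" thinking levels.
    response = client.models.generate_content(
        model="gemini-3.1-pro-preview",  # hypothetical model id
        contents="Explain the CAP theorem in two sentences.",
        config=types.GenerateContentConfig(
            thinking_config=types.ThinkingConfig(thinking_level="medium"),
        ),
    )
    print(response.text)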

Compare to Opus 4.6's $5/M input, $25/M output. If Gemini 3.1 Pro does indeed have similar performance, the price difference is notable.
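
To make the gap concrete, a back-of-envelope sketch in Python (the token counts are made up for illustration):

    def cost_usd(input_tokens, output_tokens, in_rate, out_rate):
        """Cost in USD given per-million-token rates."""
        return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

    # Example: a 50k-token-in / 5k-token-out request
    print(cost_usd(50_000, 5_000, 2, 12))  # Gemini 3.1 Pro -> $0.16
    print(cost_usd(50_000, 5_000, 5, 25))  # Opus 4.6 -> $0.375, ~2.3x more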

rancar2 3 hours ago | parent | next [-]

If we don't see a huge gain in long-horizon thinking reflected in Vending-Bench 2, I'm not going to switch away from Claude Code. Until Google can beat Anthropic on that front, Claude Code paired with the top long-horizon models will continue to pull away with full-stack optimizations at every layer.

agentifysh an hour ago | parent | prev | next [-]

Looks like it's cheaper than Codex? This might be interesting then.

TZubiri an hour ago | parent [-]

I don't think it's trained for agentic coding.

jbellis 3 hours ago | parent | prev | next [-]

still no minimal reasoning in G3.1P :(

(This is why Opus 4.6 is worth the price -- turning off thinking makes it 3x-5x faster, but it loses only a small amount of intelligence. Nobody else has figured that out yet.)
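
For the curious, toggling this in the Anthropic Messages API looks roughly like the following; a sketch, with the Opus 4.6 model id being a guess:

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY

    # Extended thinking on: the model reasons in a scratchpad first.
    slow = client.messages.create(
        model="claude-opus-4-6",  # hypothetical model id
        max_tokens=2048,
        thinking={"type": "enabled", "budget_tokens": 1024},
        messages=[{"role": "user", "content": "Refactor this parser."}],
    )

    # Thinking off: simply omit the `thinking` parameter.
    fast = client.messages.create(
        model="claude-opus-4-6",
        max_tokens=2048,
        messages=[{"role": "user", "content": "Refactor this parser."}],
    )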

sunaookami an hour ago | parent [-]

Thinking has always been tacked on for Anthropic's models, so leaving it off actually produces better results every time.

oblio 38 minutes ago | parent | prev | next [-]

> Knowledge cutoff is unchanged at Jan 2025.

Isn't that a bit old?

minimaxir 34 minutes ago | parent [-]

Old relative to its competitors, but the Search tool can compensate for it.
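
Concretely, grounding with Google Search is a per-request tool in the google-genai SDK, so stale weights can still pull in fresh facts; a minimal sketch (model id assumed):

    from google import genai
    from google.genai import types

    client = genai.Client()
    response = client.models.generate_content(
        model="gemini-3.1-pro-preview",  # hypothetical model id
        contents="What did Google announce for Gemini this week?",
        config=types.GenerateContentConfig(
            tools=[types.Tool(google_search=types.GoogleSearch())],
        ),
    )
    print(response.text)  # answer grounded in current search results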

plaidfuji 3 hours ago | parent | prev [-]

Sounds like the update is mostly system prompt plus changes to orchestration/tool use around the core model, if the knowledge cutoff is unchanged.
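
If so, those are all request-level knobs rather than new weights; e.g. with the google-genai SDK (a sketch, model id assumed):

    from google import genai
    from google.genai import types

    client = genai.Client()
    response = client.models.generate_content(
        model="gemini-3.1-pro-preview",  # hypothetical model id
        contents="Add a retry wrapper around fetch_data().",
        config=types.GenerateContentConfig(
            system_instruction="You are a coding agent. Make minimal, reviewable edits.",
        ),
    )
    print(response.text)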

sigmar 3 hours ago | parent | next [-]

The knowledge cutoff staying the same likely means they didn't do a new pre-train. We already knew there were plans from DeepMind to integrate new RL changes into the post-training of the weights. https://x.com/ankesh_anand/status/2002017859443233017

brokencode 3 hours ago | parent | prev [-]

This keeps getting repeated for all kinds of model releases, but it isn't necessarily true. It's possible to make substantial changes without updating the pretraining data set. You can't judge a model's newness by what it knows about.