Gemini 3.1 Pro(deepmind.google)
499 points by PunchTornado 3 hours ago | 342 comments
simonw 3 hours ago | parent | next [-]

Pretty great pelican: https://simonwillison.net/2026/Feb/19/gemini-31-pro/ - it took over 5 minutes, though I think that's because they're having performance teething problems on launch day.

jasonjmcghee an hour ago | parent | next [-]

What's crazy is you've influenced them to spend real effort ensuring their model is good at generating animated SVGs of animals operating vehicles.

The most absurd benchmaxxing.

https://x.com/jeffdean/status/2024525132266688757?s=46&t=ZjF...

casey2 a few seconds ago | parent | next [-]

You don't have to benchmax everything, just the benchmarks in the right social circles

threatofrain 20 minutes ago | parent | prev | next [-]

Animated SVG is huge. People in different professions worry to different degrees about being replaced by ML, but for digital art this is a big deal.

eurekin 37 minutes ago | parent | prev | next [-]

Can't wait until they finally get to real world CAD

tngranados 20 minutes ago | parent [-]

There's a CAD example in that same thread: https://x.com/JeffDean/status/2024528776856817813

gnatolf 37 minutes ago | parent | prev | next [-]

So let's put things we're interested in in the benchmarks.

I'm not against pelicans!

tantalor 43 minutes ago | parent | prev | next [-]

He's svg-mogging

UltraSane 29 minutes ago | parent | prev [-]

It is funny to think that Jeff Dean personally worked to optimize the pelican riding a bike benchmark.

embedding-shape 3 hours ago | parent | prev | next [-]

It's an excellent demonstration of the main issue I have with the Gemini family of models: they always go "above and beyond" to do a lot of stuff, even if I explicitly prompt against it. In this case, most of the SVG ends up consisting not just of a bike and a pelican, but clouds, a sun, a hat on the pelican, and so much more.

Exactly the same thing happens when you code: it's almost impossible to get Gemini to not do "helpful" drive-by-refactors, and it keeps adding code comments no matter what I say. Very frustrating experience overall.

mullingitover 2 hours ago | parent | next [-]

> it's almost impossible to get Gemini to not do "helpful" drive-by-refactors

Just asking "Explain what this service does?" turns into

[No response for three minutes...]

+729 -522

cowmoo728 2 hours ago | parent | next [-]

It's also so aggressive about taking out debug log statements and in-progress code. I'll ask it to fill in a new function somewhere else, and it will remove all of the half-written code from the piece I'm currently working on.

chankstein38 2 hours ago | parent [-]

I ended up adding a "NEVER REMOVE LOGGING OR DEBUGGING INFO, OPT TO ADD MORE OF IT" to my user instructions, and that has _somewhat_ fixed the problem but introduced a new one where, no matter what I'm talking to it about, it tries to add logging. Even if it's not a code problem. I've had it explain that I could set up an ESP32 with a sensor so that I could get logging from it, then write me firmware for it.

sd9 an hour ago | parent | next [-]

If it's adding too much logging now, have you tried softening the instruction about adding more?

"NEVER REMOVE LOGGING OR DEBUGGING INFO. If unsure, bias towards introducing sensible logging."

Or just

"NEVER REMOVE LOGGING OR DEBUGGING INFO."

bratwurst3000 2 hours ago | parent | prev [-]

"I've had it explain that I could setup an ESP32 with a sensor so that I could get logging from it then write me firmware for it." lol did you try it? This so far from everything ratinonal

BartShoot an hour ago | parent | prev | next [-]

If you had to ask, it obviously needs to refactor the code for clarity so the next person doesn't need to ask.

kylec 2 hours ago | parent | prev | next [-]

"I don't know what did it, but here's what it does now"

quotemstr 2 hours ago | parent | prev [-]

What. You don't have yours ask for edit approval?

mullingitover 10 minutes ago | parent | next [-]

Ask mode exists; I think the models work on the assumption that if you're allowing edits, then of course you must want edits.

embedding-shape an hour ago | parent | prev [-]

Who has time for that? This is how I run codex: `codex --sandbox danger-full-access --dangerously-bypass-approvals-and-sandbox --search exec "$PROMPT"`, having to approve each change would effectively destroy the entire point of using an agent, at least for me.

Edit: obviously inside something sandboxed so it doesn't have access to the rest of my system, but with enough access to be useful.

quotemstr an hour ago | parent [-]

I wouldn't even think of letting an agent work in that mode. Even the best of them produce garbage code unless I keep them on a tight leash. And no, not a skill issue.

What I don't have time to do is debug obvious slop.

kees99 24 minutes ago | parent [-]

I ended up running codex with all the "danger" flags, but in a throw-away VM with copy-on-write access to code folders.

Built-in approval thing sounds like a good idea, but in practice it's unusable. Typical session for me was like:

  About to run "sed -n '1,100p' example.cpp", approve?
  About to run "sed -n '100,200p' example.cpp", approve?
  About to run "sed -n '200,300p' example.cpp", approve?
Could very well be a skill issue, but that was mighty annoying, and with no obvious fix (options "don't ask again for ...." were not helping).
enobrev 3 hours ago | parent | prev | next [-]

I have the same issue. Even when I ask it to do code-reviews and very explicitly tell it not to change files, it will occasionally just start "fixing" things.

mikepurvis 2 hours ago | parent [-]

I find Copilot leans the other way. It'll myopically focus its work on the exact function I point it at, even when it's clear that adding a new helper would be a logical abstraction to share behaviour with the function right beside it.

Overall, I think it's probably better that it stay focused, and allow me to prompt it with "hey, go ahead and refactor these two functions" rather than the other way around. At the same time, really the ideal would be to have it proactively ask, or even pitch the refactor as a colleague would, like "based on what I see of this function, it would make most sense to XYZ, do you think that makes sense? <sure go ahead> <no just keep it a minimal change>"

Or perhaps even better, simply pursue both changes in parallel and present them as A/B options for the human reviewer to select between.

msteffen 17 minutes ago | parent | prev | next [-]

> it's almost impossible to get Gemini to not do "helpful" drive-by-refactors

Not like human programmers. I would never do this and have never struggled with it in the past, no...

embedding-shape 3 minutes ago | parent [-]

A fairer comparison would be against other models, which are typically better at instruction following. You say "don't change anything not explicitly mentioned" or "Don't add any new code comments" and they tend to follow that.

h14h an hour ago | parent | prev | next [-]

Would be really interesting to see an "Eager McBeaver" bench around this concept. When doing real work, a model's ability to stay within the bounds of a given task has almost become more important than its raw capabilities now that every frontier model is so dang good.

Every one of these models is so great at propelling the ship forward, that I increasingly care more and more about which models are the easiest to steer in the direction I actually want to go.

cglan 35 minutes ago | parent [-]

Being TOO steerable is another issue, though.

Codex is steerable to a fault, and will gladly "monkey paw" your requests.

Claude Opus will ignore your instructions and do what it thinks is "right" and just barrel forward.

Both are bad, and they paper over the actual issue, which is that these models don't really have the ability to selectively choose their behavior per situation (i.e. ask for follow-up where needed, ignore users where needed, follow instructions where needed). Behavior is largely global.

kees99 16 minutes ago | parent [-]

In my experience, Claude gradually stops being opinionated as the task at hand becomes more arcane. I frequently add "treat the above as a suggestion, and don't hesitate to push back" to change requests, and it seems to help quite a bit.

neya 2 hours ago | parent | prev | next [-]

> it's almost impossible to get Gemini to not do "helpful" drive-by-refactors

This has not been my experience. I do Elixir primarily, and Gemini has helped build some really cool products and massive refactors. It would even pick up security issues and potential optimizations along the way.

What HAS been a constant issue, though, is that the model will randomly not respond at all and some random error will occur, which is embarrassing for a company like Google, given the infrastructure they own.

embedding-shape 2 hours ago | parent [-]

Out of curiosity, do you have any public projects (with public source code) you've made exclusively with Gemini, so one could take a look? I've tried a bunch of times to use Gemini to at least finish something small but I always end up sufficiently frustrated to abort it as the instruction-following seems so bad.

apitman 38 minutes ago | parent | prev | next [-]

This matches my experience using Gemini CLI to code. It would also frequently get stuck in loops. It was so bad compared to Codex that I feel like I must have been doing something fundamentally wrong.

tyfon 2 hours ago | parent | prev | next [-]

I was using gemini antigravity in opencode a few weeks ago before they started banning everyone for that and I got into the habit of writing "do x, then wait for instructions".

That helped quite a bit, but it would still go off on its own from time to time.

JLCarveth an hour ago | parent | prev | next [-]

Every time I have tried using `gemini-cli` it just thinks endlessly and never actually gives a response.

gavinray 3 hours ago | parent | prev | next [-]

Do you have Personalization Instructions set up for your LLM models?

You can make their responses fairly dry/brief.

embedding-shape 3 hours ago | parent | next [-]

I'm mostly using them via my own harnesses, so I have full control of the system prompts and so on. And no matter what I try, Gemini keeps "helpfully" adding code comments every now and then. With every other model, "- Don't add code comments" tends to be enough, but with Gemini I'm not sure how I could stop the comments from eventually appearing.

WarmWash 2 hours ago | parent [-]

I'm pretty sure it writes comments for itself, not for the user. I always let the models comment as much as they want, because I feel it makes the context more robust, especially when cycling contexts often to keep them fresh.

There is a tradeoff though, as comments do consume context. But I tend to discard instances pretty liberally and start with a fresh window.

embedding-shape 2 hours ago | parent [-]

> I'm pretty sure it writes comments for itself, not for the user

Yeah, that sounds worse than "trying to be helpful". Read the code instead; why add indirection that way, just to understand what other models understand without comments?

metal_am 2 hours ago | parent | prev [-]

I'd love to hear some examples!

gavinray 2 hours ago | parent [-]

I use LLMs outside of work primarily for research on academic topics, so mine is:

  Be a proactive research partner: challenge flawed or unproven ideas with evidence; identify inefficiencies and suggest better alternatives with reasoning; question assumptions to deepen inquiry.
zengineer 2 hours ago | parent | prev [-]

True. Whenever I ask Gemini to help me with a prompt for generating an image of XYZ, it just generates the image.

MrCheeze 2 hours ago | parent | prev | next [-]

Does anyone understand why LLMs have gotten so good at this? Their ability to generate accurate SVG shapes seems to greatly outshine what I would expect, given their mediocre spatial understanding in other contexts.

tedsanders 26 minutes ago | parent | next [-]

A few thoughts:

- One thing to be aware of is that LLMs can be much smarter than their ability to articulate that intelligence in words. For example, GPT-3.5 Turbo was beastly at chess (1800 Elo?) when prompted to complete PGN transcripts, but if you asked it questions in chat, its knowledge was abysmal (see the sketch after this list). LLMs don't generalize as well as humans, and sometimes they can have the ability to do tasks without the ability to articulate things that feel essential to the tasks (like answering whether the bicycle is facing left or right).

- Secondly, what has made AI labs so bullish on future progress over the past few years is that they see how little work it takes to get their results. Often, if an LLM sucks at something, that's because no one worked on it (not always, of course). If you directly train a skill, you can see giant leaps in ability with fairly small effort. Big leaps in SVG creation could be coming from relatively small targeted efforts, where none existed before.
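
To make the PGN point concrete (my illustration, not the original experiment): instead of asking "what's a good move here?" in chat, you hand the model a bare game transcript to continue, e.g.

  1. e4 e5 2. Nf3 Nc6 3. Bb5 a6 4. Ba4 Nf6 5.

and it completes it with a main-line move like "O-O". Same underlying knowledge, very different elicitation.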

simonw 2 hours ago | parent | prev | next [-]

My best guess is that the labs put a lot of work into HTML and CSS spatial stuff because web frontend is such an important application of the models, and those improvements leaked through to SVG as well.

pknerd an hour ago | parent | prev [-]

> Does anyone understand why LLMs have gotten so good at this?

Added more IF/THEN/ELSE conditions.

kridsdale3 an hour ago | parent [-]

More wires and jumpers on the breadboard.

sam_1421 3 hours ago | parent | prev | next [-]

Models are soon going to start benchmaxxing generating SVGs of pelicans on bikes

cbsks 2 hours ago | parent | next [-]

That’s Simon’s goal. “All I’ve ever wanted from life is a genuinely great SVG vector illustration of a pelican riding a bicycle. My dastardly multi-year plan is to trick multiple AI labs into investing vast resources to cheat at my benchmark until I get one.”

https://simonwillison.net/2025/Nov/13/training-for-pelicans-...

travisgriggs an hour ago | parent [-]

So once that's achieved, I wonder how well it deals with unsuspected variations. E.g.

"Give me an illustration of a bicycle riding by a pelican"

"Give me an illustration of a bicycle riding over a pelican"

"Give me an illustration of a bicycle riding under a flying pelican"

So on and so forth. Or will it start to look like the Studio C sketch about Lobster Bisque: https://youtu.be/A2KCGQhVRTE

embedding-shape 3 hours ago | parent | prev | next [-]

Soon? I'd be willing to bet it's been included in the training set for at least 6 months by now. Not so obviously that it always generates perfect pelicans on bikes, but enough for the "minibench" to be less useful today than in the past.

jsheard 3 hours ago | parent | prev | next [-]

Simon's been doing this exact test for nearly 18 months now; if vendors want to benchmaxx it, they've had more than enough time to do so already.

stri8ted 2 hours ago | parent [-]

Exactly. As far as I'm concerned, the benchmark is useless. It's way too easy and rewarding to train on it.

bonoboTP 8 minutes ago | parent | next [-]

It's just an in-joke, he doesn't intend it as a serious benchmark anymore. I think it's funny.

Legend2440 2 hours ago | parent | prev | next [-]

Y'all are way too skeptical; no matter what cool thing AI does, you'll make up an excuse for how they must somehow be cheating.

toraway 26 minutes ago | parent | next [-]

Jeff Dean literally featured it in a tweet announcing the model. Personally, it feels absurd to believe they've put absolutely no thought into optimizing this type of SVG output, given the disproportionate amount of attention devoted to a specific test for over a year.

I wouldn't really even call it "cheating", since it has improved models' ability to generate artistic SVG imagery more broadly, but the days of this being an effective way to evaluate a model's "interdisciplinary" visual reasoning abilities have long since passed, IMO.

It's become yet another example in the ever growing list of benchmaxxed targets whose original purpose was defeated by teaching to the test.

https://x.com/jeffdean/status/2024525132266688757?s=46&t=ZjF...

arcatech an hour ago | parent | prev [-]

Or maybe you’re too trusting of companies who have already proven to not be trustworthy?

pixl97 2 hours ago | parent | prev [-]

I mean if you want to make your own benchmark, simply don't make it public and don't do it often. If your salamander on skis or whatever gets better with time it likely has nothing to do with being benchmaxxed.

ks2048 15 minutes ago | parent | prev [-]

Forget the paperclip maximizer - AGI will turn the whole world into pelicans on bikes.

SoKamil 2 hours ago | parent | prev | next [-]

It seems they trained the model to output good SVGs.

In their blog post[1], the first use case they mention is SVG generation. Thus, it might not be much of an indicator anymore.

[1] https://blog.google/innovation-and-ai/models-and-research/ge...

Arcuru 3 hours ago | parent | prev | next [-]

Did you stop using the more detailed prompt? I think you described it here: https://simonwillison.net/2025/Nov/18/gemini-3/

simonw 2 hours ago | parent [-]

It seems to be having capacity problems right now but I'll run that as soon as I can get it to work.

WarmWash 3 hours ago | parent | prev | next [-]

Less pretty and more practical, it's really good at outputting circuit designs as SVG schematics.

https://www.svgviewer.dev/s/dEdbH8Sw

InitialLastName 2 hours ago | parent | next [-]

I don't know which part of this is the prompt and which was the output, but that's a pretty bad schematic (for both aesthetic and circuit-design reasons).

WarmWash 2 hours ago | parent | next [-]

The prompts were doing the design: reference voltage, hysteresis, output stage, all the maths. The SVG then came from asking the model to take all that plus the current BOM and make an SVG schematic of it. In the past, models would just output totally incoherent messes of lines and shapes.

I did a larger circuit too that this is part of, but it's not really for sharing online.

svnt 2 hours ago | parent | prev [-]

Yes but you concede it is a schematic.

tadfisher 30 minutes ago | parent [-]

How far we have come!

0_____0 2 hours ago | parent | prev [-]

that's pretty amazing for an LLM but as an EE, if my intern did this i would sigh inwardly and pull up some existing schematics for some brief guidance on symbol layout.

steve_adams_86 3 hours ago | parent | prev | next [-]

Ugh, the gears and chain don't mesh and there's no sprocket on the rear hub

But seriously, I can't believe LLMs are able to one-shot a pelican on a bicycle this well. I wouldn't have guessed this was going to emerge as a capability from LLMs 6 years ago. I see why it does now, but... It still amazes me that they're so good at some things.

emp17344 2 hours ago | parent | next [-]

Is this capability “emergent”, or do AI firms specifically target SVG generation in order to improve it? How would we be able to tell?

steve_adams_86 an hour ago | parent | next [-]

I asked myself the same thing as I typed that comment, and I'm not sure what the answer is. I don't think models are specifically trained on this (though of course they're trained on how to generate SVGs in general), but I'm prepared to be wrong.

I have a feeling the most 'emergent' aspect was that LLMs have generally been able to produce coherent SVG for quite a while, likely without specific training at first. Since then I suspect there has been more tailored training because improvements have been so dramatic. Of course it makes sense that text-based images using very distinct structure and properties could be manipulated reasonably well by a text-based language model, but it's still fascinating to me just how well it can work.

Perhaps what's most incredible about it is how versatile human language is, even when it lacks so many dimensions as bits on a machine. Yet it's still cool that we can resurrect those bits at rest and transmogrify them back into coherent projections of photons from a screen.

I don't think LLMs are AGI or about to completely flip the world upside down or whatever, but it seems undeniably magical when you break it down.

simonw 2 hours ago | parent | prev [-]

Google specifically boast about their SVG performance in the announcement post: https://blog.google/innovation-and-ai/models-and-research/ge...

You can try any combination of animal on vehicle to confirm that they likely didn't target pelicans directly though.

0_____0 2 hours ago | parent | prev | next [-]

next time you host a party, have people try to draw a bicycle on your whiteboard (you have a whiteboard in your house right? you should, anyway...)

human adults are generally quite bad at drawing them, unless they spend a lot of time actually thinking about bicycles as objects

542354234235 2 hours ago | parent | next [-]

They are, and it is very funny.

https://www.behance.net/gallery/35437979/Velocipedia

iammattmurphy an hour ago | parent [-]

Fantastic post, thanks for that.

emp17344 2 hours ago | parent | prev [-]

What’s your point? Yes, humans fail sometimes, as do AI models. Are you trying to imply that, in light of this, AI is now as capable as human beings? If so, that conclusion doesn’t follow logically.

0_____0 2 hours ago | parent [-]

it's not a loaded point, i just think it's funny that humans typically cannot one-shot this. and it will make your friends laugh

HPsquared 2 hours ago | parent | prev [-]

And the left leg is straight while the right leg is bent.

EDIT: And the chain should pass behind the seat stay.

tarr11 an hour ago | parent | prev | next [-]

What do you think this particular prompt is evaluating for?

The more popular these particular evals are, the more likely the model will be trained for them.

Gander5739 an hour ago | parent [-]

See https://simonwillison.net/2025/Nov/13/training-for-pelicans-...

AmazingTurtle 2 hours ago | parent | prev | next [-]

At this point, the pelican benchmark has become so widely used that there must be high-quality pelicans in the dataset, I presume. What about generating an okapi on a bicycle instead?

ascorbic 9 minutes ago | parent | next [-]

Loads of examples here https://x.com/jeffdean/status/2024525132266688757

tromp 2 hours ago | parent | prev [-]

Or, even more challenging, an okapi on a recumbent?!

bredren 3 hours ago | parent | prev | next [-]

What is that, a snack in the basket?

sigmar 3 hours ago | parent | next [-]

"integrating a bicycle basket, complete with a fish for the pelican... also ensuring the basket is on top of the bike, and that the fish is correctly positioned with its head up... basket is orange, with a fish inside for fun."

how thoughtful of the ai to include a snack. truly a "thanks for all the fish"

defen 2 hours ago | parent [-]

A pelican already has an integrated snack-holder, though. It wouldn't need to put it in the basket.

WarmWash 3 hours ago | parent | prev [-]

A fish for the road

infthi 2 hours ago | parent | prev | next [-]

Wonder when we'll get something other than a side view.

mikepurvis 2 hours ago | parent [-]

That would be especially challenging for vector output. I tried just now on ChatGPT 5.2 to jump straight to an image, with this prompt:

"make me a cartoon image of a pelican riding a bicycle, but make it from a front 3/4 view, that is riding toward the viewer."

The result was basically a head-on view, but I expect if you then put that back in and said, "take this image and vectorize it as an SVG" you'd have a much better time than trying to one-shot the SVG directly from a description.

... but of course, if that's so, then what's preventing the model from being smart enough to identify this workflow and follow it on its own to get the task completed?

calny 3 hours ago | parent | prev | next [-]

Great pelican but what’s up with that fish in the basket?

coldtea 3 hours ago | parent | next [-]

It's a pelican. What do you expect a pelican to have in his bike's basket?

It's a pretty funny and coherent touch!

embedding-shape 2 hours ago | parent [-]

> What do you expect a pelican to have in his bike's basket?

Probably stuff it cannot fit in its gullet, or doesn't want there (think trash). I wouldn't expect a pelican to stash fish there, that's for sure.

kridsdale3 an hour ago | parent [-]

You never travel with a snack fish for later on? He's going to be burning calories.

nicr_22 an hour ago | parent | prev | next [-]

Yeah, why only _one_ fish?

It's obvious that the pelican is riding long distance; no way a single fish is sufficiently energy dense for more than a few miles.

Can't the model do basic math???

gavinray 3 hours ago | parent | prev [-]

Where else are cycling pelicans meant to keep their fish?

calny 2 hours ago | parent [-]

I get it, I just meant the fish is poorly done, when I'd have guessed it would be a relatively simple part. Maybe the black-dot eye is misplaced, idk.

DonHopkins an hour ago | parent | prev | next [-]

How about STL files for 3d printing pelicans!

baq 33 minutes ago | parent [-]

Harder: the bike must work

Hardest: the pelican must work

mohsen1 3 hours ago | parent | prev | next [-]

Is there something in your prompt about hats? Why is the pelican always wearing a hat recently?!

bigfishrunning 3 hours ago | parent [-]

At this point, I think maybe they're training on all of the previous pelicans, and one of them decided to put a hat on it?

Disclaimer: This is an unsubstantiated claim that i made up

xnx 3 hours ago | parent | prev | next [-]

Not even animated? This is 2026.

readitalready 3 hours ago | parent [-]

Jeff Dean just posted an animated version: https://x.com/JeffDean/status/2024525132266688757

benbreen 2 hours ago | parent | next [-]

One underrated thing about the recent frontier models, IMO, is that they are obviating the need for image gen as a standalone thing. Opus 4.6 (and apparently 3.1 Pro as well) doesn't have the ability to generate images but it is so good at making SVG that it basically doesn't matter at this point. And the benefit of SVG is that it can be animated and interactive.

I find this fascinating because it literally just happened in the past few months. Up until ~summer of 2025, the SVG these models made was consistently buggy and crude. By December of 2026, I was able to get results like this from Opus 4.5 (Henry James: the RPG, made almost entirely with SVG): https://the-ambassadors.vercel.app

And now it looks like Gemini 3.1 Pro has vaulted past it.

embedding-shape 2 hours ago | parent | next [-]

> doesn't have the ability to generate images but it is so good at making SVG that it basically doesn't matter at this point

Yeah, since the invention of vector images, suddenly no one cares about raster images anymore.

Obviously not true, but that's how your comment reads right now. A raster "image" is very different from a vector "image", and one doesn't automagically replace the other.

buu700 2 hours ago | parent | next [-]

This reminds me of the time I printed a poster with a blown up version of some image for a high school history project. A classmate asked how I did it, so I started going on about how I used software to vectorize the image. Turned out he didn't care about any of that and just wanted the name of the print shop.

Der_Einzige an hour ago | parent | prev [-]

You have no idea how badly I want to be teleported to the alternative world where VECTOR COMPUTING was the dominant form of computers.

We had high-framerate (yes, it was variable), bright, beautiful displays in the 1980s with the Vectrex.

cachius 2 hours ago | parent | prev [-]

2025 that is

bigfishrunning 3 hours ago | parent | prev [-]

That Ostrich Tho

cachius 2 hours ago | parent [-]

That Tires Tho

benatkin 2 hours ago | parent | prev | next [-]

I used the AI studio link and tried running it with the temperature set to 1.75: https://jsbin.com/locodaqovu/edit?html,output
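
If you want to reproduce that outside AI Studio, here's a minimal sketch using the google-genai Python SDK (the model id and prompt are my assumptions):

  # pip install google-genai
  from google import genai
  from google.genai import types

  client = genai.Client()  # reads GEMINI_API_KEY from the environment
  response = client.models.generate_content(
      model="gemini-3.1-pro-preview",  # assumed model id
      contents="Generate an SVG of a pelican riding a bicycle",
      config=types.GenerateContentConfig(temperature=1.75),  # same temperature as above
  )
  print(response.text)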

saberience 3 hours ago | parent | prev [-]

I hope we keep beating this dead horse some more, I'm still not tired of it.

jeffybefffy519 a minute ago | parent | prev | next [-]

Someone needs to make an actually good benchmark for LLMs that matches real-world expectations; there's more to benchmarks than accuracy against a dataset.

xrd 2 hours ago | parent | prev | next [-]

These models are so powerful.

It's totally possible to build entire software products in the fraction of the time it took before.

But, reading the comments here, the behaviors from one version to another point version (not major version mind you) seem very divergent.

It feels like we are now able to manage incredibly smart engineers for a month at the price of a good sushi dinner.

But it also feels like you have to be diligent about adopting new models (even same family and just point version updates) because they operate totally differently regardless of your prompt and agent files.

Imagine managing a team of software developers where every month it was an entirely new team with radically different personalities, career experiences and guiding principles. It would be chaos.

I suspect that older models will be deprecated quickly and unexpectedly, or, worse yet, will be swapped out with subtly different behavioral characteristics without notice. It'll be quicksand.

simonw an hour ago | parent | next [-]

I had an interesting experience recently where I ran Opus 4.6 against a problem that o4-mini had previously convinced me wasn't tractable... and Opus 4.6 found me a great solution. https://github.com/simonw/sqlite-chronicle/issues/20

This inspired me to point the latest models at a bunch of my older projects, resulting in a flurry of fixes and unblocks.

jauntywundrkind 11 minutes ago | parent | next [-]

From the project description here for your sqlite-chronicle project:

> Use triggers to track when rows in a SQLite table were updated or deleted

Just a note in case it's interesting to anyone: the SQLite-compatible Turso database has CDC, a changes table! https://turso.tech/blog/introducing-change-data-capture-in-t...

petesergeant an hour ago | parent | prev [-]

I continue to get great value out of having claude and codex bound together in a loop: https://github.com/pjlsergeant/moarcode

apitman 36 minutes ago | parent [-]

They are one, the ring and the dark lord

jama211 an hour ago | parent | prev | next [-]

Yeah, I keep maintaining a specific app I built with GPT 5.1 Codex Max using that exact model, because it continues to work for the requests I send it, and attempts with other models, even 5.2 or 5.3 Codex, seemed to have odd results. If I were superstitious I would say it's almost like the model that wrote the code likes to work on the code better. Perhaps there's something about the structure it created, though, that it finds easier to understand…

seizethecheese an hour ago | parent | prev | next [-]

> It feels like we are now able to manage incredibly smart engineers for a month at the price of a good sushi dinner.

In my experience it’s more like idiot savant engineers. Still remarkable.

WarmWash an hour ago | parent | prev | next [-]

I have long suspected that a large part of people's distaste for given models comes from their comfort with their daily driver.

Which I guess feeds back to prompting still being critical for getting the most out of a model (outside of subjective stylistic traits the models have in their outputs).

worldsavior an hour ago | parent | prev [-]

Sushi dinner? What are you building with AI, a calculator?

spankalee 2 hours ago | parent | prev | next [-]

I hope this works better than 3.0 Pro

I'm a former Googler and know some people near the team, so I mildly root for them to at least do well, but Gemini is consistently the most frustrating model I've used for development.

It's stunningly good at reasoning, design, and generating the raw code, but it just falls over a lot when actually trying to get things done, especially compared to Claude Opus.

Within VS Code Copilot, Claude will have a good mix of thinking streams and responses to the user. Gemini will almost completely use thinking tokens, and then just do something but not tell you what it did. If you don't look at the thinking tokens you can't tell what happened, but the thinking token stream is crap. It's all "I'm now completely immersed in the problem...". Gemini also frequently gets twisted around, stuck in loops, and unable to make forward progress. It's bad at using tools and tries to edit files in weird ways instead of using the provided text editing tools. In Copilot, it won't stop and ask clarifying questions, though in Gemini CLI it will.

So I've tried to adopt a plan-in-Gemini, execute-in-Claude approach, but while I'm doing that I might as well just stay in Claude. The experience is just so much better.

For as much as I hear that Google's pulling ahead, Anthropic seems to be the one pulling ahead to me, from a practical POV. I hope Googlers on Gemini are actually trying these things out in real projects, not just one-shotting a game and calling it a win.

karmasimida an hour ago | parent | next [-]

Gemini just doesn’t do even mildly well in agentic stuff and I don’t know why.

OpenAI has mostly caught up with Claude in agentic stuff, but Google needs to be there and be there quickly

onlyrealcuzzo 30 minutes ago | parent | next [-]

Because Search is not agentic.

Most of Gemini's users are Search converts doing extended-Search-like behaviors.

Agentic workflows are a VERY small percentage of all LLM usage at the moment. As that market becomes more important, Google will pour more resources into it.

Macha 15 minutes ago | parent [-]

> Agentic workflows are a VERY small percentage of all LLM usage at the moment. As that market becomes more important, Google will pour more resources into it.

I do wonder what percentage of revenue they are. I expect it's very outsized relative to usage (e.g. approximately nobody who is receiving them is paying for those summaries at the top of search results)

alphabetting 41 minutes ago | parent | prev | next [-]

The agentic benchmarks for 3.1 indicate Gemini has caught up; the gains from 3.0 to 3.1 are big.

For example the APEX-Agents benchmark for long time horizon investment banking, consulting and legal work:

1. Gemini 3.1 Pro - 33.2%
2. Opus 4.6 - 29.8%
3. GPT 5.2 Codex - 27.6%
4. Gemini Flash 3.0 - 24.0%
5. GPT 5.2 - 23.0%
6. Gemini 3.0 Pro - 18.0%

ionwake 41 minutes ago | parent | prev [-]

Can you explain what you mean by it being bad at agentic stuff?

karmasimida 28 minutes ago | parent [-]

Accomplishing the task I give it, without fighting me over it.

I think this is a classic precision/recall issue: the model needs to stay on task, but also infer what the user might want but didn't explicitly state. Gemini seems particularly bad at the recall side, where it goes out of bounds.

Oras an hour ago | parent | prev | next [-]

Glad I'm not the only one who experienced this. I have a paid Antigravity subscription, and most of the time I use Claude models due to the exact issues you have pointed out.

s3p an hour ago | parent | prev | next [-]

Don't get me started on the thinking tokens. Since 2.5P the thinking has been insane. "I'm diving in to the problem", "I'm fully immersed" or "I'm meticulously crafting the answer"

foz 27 minutes ago | parent | next [-]

This is part of the reason I don't like to use it. I feel it's hiding things from me, compared to other models that very clearly share what they are thinking.

dist-epoch 37 minutes ago | parent | prev [-]

That's not the real thinking, it's a super summarized view of it.

knollimar 2 hours ago | parent | prev | next [-]

Is the thinking token stream obfuscated?

I'm fully immersed

orbital-decay 2 hours ago | parent [-]

It's just a summary generated by a really tiny model. I guess it's also an ad-hoc way to obfuscate it, yes. In particular, they're hiding prompt injections they sometimes dynamically add. The actual CoT is hidden and entirely different from that summary. It's not very useful for you as a user, though (neither is the summary).

ukuina an hour ago | parent [-]

Agree the raw thought-stream is not useful.

It's likely filled with "Aha!" and "But wait!" statements.

slopinthebag 2 hours ago | parent | prev | next [-]

Hmm, interesting..

My workflow is to basically use it to explain new concepts, generate code snippets inline or fill out function bodies, etc. Not really generating code autonomously in a loop. Do you think it would excel at this?

jbellis 2 hours ago | parent | prev [-]

Yeah, G3P is as smart as or smarter than the other flagships, but it's just not reliable enough; it will go into "thinking loops" and burn 10s of 1000s of tokens repeating itself.

https://blog.brokk.ai/gemini-3-pro-preview-not-quite-baked/

hopefully 3.1 is better.

nicce 42 minutes ago | parent [-]

> it will go into "thinking loops" and burn 10s of 1000s of tokens repeating itself.

Maybe it is just a genius business strategy.

minimaxir 3 hours ago | parent | prev | next [-]

Price is unchanged from Gemini 3 Pro: $2/M input, $12/M output. https://ai.google.dev/gemini-api/docs/pricing

Knowledge cutoff is unchanged at Jan 2025. Gemini 3.1 Pro supports "medium" thinking where Gemini 3 did not: https://ai.google.dev/gemini-api/docs/gemini-3

Compare to Opus 4.6's $5/M input, $25/M output. If Gemini 3.1 Pro does indeed have similar performance, the price difference is notable.
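
Back-of-the-envelope, for a hypothetical task with 100K input and 10K output tokens (my numbers, purely to illustrate the gap):

  def cost(in_tok, out_tok, in_price, out_price):
      # prices are per million tokens
      return in_tok / 1e6 * in_price + out_tok / 1e6 * out_price

  print(cost(100_000, 10_000, 2, 12))  # Gemini 3.1 Pro: $0.32
  print(cost(100_000, 10_000, 5, 25))  # Opus 4.6: $0.75

Roughly 2.3x cheaper at those list prices, if the quality really is comparable.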

rancar2 2 hours ago | parent | next [-]

If we don't see a huge gain in long-time-horizon thinking reflected in Vending-Bench 2, I'm not going to switch away from CC. Until Google can beat Anthropic on that front, Claude Code paired with the top long-horizon models will continue to pull away with full-stack optimizations at every layer.

jbellis 2 hours ago | parent | prev | next [-]

still no minimal reasoning in G3.1P :(

(this is why Opus 4.6 is worth the price -- turning off thinking makes it 3x-5x faster but it loses only a small amount of intelligence. nobody else has figured that out yet)

plaidfuji 2 hours ago | parent | prev [-]

Sounds like the update is mostly system prompt + changes to orchestration / tool use around the core model, if the knowledge cutoff is unchanged

brokencode an hour ago | parent | next [-]

This keeps getting repeated for all kinds of model releases, but isn’t necessarily true. It’s possible to make all kinds of changes without updating the pretraining data set. You can’t judge a model’s newness based on what it knows about.

sigmar 2 hours ago | parent | prev [-]

The knowledge cutoff staying the same likely means they didn't do a new pre-train. We already knew there were plans from DeepMind to integrate new RL changes in the post-training of the weights. https://x.com/ankesh_anand/status/2002017859443233017

1024core 2 hours ago | parent | prev | next [-]

It got the car wash question perfectly:

You are definitely going to have to drive it there—unless you want to put it in neutral and push!

While 200 feet is a very short and easy walk, if you walk over there without your car, you won't have anything to wash once you arrive. The car needs to make the trip with you so it can get the soap and water.

Since it's basically right next door, it'll be the shortest drive of your life. Start it up, roll on over, and get it sparkling clean.

Would you like me to check the local weather forecast to make sure it's not going to rain right after you wash it?

1024core an hour ago | parent | next [-]

Some people are suggesting that this might actually be in the training set. Since I can't rule that out, I tried a different version of the question, with an elephant instead of a car:

> It's a hot and dusty day in Arizona and I need to wash my elephant. There's a creek 300 feet away. Should I ride my elephant there or should I just walk there by myself?

Gemini said:

That sounds like quite the dusty predicament! Given the options, you should absolutely ride (or lead) your elephant to the creek.

Here is the straightforward reality of the situation:

    The Logistical Nightmare of Walking Alone: If you walk to the creek by yourself, your elephant stays at your starting point. Elephants require a massive amount of water for a bath. Unless you are planning an extreme workout carrying hundreds of buckets of water 300 feet back through the Arizona heat, you're going to have a very bad time—and the elephant will stay dusty.

    The Perks of Going Together: 300 feet is a very short, easy stroll for an elephant. Plus, elephants generally love water and mud! Letting your elephant get right into the creek will make the washing process much easier for you and much more enjoyable for them. They can even use their trunk to help hose themselves down.
Grab some elephant-safe soap and head down to the water together!

Would you like some tips on the best methods and tools for scrubbing down an elephant?

kovek an hour ago | parent [-]

I think that semantically this question is too similar to the car wash one. Changing subjects from car to elephant and car wash to creek does not change the fact that they are subjects. The embeddings will be similar in that dimension.

1024core an hour ago | parent [-]

I understand. But isn't it a sign of "smarts" that one can generalize from analogous tasks?

willis936 an hour ago | parent [-]

Sure it is, but it's a different set of smarts than the kind of gotcha logic puzzle trying to be tested with the car wash question.

embedding-shape an hour ago | parent | prev | next [-]

GPT-OSS-120b, a (downloadable) model released more than half a year ago, also gets that right; I'm not sure this is such a great success.

> Would you like me to check the local weather forecast to make sure it's not going to rain right after you wash it?

Regardless of what I do, on the days I decide to wash my car it ALWAYS rains the day after, even if I look at the weather report and it says sunny. Cute that Gemini thinks it can predict it better than the universe's will.

steve1977 2 hours ago | parent | prev | next [-]

The question of course is, did it get the car wash question right because it is "the car wash question" or because it could actually infer why the car needed to be there?

embedding-shape an hour ago | parent | next [-]

Wasn't that "twoot" (or whatever Mastodon calls them) made just a week ago? Unlikely to have been in the training dataset of a model becoming available for public use today, unless Google made some serious advancements on the training front.

jama211 an hour ago | parent | prev [-]

Shouldn’t be too hard to come up with a new unique reasoning question

buttered_toast an hour ago | parent | prev | next [-]

I think we need to reevaluate what purpose these sorts of questions serve and why they're important in regards to judging intelligence.

The model getting it correct or not in any given instance isn't the point; the point is that if the model ever gets it wrong, we can still assume it has some semblance of stochasticity in its output, given that a model is essentially static once it is released.

Additionally, they don't learn post-training (except in context, which I think counts as learning to some degree, albeit transient). If, hypothetically, it answers incorrectly 1 in 50 attempts, and I explain in that 1 failed attempt why it is wrong, there will still be a 1-in-50 chance it gets it wrong in a new instance.

This differs from humans. Say, for example, I give an average person the "what do you put in a toaster" trick and they fall for it; I can be pretty confident that if I try that trick again 10 years later, they will probably not fall for it. You can't really say that for a given model.

energy123 44 minutes ago | parent [-]

They're important, but not as N=1. It's like cherry-picking a single question from SimpleQA and going "aha! It got it right!" Meanwhile, it scores 8% lower than some other model when evaluated on all questions.

buttered_toast 36 minutes ago | parent [-]

Makes me wonder what people would consider better: a model that gets 92% of questions right 100% of the time, or a model that gets 95% of the questions right 90% of the time and 88% right the other 10%?

I think that's why benchmarking is so hard for me to fully get behind, even if we do it over, say, 20 attempts and average it. For a given model, those 20 attempts could have had 5 incredible outcomes and 15 mediocre ones, whereas another model could have 20 consistently decent attempts, and the average score would be generally the same.

We at least see variance in public benchmarks, but in the internal examples that's almost never the case.

upmind an hour ago | parent | prev | next [-]

The answer here is why I dislike Gemini, though it gets the correct answer, it's far too verbose.

patrickmcnamara an hour ago | parent | prev | next [-]

Truly we are entering the era of AGI.

suddenlybananas 2 hours ago | parent | prev [-]

They probably had time to toss that example in the training soup.

AlphaAndOmega0 2 hours ago | parent [-]

Previous models from competitors usually got that correct, and the reasoning versions almost always did.

This kind of reflexive criticism isn't helpful, it's closer to a fully generalized counter-argument against LLM progress, whereas it's obvious to anyone that models today can do things they couldn't do six months ago, let alone 2 years back.

suddenlybananas an hour ago | parent [-]

I'm not denying any progress, I'm saying that reasoning failures that are simple which have gone viral are exactly the kind of thing that they will toss in the training data. Why wouldn't they? There's real reputational risks in not fixing it and no costs in fixing it.

mijoharas 2 hours ago | parent | prev | next [-]

Gemini 3 is still in preview (limited rate limits) and 2.5 is deprecated (still live but won't be for long).[0]

Are Google planning to put any of their models into production any time soon?

Also somewhat funny that some models are deprecated without a suggested alternative (gemini-2.5-flash-lite). Do they suggest people switch to Claude?

[0] https://ai.google.dev/gemini-api/docs/deprecations

andrewmutz 2 hours ago | parent | next [-]

I agree completely. I don't know how anyone can be building on these models when all of them are either deprecated or not actually released yet. As someone who has production systems running on the deprecated models, this situation really causes me grief.

NitpickLawyer 21 minutes ago | parent | prev | next [-]

You are reading your link wrong. They are deprecating the 2.5-preview models. 2.5 (including Lite) is up until at least Sept/Oct '26.

mijoharas 17 minutes ago | parent [-]

gemini-2.5-pro has a listed shutdown date of "June 17, 2026" in the linked table.

(Another commenter pointed out that this is the earliest shutdown date and it won't necessarily be shut down on that date).

Where are you getting sept/Oct from? I see gemini-2.5-flash-image in October, but everything else looks like June/July to me?

jakejohnson 2 hours ago | parent | prev | next [-]

I haven't seen any deprecation notices for 2.5 yet, just for 2. I'd expect (and hope) the deprecation timeline for 2.5 is longer since 3.0 is still in preview. Maybe they just default to 1 year here?

> Note: The shutdown dates listed in the table indicate the /earliest/ possible dates on which a model might be retired. We will communicate the exact shutdown date to users with advance notice to ensure a smooth transition to a replacement model.

mijoharas 23 minutes ago | parent [-]

I think you're right. It was 2 that I saw explicitly deprecated; then I searched again and saw 2.5 having a shutdown date.

This article[0] talks about 2 being deprecated.

It's still frustrating that they don't have proper production endpoints for 3.0 yet.

[0] https://ai.google.dev/gemini-api/docs/changelog

vidarh 2 hours ago | parent | prev [-]

This feels very Google

nickandbro 3 hours ago | parent | prev | next [-]

Does well on SVGs outside of the "pelican riding a bicycle" test. Like this prompt:

"create a svg of a unicorn playing xbox"

https://www.svgviewer.dev/s/NeKACuHj

Still some tweaks needed to the final result, but I am guessing, with the ARC-AGI benchmark jumping so much, the model's visual abilities are allowing it to do this well.

simonw 3 hours ago | parent | next [-]

Interesting how it went a bit more 3D with the style of that one compared to the pelican I got.

mclau153 7 minutes ago | parent | prev | next [-]

can we move on from SVG to 3D models at some point?

andy12_ 3 hours ago | parent | prev | next [-]

I'm thinking now that as models get better and better at generating SVGs, there could be a point where we can use them to just make arbitrary UIs and interactive media with raw SVGs in realtime (like flash games).

rafark 5 minutes ago | parent | next [-]

> there could be a point where we can use them to just make arbitrary UIs and interactive media with raw SVGs

So render UI elements using XML-like code in a web browser? You're not going to believe me when I tell you this…

nickandbro 3 hours ago | parent | prev [-]

Or quite literally a game where SVG assets are generated on the fly using this model

kridsdale3 an hour ago | parent [-]

That's one dimension before another long-term milestone: realtime generation of 3D mesh content during gameplay.

Which is the "left brain" approach vs the "right brain" approach of coming at dynamic videogames from the diffusion model direction which the Gemini Genie thing seems to be about.

roryirvine 2 hours ago | parent | prev [-]

On the other hand, creation of other vector image formats (e.g. "create a postscript file showing a walrus brushing its teeth") hasn't improved nearly so much.

Perhaps they're deliberately optimising for SVG generation.

takoid 3 hours ago | parent | prev | next [-]

Shared this in the other Gemini Pro 3.1 thread (https://news.ycombinator.com/item?id=47074735) but wanted to share it here as well.

I just tested the "generate an SVG of a pelican riding a bicycle" prompt and this is what I got: https://codepen.io/takoid/pen/wBWLOKj

The model thought for over 5 minutes to produce this. It's not quite photorealistic (some parts are definitely "off"), but this is definitely a significant leap in complexity.

onionisafruit 2 hours ago | parent [-]

Good to see it wearing a helmet. Their safety team must be on their game.

BrokenCogs 2 hours ago | parent [-]

Yes, but why would a pelican need a helmet? If it falls over it can just fly away... Common sense 1, Gemini 0.

WarmWash 2 hours ago | parent | prev | next [-]

3.1 Pro is the first model to correctly count the number of legs on my "five legged dog" test image. 3.0 flash was the previous best, getting it after a few prompts of poking. 3.1 got it on the first prompt though, with the prompt being "How many legs does the dog have? Count Carefully".

However, it didn't get it on the first try with the original prompt ("How many legs does the dog have?"). It initially said 4; then a follow-up prompt got it to hesitantly say 5, suggesting one limb must be obscured or hidden.

So maybe I'll give it a 90%?

This is without tools as well.

merlindru 2 hours ago | parent [-]

your question may have become part of the training data with how much coverage there was around it. perhaps you should devise a new test :P

devsda 25 minutes ago | parent | next [-]

I suggest asking it to identify/count the number of fire hydrants, crosswalks, bridges, bicycles, cars, buses and traffic signals etc.

Pit Google against Google :D

iamdelirium an hour ago | parent | prev | next [-]

3.1 Pro has the same Jan 2025 knowledge cutoff as the other 3 series models. So if 3.1 has it in its training data, the other ones would have as well.

gallerdude 2 hours ago | parent | prev | next [-]

My job may have become part of the training data with how much coverage there is around it. Perhaps another career would be a better test of LLM capabilities.

suddenlybananas 2 hours ago | parent [-]

Have you ever heard of a black swan?

WarmWash 2 hours ago | parent | prev | next [-]

Honestly at this point I have fed this image in so many times on so many models, that it also functions as a test for "Are they training on my image specifically" (they are generally, for sure, but that's along with everything else in the ocean of info people dump in).

I genuinely don't think they are. GPT-5.2 still stands by 4 legs, and OAI has been getting this image consistently for over a year. And 3.1 still fumbled with the harder prompt "How many legs does the dog have?". I needed to add the "count carefully" part to tip it off that something was amiss.

Since it did well, I'll make some other "extremely far out of the norm" images to see how it fares. A spider with 10 legs or a fish with two side fins.

wat10000 2 hours ago | parent | prev [-]

Easy fix, make a new test image with six legs, and watch all the LLMs say it has five.

Robdel12 3 hours ago | parent | prev | next [-]

I really want to use google’s models but they have the classic Google product problem that we all like to complain about.

I am legit scared to log in and use Gemini CLI, because the last time I thought I was using my "free" account allowance via Google Workspace, I ended up spending $10 before realizing it was API billing, and the UI was so hard to figure out that I gave up. I'm sure I can spend 20-40 more mins to sort this out, but ugh, I don't want to.

With alllll that said.. is Gemini 3.1 more agentic now? That’s usually where it failed. Very smart and capable models, but hard to apply them? Just me?

alpineman 3 hours ago | parent | next [-]

100% agreed. I wish someone would make a test for how reliably the LLMs follow tool use instructions etc. The pelicans are nice but not useful for me to judge how well a model will slot into a production stack.

embedding-shape 2 hours ago | parent [-]

When I first got started with LLMs, I read and analyzed benchmarks, looked at what example prompts people used, and so on. But many times a new model does best on the benchmark, and you think it'll be better, but then in real work it completely drops the ball. Since then I've stopped even reading benchmarks; I don't care an iota about them, as they always seem more misdirected than helpful.

Today I have my own private benchmarks, with tests I run myself and private test cases I refuse to share publicly. These have been built up over the last 1-1.5 years: whenever I find something my current model struggles with, it becomes a new test case to include in the benchmark.

Nowadays it's as easy as `just bench $provider $model` and it runs my benchmarks against it, and I get a score that actually reflects what I use the models for, and it feels like it more or less matches with actually using the models. I recommend people who use LLMs for serious work to try the same approach, and stop relying on public benchmarks that (seemingly) are all gamed by now.
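
For anyone wanting to try the same approach, a minimal sketch of such a harness in Python (the endpoint shape, prompt, and scoring are all my own assumptions; your real test cases should stay private):

  import json, urllib.request

  def ask(base_url, api_key, model, prompt):
      # Assumes an OpenAI-compatible /chat/completions endpoint
      req = urllib.request.Request(
          f"{base_url}/chat/completions",
          data=json.dumps({"model": model,
                           "messages": [{"role": "user", "content": prompt}]}).encode(),
          headers={"Authorization": f"Bearer {api_key}",
                   "Content-Type": "application/json"},
      )
      with urllib.request.urlopen(req) as resp:
          return json.load(resp)["choices"][0]["message"]["content"]

  # Each private case is a prompt plus a pass/fail check on the answer
  CASES = [
      ("Reply with exactly the word OK and nothing else.",
       lambda answer: answer.strip() == "OK"),
  ]

  def bench(base_url, api_key, model):
      passed = sum(check(ask(base_url, api_key, model, p)) for p, check in CASES)
      return passed / len(CASES)  # score in [0, 1]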

cdelsolar 2 hours ago | parent [-]

share

embedding-shape 2 hours ago | parent [-]

The harness? Trivial to build yourself, ask your LLM for help, it's ~1000 LOC you could hack together in 10-15 minutes.

As for the test cases themselves, that would obviously defeat the purpose, so no :)

abiraja an hour ago | parent | prev | next [-]

I've been using it lately with OpenCode and it's working pretty well (except for API reliability issues).

phamilton 2 hours ago | parent | prev | next [-]

> For those building with a mix of bash and custom tools, Gemini 3.1 Pro Preview comes with a separate endpoint available via the API called gemini-3.1-pro-preview-customtools. This endpoint is better at prioritizing your custom tools (for example view_file or search_code).

It sounds like there was at least a deliberate attempt to improve it.
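
For reference, wiring a custom tool to that endpoint might look roughly like this with the google-genai Python SDK (view_file is the example name from the quoted docs; the schema and prompt are my assumptions):

  from google import genai
  from google.genai import types

  client = genai.Client()
  view_file = types.FunctionDeclaration(
      name="view_file",  # example tool name from the announcement
      description="Read a file from the workspace",
      parameters=types.Schema(
          type=types.Type.OBJECT,
          properties={"path": types.Schema(type=types.Type.STRING)},
          required=["path"],
      ),
  )
  response = client.models.generate_content(
      model="gemini-3.1-pro-preview-customtools",  # the custom-tools endpoint
      contents="Show me what's in src/main.py",
      config=types.GenerateContentConfig(
          tools=[types.Tool(function_declarations=[view_file])],
      ),
  )
  # The model should respond with a function call to view_file
  print(response.function_calls)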

pdntspa 3 hours ago | parent | prev | next [-]

You can delete the billing from a given API key

surgical_fire 2 hours ago | parent | prev | next [-]

May be very silly of me, but I avoid using Gemini on my personal Google account. I use it at work, because my employer provides it.

I am scared some automated system may just decide I am doing something bad and terminate my account. I have been moving important things to Proton, but there is some stuff I couldn't change, and losing it would cause me a lot of annoyance. It's not trivial to set up an alternative account just for Gemini, because my Google account is basically on every device I use.

I mostly use LLMs as coding assistant, learning assistant, and general queries (e.g.: It helped me set up a server for self hosting), so nothing weird.

CamperBob2 an hour ago | parent [-]

For what it's worth, there was an (unfortunately unsuccessful) HN submission from a guy who got his Gemini account banned, apparently without losing his whole Google account: https://news.ycombinator.com/item?id=47007906

horsawlarway 3 hours ago | parent | prev | next [-]

So much this.

It's absolutely amazing how hostile Google is to releasing billing options that are reasonable, controllable, or even fucking understandable.

I want to do relatively simple things like:

1. Buy shit from you

2. For a controllable amount (ex - let me pick a limit on costs)

3. Without spending literally HOURS trying to understand 17 different fucking products, all overlapping, with myriad project configs, api keys that should work, then don't actually work, even though the billing links to the same damn api key page, and says it should work.

And frankly - you can't do any of it. No controls (at best delayed alerts). No clear access. No real product differentiation pages. No guides or onboarding pages to simplify the matter. No support. SHIT LOADS of completely incorrect and outdated docs, that link to dead pages, or say incorrect things.

So I won't buy shit from them. Period.

sciencejerk 2 hours ago | parent [-]

You think AWS is better?

3form 2 hours ago | parent | next [-]

This is exactly why I've never used any of these platforms for my personal projects.

pdimitar an hour ago | parent | prev [-]

Who is comparing to AWS and why? They can both be terrible at the same time, you know.

Stevvo 2 hours ago | parent | prev | next [-]

You could always use it through Copilot. The credits based billing is pretty simple without surprise charges.

himata4113 3 hours ago | parent | prev [-]

use openrouter instead

Robdel12 44 minutes ago | parent [-]

This is actually an excellent idea, I’ll give this a shot tonight!

davidguetta 2 hours ago | parent | prev | next [-]

Implementation and Sustainability Hardware: Gemini 3 Pro was trained using Google’s Tensor Processing Units (TPUs). TPUs are specifically designed to handle the massive computations involved in training LLMs and can speed up training considerably compared to CPUs. TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training, which can lead to better model quality. TPU Pods (large clusters of TPUs) also provide a scalable solution for handling the growing complexity of large foundation models. Training can be distributed across multiple TPU devices for faster and more efficient processing.

So Google doesn't use NVIDIA GPUs at all?

dekhn 2 hours ago | parent | next [-]

When I worked there, there was a mix of training on nvidia GPUs (especially for sparse problems when TPUs weren't as capable), CPUs, and TPUs. I've been gone for a few years but I've heard a few anecdotal statements that some of their researchers have to use nvidia GPUs because the TPUs are busy.

PunchTornado 2 hours ago | parent | prev | next [-]

no. only tpus

paride5745 2 hours ago | parent | prev | next [-]

Another reason to use Gemini then.

Less impact on gamers…

TiredOfLife 2 hours ago | parent [-]

TPUs still use ram and chip production capacity

lejalv 2 hours ago | parent | prev [-]

Bla bla bla yada sustainability yada often come with large better growing faster...

It's such an uninformative piece of marketing crap

the_duke 3 hours ago | parent | prev | next [-]

Gemini 3 is pretty good, even Flash is very smart for certain things, and fast!

BUT it is not good at all at tool calling and agentic workflows, especially compared to the recent two mini-generations of models (Codex 5.2/5.3, the last two versions of Anthropic models), and also fell behind a bit in reasoning.

I hope they manage to improve things on that front, because then Flash would be great for many tasks.

chermi 3 hours ago | parent | next [-]

You can really notice the tool use problems. They gotta get on that. The agent trend seems real, and powerful. They can't afford to fall behind on it.

verdverm 2 hours ago | parent [-]

I don't really have tool-usage issues that I wouldn't file under "doesn't follow system prompt instructions consistently".

There are these times where it puts a prefix on all function calls, which is weird and I think a hallucination, so maybe that one.

3.1 hopefully fixes that

anthonypasq 3 hours ago | parent | prev | next [-]

yeah, it seems to me like Gemini is a little behind on the current RL patterns, and they don't seem interested in creating a dedicated coding model. I think they have so much product surface (Search, AI mode, Gmail, YouTube, Chrome, etc.) that they are prioritizing making the model very general. But who knows, I'm just talking out of my ass.

verdverm 2 hours ago | parent | prev | next [-]

These improvements are one of the things specifically called out on the submitted page

spwa4 3 hours ago | parent | prev [-]

In other words: they just need to motivate their employees while giving in to finance's demands to fire a few thousand every month or so ...

And don't forget, it's not just direct motivation. You can make yourself indispensable by sabotaging or at least not contributing to your colleagues' efforts. Not helping anyone, by the way, is exactly what your managers want you to do. They will decide what happens, thank you very much, and doing anything outside of your org ... well there's a name for that, isn't there? Betrayal, or perhaps death penalty.

tenpoundhammer 2 hours ago | parent | prev | next [-]

In an attempt to get outside of benchmark gaming I had it make a platypus on a tricycle. It's not as good as the pelican on a bicycle. https://www.svgviewer.dev/s/BiRht5hX

textlapse 2 hours ago | parent | next [-]

To really confuse it, ask it to take that tricycle with the platypus on it to a car wash.

dinosor 2 hours ago | parent | prev | next [-]

For a moment I assumed the output would look like Perry the Platypus from the Disney (I think?) show. It's surprising to me (as a layman) that a show with lots of media that would've made it into the training corpus didn't show up.

0_____0 2 hours ago | parent | prev [-]

that's better than i thought it would be

robviren 17 minutes ago | parent | prev | next [-]

I have run into a surprising number of basic syntax errors on this one. At least in the few runs I have tried, it's a swing and a miss. I wonder if the pressure of the Claude release is pushing these stopgap releases.

fdefitte 27 minutes ago | parent | prev | next [-]

The benchmark jumps are impressive but the real question is whether Gemini can stop being so aggressively helpful. Every time I use it for coding it refactors stuff I didn't ask it to touch. Claude has the opposite problem where it sometimes does too little. Feels like nobody has nailed the "do exactly what I asked, nothing more" sweet spot yet.

XCSme an hour ago | parent | prev | next [-]

Gets 10/10 on my potato benchmarks: https://aibenchy.com/model/google-gemini-3-1-pro-preview-med...

XCSme 42 minutes ago | parent [-]

Now I need to write more tests.

It's a bit hard to trick reasoning models, because they explore a lot of the angles of a problem, and they might accidentally have an "a-ha" moment that leads them on the right path. It's a bit like doing random sampling and stumbling upon the right result after doing gradient descent from those points.

0xcb0 43 minutes ago | parent | prev | next [-]

I'm trying to find this information: is the new model available in the Gemini CLI, or can I only use it through the web front-end?

veselin an hour ago | parent | prev | next [-]

I am actually going to complain about this: none of the Gemini 3 models are out of preview.

Anthropic seems the best at this: everything is in the API on day one. OpenAI tends to gate things behind a subscription first, but the API gets there a week or a few later. Now, Gemini 3 is not for production use and is already the previous iteration. So, does Google even intend to release this model?

timabdulla 2 hours ago | parent | prev | next [-]

Google tends to trumpet preview models that aren't actually production-grade. For instance, both 3 Pro and Flash suffer from looping and tool-calling issues.

I would love for them to eliminate these issues because just touting benchmark scores isn't enough.

ArmandoAP 3 hours ago | parent | prev | next [-]

Model Card https://storage.googleapis.com/deepmind-media/Model-Cards/Ge...

dxbednarczyk 3 hours ago | parent | prev | next [-]

Every time I've used Gemini models for anything besides code or agentic work, they lean so far into the RLHF-induced bold lettering and bullet-point list barf that everything they output reads as if the model was talking _at_ me and not _with_ me. In my Openclaw experiment(s) and in the Gemini web UI, I've specifically added instructions to avoid this type of behavior, but it only seemed to obey those rules when I reminded the model of them.

For conversational contexts, I don't think the (in some cases significantly) better benchmark results compared to a model like Sonnet 4.6 can convince me to switch to Gemini 3.1. Has anyone else had a similar experience, or is this just a me issue?

augusto-moura 3 hours ago | parent | next [-]

Gemini sounds less personal, but I think that is good. From my experience, the quality of response is much higher than ChatGPT or Grok, and it cites real sources. I want to have a mini-wikipedia response for my questions, not a friend's group chat response

gavinray 3 hours ago | parent | prev | next [-]

I have the opposite viewpoint:

If a model doesn't optimize the formatting of its output display for readability, I don't want to read it.

Tables, embedded images, bulleted lists, bolding/italicizing, etc.

losvedir an hour ago | parent | prev | next [-]

It definitely has the worst "voice" in my opinion. Feels very overachieving McKinsey intern to me.

staticman2 2 hours ago | parent | prev | next [-]

I'm not familiar with Openclaw, but the trick to solve this would be to embed a style reminder at the bottom of each user message and, ideally, hide it from the user in the UI.

This is how roleplay apps like Sillytavern customize the experience for power users by allowing hidden style reminders as part of the user message that accompany each chat message.
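
Mechanically it's just this (a toy sketch; all names here are hypothetical):

    # Toy sketch of a hidden style reminder appended to each user message.
    STYLE_REMINDER = (
        "\n\n[Style reminder, hidden from the user: answer in plain prose, "
        "no bold headers, no bullet-point lists.]"
    )
    def build_messages(history, user_text):
        # What the model sees: the user's text plus the hidden reminder.
        return history + [{"role": "user", "content": user_text + STYLE_REMINDER}]
    def display_text(user_text):
        # What the chat UI shows: just the user's own words.
        return user_text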

InkCanon 3 hours ago | parent | prev | next [-]

I think they all output that bold lettering, point-by-point style. I strongly suspect it's part of a synthetic data pipeline all these AI companies have, and it improves performance. Claude seems to do it the least, but it will start writing code at the drop of a hat. What annoys me in Gemini is that it has a really strange tendency to come up with weird analogies, especially in Pro mode. You'll be asking it about something like red-black trees and it'll say "Red Black Trees (The F1 of Tree Data Structures)".

hydrolox an hour ago | parent [-]

Yes, the analogy habit is the most annoying of all. The overall formatting would be tolerable for me if it didn't divide up an answer into these silly arbitrary categories with useless analogies. I've tried adding to my user preferences to never use analogies, but it inevitably falls back into that habit.

markab21 3 hours ago | parent | prev | next [-]

You just articulated why I struggle to personally connect with Gemini. It feels so unrelatable and exhausting to read its output. I prefer reading Opus/Deepseek/GLM over Gemini, Qwen and the open-source GPT models. Maybe it is RLHF that is creating my distaste for using it. (I pay for Gemini; I should be using it more... but the outputs just bug me, and it takes more work to get actionable insight.)

verdverm 2 hours ago | parent | prev [-]

I have no issues adjusting gemini tone & style with system prompt content

upmind an hour ago | parent | prev | next [-]

In my experience, while Gemini does really well in benchmarks I find it much worse when I actually use the model. It's too verbose / doesn't follow instructions very well. Let's see if that changes with this model.

solarisos an hour ago | parent | prev | next [-]

The speed of these 3.1 and Preview releases is starting to feel like the early days of web frameworks. It’s becoming less about the raw benchmarks and more about which model handles long-context 'hallucination' well enough to be actually used in a production pipeline without constant babysitting.

panarchy an hour ago | parent | prev | next [-]

I had it make a simple HTML/JS canvas game (think Flappy Bird), and while it did some things mildly better (and others noticeably worse), it still fell into the exact same traps as earlier models. It also had a lot of issues generating valid JS in parts, and asking it what the code should be just made it endlessly regenerate the same incorrect code.

josalhor 3 hours ago | parent | prev | next [-]

I speculated that 3 pro was 3.1... I guess I was wrong. Super impressive numbers here. Good job Google.

refulgentis 3 hours ago | parent [-]

> I speculated that 3 pro was 3.1

?

josalhor 2 hours ago | parent [-]

Sorry... I speculated that 3 deep think is 3.1 pro.. model names are confusing..

pawelduda 3 hours ago | parent | prev | next [-]

It's safe to assume they'll be releasing an improved Gemini Flash soon? The current one is so good and fast I rarely switch to Pro anymore.

derac 2 hours ago | parent | next [-]

When 3 came out they mentioned that flash included many improvements that didn't make it into pro (via an hn comment). I imagine this release includes those.

tucnak an hour ago | parent | prev [-]

Gemini 3 Pro (high) is a joke compared to Gemini 3 Flash in Antigravity, except it's not even funny. Flash is insane value, and super capable, too. I've had it implement a decompiler for very obscure bytecode, and it was passing all tests in no time. PITA to refactor later, but not insurmountable. Gemini 3 Pro (high) choked on this problem in the early stages... I'm looking forward to comparing 3.1 Pro vs 3.0 Flash, hopefully they have improved on it enough to finally switch over.

janalsncm 3 hours ago | parent | prev | next [-]

This model says it accepts video inputs. I asked it to transcribe a 5-second video of a digital water curtain that spelled “Boo Happy Halloween”, and it came back with “Happy”, which wasn't the first frame and is also incomplete.

This kind of test is good because it requires stitching together info from the whole video.

aabhay 2 hours ago | parent [-]

It reads videos at 1 fps by default. You have to set the video resolution to high in AI Studio.
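
Via the API, something along these lines should raise both the sampling rate and the resolution (a sketch with the google-genai Python SDK; treat the exact model name and field names as assumptions if your SDK version differs):

    from google import genai
    from google.genai import types
    client = genai.Client(api_key="...")
    video = client.files.upload(file="water_curtain.mp4")
    resp = client.models.generate_content(
        model="gemini-3.1-pro-preview",  # model name is an assumption
        contents=[
            # Sample at 10 fps instead of the ~1 fps default.
            types.Part(
                file_data=types.FileData(file_uri=video.uri),
                video_metadata=types.VideoMetadata(fps=10),
            ),
            "Transcribe all text shown in this video, in order.",
        ],
        config=types.GenerateContentConfig(
            media_resolution=types.MediaResolution.MEDIA_RESOLUTION_HIGH,
        ),
    )
    print(resp.text)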

mixel 3 hours ago | parent | prev | next [-]

Google seems to really be pulling ahead in this AI race. For me personally they offer the best deal, although the software is not quite there compared to OpenAI or Anthropic (in regards to the web GUI and the agent CLI). I hope they can fix that in the future, and I think once Gemini 4 or whatever launches we will see a huge leap again.

rubslopes 2 hours ago | parent | next [-]

I don't understand this sentiment. It may hold true for other LLM use cases (image generation, creative writing, summarizing large texts), but when it comes to coding specifically, Google is *always* behind OpenAI and Anthropic, despite having virtually infinite processing power, money, and being the ones who started this race in the first place.

Until now, I've only ever used Gemini for coding tests. As long as I have access to GPT models or Sonnet/Opus, I never want to use Gemini. Hell, I even prefer Kimi 2.5 over it. I tried it again last week (Gemini Pro 3.0) and, right at the start of the conversation, it made the same mistake it's been making for years: it said "let me just run this command," and then did nothing.

My sentiment is actually the opposite of yours: how is Google *not* winning this race?

hobofan 2 hours ago | parent [-]

> despite having virtually infinite processing power, money

Just because they have the money doesn't mean that they spend it excessively. OpenAI and Anthropic are both offering coding plans that are possibly severely subsidized, as they are more concerned with growth at all cost, while Google is more concerned with profitability. Google has the bigger warchest and could just wait until the other two run out of money rather than forcing the growth on that product line in unprofitable means.

Maybe they are also running much closer to their compute limits than the other two, and their TPUs are already saturated with API usage.

jeanloolz 11 minutes ago | parent [-]

Agreed. Also worth pointing out that Google still owns 14% of Anthropic, and Anthropic is signing billion-dollar-scale deals with Google Cloud to train their models on TPUs. So Claude's success indirectly contributes to Google's success. The AI race is not only about the frontier models.

eknkc 2 hours ago | parent | prev [-]

I hope they fail.

I honestly do not wish Google to have the best model out there and be forced to use their incomprehensible subscription / billing / project management whatever shit ever again.

I don’t know what their stuff cost. I don’t know why would I use vertex or ai studio. What is included in my subscription what is billed per use.

I pray that whatever they build fails and burns.

otherme123 2 hours ago | parent | next [-]

They all suck. OpenAI ignores scanning limits and disallowed routes in robots.txt; after a 429 "Too Many Requests" they retry the same URL half a dozen times from different IPs within the next couple of minutes, and they once DoS'ed my small VPS trying to do a full scan of sitemaps.xml in less than one hour, retrying whenever any endpoint failed.

Google and the others at least respect both robots.txt and 429s. They invested years scanning the whole internet, so they can now train on what they have stored on their servers. OpenAI seems to assume that MY resources are theirs.

toraway 2 hours ago | parent | prev | next [-]

For a personal plan to use premium Gemini AI features, or for agentic development with Gemini CLI/Antigravity, the billing is no more or less complicated than Claude Code or Codex CLI.

You pay for the $20/mo Google AI Pro plan with a credit card via the normal personal billing flow like you would for a Google One plan without any involvement of Google Cloud billing or AI Studio. Authorize in the client with your account and you're good to go.

(With the bundled Drive storage on AI Pro I'm just paying a few bucks more than I was before, so for me it's my least expensive AI subscription, excluding the ultra-cheap Z.ai plan.)

Or, just like with Anthropic or OpenAI, there's a separate process for billing/credits for an API key targeted at a developer audience. I don't need or use that for Gemini CLI or Antigravity at all; it's a one-step "click the link to authorize with your Google Account" and done.

You could decide to use an API key for usage based billing instead (just like you could with Claude Code) but that's entirely unnecessary with a subscription.

Sure, for the API anything involving a hyperscaler cloud is going to have a higher complexity floor, with legacy cruft here and there, but for individual subscriptions that's irrelevant and it's pretty much as straightforward a click-and-pay flow as you'd find anywhere else.

dybber 2 hours ago | parent | prev [-]

Eventually the models will generally be so good that the competition moves from the best model to the best user experience, and here I think we can expect others to win, e.g. Microsoft with GitHub and VS Code.

eknkc 2 hours ago | parent [-]

That's my hope, but Google has unlimited cash to throw at model development and can basically burn more cash than OpenAI and Anthropic combined. Might tip the scale in the long run.

Murfalo 2 hours ago | parent | prev | next [-]

I like to think that all these pelican riding a bicycle comments are unwittingly iteratively creating the optimal cyclist pelican as these comment threads are inevitably incorporated in every training set.

alpineman 2 hours ago | parent [-]

More like half of Google's AI team is hanging out on HN, and they can optimise for that outcome to get a good rep among the dev community.

kridsdale3 42 minutes ago | parent | next [-]

Hello.

(I'm not aware of anyone doing this, but GDM is quite info-siloed these days, so my lack of knowledge is not evidence it's not happening)

Barbing 2 hours ago | parent | prev [-]

See: fish in bike front basket

BMFXX 36 minutes ago | parent | prev | next [-]

Just wish I could get the 2.5 daily limit above 1000 requests easily. Driving me insane...

pRusya 2 hours ago | parent | prev | next [-]

I'm using gemini.google.com/app with AI Pro subscription. "Something went wrong" in FF, works in Chrome.

Below is one of my test prompts that previous Gemini models were failing. 3.1 Pro did a decent job this time.

> use c++, sdl3. use SDL_AppInit, SDL_AppEvent, SDL_AppIterate callback functions. use SDL_main instead of the default main function. make a basic hello world app.

syspec an hour ago | parent | prev | next [-]

Does anyone know if this is in GA immediately or if it is in preview?

On our end, Gemini 3.0 Preview was very flakey (not model quality, but as in the API responses sometimes errored out), making it unreliable.

Does this mean that 3.0 is now GA at least?

markerbrod 3 hours ago | parent | prev | next [-]

Blogpost: https://blog.google/innovation-and-ai/models-and-research/ge...

impulser_ 3 hours ago | parent | prev | next [-]

Seems like they actually fixed some of the problems with the model. The hallucination rate seems to be much better. Seems like they also tuned the reasoning; maybe that's where they got most of the improvements from.

whynotminot 2 hours ago | parent [-]

The hallucination rate with the Gemini family has always been my problem with them. Over the last year they’ve made a lot of progress catching the Gemini models up to/near the frontier in general capability and intelligence, but they still felt very late 2024 in terms of hallucination rate.

Which made the Gemini models untrustworthy for anything remotely serious, at least in my eyes. If they’ve fixed this or at least significantly improved, that would be a big deal.

SubiculumCode an hour ago | parent [-]

Maybe I haven't kept up with how ChatGPT and Claude are doing, but 6 months ago or so I thought Gemini was leading on that front.

onlyrealcuzzo 3 hours ago | parent | prev | next [-]

We've gone from yearly releases to quarterly releases.

If the pace of releases continues to accelerate, by mid-2027 or 2028 we're headed for weekly releases.

rubicon33 3 hours ago | parent [-]

But actual progress seems to be slower. These models are releasing more often but aren't big leaps.

gallerdude 2 hours ago | parent | next [-]

We used to get one annual release which was 2x as good; now we get quarterly releases which are 25% better. So annually, we're now at about 2.4x better (1.25^4 ≈ 2.44).

minimaxir 2 hours ago | parent | prev | next [-]

Due to the increasing difficulty of scaling up training, the gains are instead being achieved through better model training, which seems to be working well for everyone.

wahnfrieden 2 hours ago | parent | prev [-]

GPT 5.3 (/Codex) was a huge leap over 5.2 for coding

azuanrb 2 hours ago | parent | prev | next [-]

The CLI needs work, or they should officially allow third-party harnesses. Right now, the CLI experience is noticeably behind other SOTA models. It actually works much better when paired with Opencode.

But with accounts reportedly being banned over ToS issues, similar to Claude Code, it feels risky to rely on it in a serious workflow.

yuvalmer an hour ago | parent | prev | next [-]

Gemini 3.0 Pro is a bad model for its class. I really hope 3.1 is a leap forward.

zokier 3 hours ago | parent | prev | next [-]

> Last week, we released a major update to Gemini 3 Deep Think to solve modern challenges across science, research and engineering. Today, we’re releasing the upgraded core intelligence that makes those breakthroughs possible: Gemini 3.1 Pro.

So this is the same but not the same as Gemini 3 Deep Think? Keeping track of these different releases is getting pretty ridiculous.

WarmWash 2 hours ago | parent | next [-]

Deep Think is a few 3.1 models working together. It was suspected last week that Deep Think was composed using the new 3.1 model.

verdverm 2 hours ago | parent | prev [-]

3.1 == model

deep think == turning up thinking knob (I think)

deep research == agent w/ search

hsaliak 3 hours ago | parent | prev | next [-]

The eventual nerfing gives me pause. Flash is awesome. What we really want is gemini-3.1-flash :)

makeavish 3 hours ago | parent | prev | next [-]

Great model until it gets nerfed. I wish they had a higher paid tier to use non nerfed model.

Mond_ 2 hours ago | parent | next [-]

Bad news, John Google told me they already quantized it immediately after the benchmarks were done and it sucks now.

I miss when Gemini 3.1 was good. :(

spyckie2 3 hours ago | parent | prev | next [-]

I think there is a pattern: it always gets nerfed in the few weeks before a new model launches. Probably because they are throwing a bunch of compute at the new model.

makeavish 2 hours ago | parent [-]

Yeah, maybe that, but at least let us know about it, or have dynamic limits? Nerfing breaks trust. Though I am not sure they actually nerf it intentionally; I haven't heard it from any credible source. I did experience it in my workflow, though.

xnx 3 hours ago | parent | prev [-]

What are you talking about?

quacky_batak 3 hours ago | parent | prev | next [-]

I’m keen to know how and where you are using Gemini.

Anthropic is clearly targeted at developers and OpenAI is the general go-to AI model. Who is the target demographic for Gemini models? I know they are good and Flash is super impressive, but I’m curious.

jdc0589 3 hours ago | parent | next [-]

I use it as my main platform right now, both for work/SWE stuff and personal stuff. It works pretty well; they have the full suite of tools I want, from general LLM chat to NotebookLM to Antigravity.

My main use-cases outside of SWE generally involve the ability to compare detailed product specs and come up with answers/comparisons/etc... Gemini does really well for that, probably because of the deeper google search index integration.

Also I got a year of Pro for free with my phone... so that's a big part.

ggregoire an hour ago | parent | prev | next [-]

I use it in Google Search. For example yesterday I typed in Google "postgres generate series 24 hour" and this morning "ffmpeg convert mp4 to wav". Previously I would have clicked on the first StackOverflow result (RIP), now I just take it from the Gemini summary (I'd say 95% of the time it's correct for basic programming language questions. I remember some hallucinations about psycopg3 and date-fns tho. As usual with AI, you need to already know the answer, at least partially, to detect the bs).

Also what's great about Gemini in Google Search is that the answer comes with several links, I use them sometimes to validate the correctness of the solution, or check how old the solution is (I've never used chatGPT so I don't know if chatGPT does it).

hunta2097 3 hours ago | parent | prev | next [-]

I use the Gemini web interface just as I would ChatGPT. They also have coding-environment analogues of Claude Code in Antigravity and Gemini CLI.

When you sign up for the pro tier you also get 2TB of storage, Gemini for workspace and Nest Camera history.

If you're in the Google sphere it offers good value for money.

fatherwavelet 2 hours ago | parent | prev | next [-]

I feel like Gemini 3 was incredible on non-software/coding research. I have learned so much systems biology the last two months it blows my mind.

I had only started using Opus 4.6 this week. Sonnet, it seems, is much better for having a long conversation with. Gemini is good for knowledge retrieval, but I think Opus 4.6 has caught up. The biggest thing that made Gemini worth it for me over the last 3 months is that I could crush it with questions; I wouldn't have gotten even 10% of the use out of Opus that I got from Gemini before being made to slow down.

I have a deep research going right now on 3.1 for the first time and I honestly have no idea how I am going to tell if it is better than 3.

It seems like Gemini wasn't as good at agentic coding, but just asking it to write a function, I think it only failed to one-shot what I asked twice, and then fixed the problem on the next prompt.

I haven't logged in to bother with chatGPT in about 3 months now.

minimaxir 3 hours ago | parent | prev | next [-]

Gemini has an obvious edge over its competitors in one specific area: Google Search. The other LLMs do have a Web Search tool but none of them are as effective.

jug 3 hours ago | parent | prev | next [-]

I personally use it as my general-purpose and coding model. It's good enough for my coding tasks most of the time, has very good and rapid web search grounding that makes the Google index almost feel like part of its training set, and Google has a family sharing plan with individual quotas for Google AI Pro at $20/month for 5 users, which also includes 2 TB in the cloud. Family sharing is a unique feature, with Gemini 3 Flash Thinking at 300 prompts per day per user & Pro at 100 prompts per day per user.

dinosor 3 hours ago | parent | prev | next [-]

I find Gemini to be the best at travel planning and at storytelling about geographical places. For a road trip, I tried all three mainstream providers and I liked Gemini's responses (partly personal preference, because Gemini took a verbose approach instead of the bullet points from the others): the ways it discovered stories about places I wanted to explore, the places it suggested for me, and the things it gave me to consider about including those places in the route.

thornewolf 2 hours ago | parent | prev | next [-]

I have swapped to using Gemini over ChatGPT for casual conversation and question answering. There are some lacking features in the app, but I get faster and more intelligent responses.

dekhn 2 hours ago | parent | prev | next [-]

I am a professional software developer who has been programming for 40 years (C, C++, Python, assembly, any number of other languages). I work in ML (infrastructure, not research) and spent a decade working at Google.

In short, I consider Gemini to be a highly capable intern (grad student level) who is smarter and more tenacious than me, but also needs significant guidance to reach a useful goal.

I used Gemini to completely replace the software stack I wrote for my self-built microscope. That includes:

writing a brand new ESP32 console application for controlling all the pins of my ESP32 that drives the LED illuminator. It wrote the entire ESP-IDF project and did not make any major errors. I had to guide with updated prompts a few times but otherwise it wrote the entire project from scratch and ran all the build commands, fixing errors along the way. It also easily made a Python shared library so I can just import this object in my Python code. It saved me ~2-3 days of working through all the ESP-IDF details, and did a better job than I would have.

writing a brand new C++-based Qt camera interface (I have a camera with a special SDK that allows controlling strobe, trigger and other details. It can do 500FPS). It handled all the concurrency and message-passing details. I just gave it the SDK PDF documentation for the camera (in mixed English/Chinese) and asked it to generate an entire project. I had to spend some time guiding it around making shared libraries, but otherwise it wrote the entire project from scratch and I was able to use it to make a GUI to control the camera settings with no additional effort. It ran all the build commands and fixed errors along the way. Saved me another 2-3 days and did a better job than I could have.

Finally, I had it rewrite the entire microscope stack (Python with Qt) using the two drivers I described above, along with complex functionality like compositing multiple images during scanning, video recording during scanning, measurement tools, computer vision support, and a number of other features. This involved a lot more testing on my part, and updating prompts to guide it towards my intended destination (a fully functional replacement of my original self-written prototype). When I inspect the code, it definitely did a good job on some parts, while it came up with non-ideal solutions for some problems (for example, it does polling when it could use event-driven callbacks). This saved literally weeks worth of work that would have been a very tedious slog.

From my perspective, it's worked extremely well: doing what I wanted in less time than it would take me (I am a bit of a slow programmer, and I'm doing this in hobby time) and doing a better job (with appropriate guidance) than I could have (even if I'd had a lot of time to work on it). This greatly enhances my enjoyment of my hobby by doing tedious work, allowing me to spend more time on the interesting problems (tracking tardigrades across a petri dish for hours at a time). I used Gemini 3 Pro for this; it seems to do better than 2.5, and Flash seemed to get stuck and loop more quickly.

I have only lightly used other tools, such as ChatGPT/Codex, and have never used Claude. I tend to stick to the Google ecosystem for several reasons, but mainly, I think they will end up exceeding the capabilities of their competitors due to their inherent engineering talent and huge computational resources. But they clearly need to catch up in a lot of areas; for example, the VS Code Gemini extension has serious problems (frequent API call errors, messed-up formatting of code/text, infinite loops, etc.).

aberoham 2 hours ago | parent [-]

Wow, you have to try claude code with Opus-4.6..

dekhn an hour ago | parent [-]

I agree, but I don't have a subscription.

The remaining technical challenge I have is related to stage positioning- in my system, it's important that all the image frames we collect are tagged with the correct positions. Due to some technical challenges, right now the stage positions are slightly out of sync with the frames, which will be a fairly tricky problem to solve. It's certainly worth trying all the major systems to see what they propose.
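
The likely shape of the fix, sketched hypothetically: stamp both streams against the same monotonic clock, then tag each frame with the nearest stage sample after the fact, correcting for any fixed acquisition latency (names and data shapes here are illustrative, not my actual code):

    import bisect
    def tag_frames(frame_ts, stage_ts, stage_xy, latency=0.0):
        # frame_ts / stage_ts: timestamps from the same monotonic clock.
        # latency: known fixed delay between exposure and frame readout.
        tagged = []
        for t in frame_ts:
            t -= latency
            i = bisect.bisect_left(stage_ts, t)
            if i == 0:
                j = 0
            elif i == len(stage_ts):
                j = len(stage_ts) - 1
            else:
                # Pick whichever neighbouring sample is closer in time.
                j = i if stage_ts[i] - t < t - stage_ts[i - 1] else i - 1
            tagged.append(stage_xy[j])
        return tagged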

epolanski 3 hours ago | parent | prev | next [-]

Various friends of mine work in non-technology companies (banking, industries, legal, Italy) and in pretty much all of them there's Gemini enterprise + NotebookLM.

In all of them the approach is: this is the solution, now find problems you can apply it to.

mehagar 2 hours ago | parent | prev | next [-]

I use Gemini for personal stuff such as travel planning and research on how to fix something, which product to buy, etc. My company has a Pro subscription, so I use that instead of ChatGPT.

esafak 3 hours ago | parent | prev | next [-]

I'd use it for planning, knowledge, and anything visual.

verdverm 2 hours ago | parent | prev [-]

I use gemini for everything because I trust google to keep the data I send them safe, because they know how to run prod at scale, and they are more environmentally friendly than everyone else (tpu,us-central1).

This includes my custom agent / copilot / cowork (which uses Vertex AI and all the models therein). This is where I do more of my searching now (with genAI grounding). I'm about to work on several micro projects that will use AI a little differently.

All that being said, Google AI products suck hard. I hate using every one of them. This is more a reflection of the continued degradation of PM/design at Big G, which predates AI but has accelerated since. I support removing Logan from the head of this shit show.

disclaimer: long time g-stan, not so stan any more

seizethecheese 3 hours ago | parent | prev | next [-]

I use Gemini Flash Lite in a side project, and it's stuck on 2.5; the next version is now well behind schedule. Any speculation as to what's going on?

foruhar 2 hours ago | parent [-]

Gemini-3.0-flash-preview came out right away with the 3.0 release, and I was expecting 3.0-flash-lite before a bump to the Pro model. I wonder if they have abandoned that part of the Pareto price-performance frontier.

pickle-pixel 21 minutes ago | parent | prev | next [-]

Does it still crash out after a couple of prompts?

eric15342335 3 hours ago | parent | prev | next [-]

My first impression is that the model sounds slightly more human and a little more prone to praise. Still comparing its abilities.

1024core 2 hours ago | parent | prev | next [-]

It's been hugged to death. I keep getting "Something went wrong".

trilogic 2 hours ago | parent | prev | next [-]

Humanity's Last Exam 44%, SciCode 59, this one 80, that one 78, but never 100%.

Would be nice to see one of these models, Plus, Pro, Super, God mode, hit even one benchmark at 100%. Am I missing something here?

lysecret an hour ago | parent | prev | next [-]

Please, I need 3 in GA…

matrix2596 3 hours ago | parent | prev | next [-]

Gemini 3.1 Pro is based on Gemini 3 Pro

skerit 3 hours ago | parent [-]

Lol, and this line:

> Geminin 3.1 Pro can comprehend vast datasets

Someone was in a hurry to get this out the door.

LZ_Khan 3 hours ago | parent | prev | next [-]

Biggest problem is that it's slow. Also, safety seems overtuned at the moment; I'm getting some really silly refusals. Everything else is pretty good.

PunchTornado 3 hours ago | parent | prev | next [-]

The biggest increase is LiveCodeBench Pro: 2887. The rest are in line with Opus 4.6 or slightly better or slightly worse.

shmoogy 3 hours ago | parent [-]

but is it still terrible at tool calls in actual agentic flows?

naiv 3 hours ago | parent | prev | next [-]

OK, so they are scared that 5.3 (Pro) will be released today or tomorrow and blow it out of the water, and they rushed this out while they could still reference 5.2 benchmarks.

PunchTornado 3 hours ago | parent [-]

I don't think models blow other models anymore. We have the big 3, which are neck and neck in most benchmarks, and then the rest. I doubt that 5.3 will blow the others.

scld 3 hours ago | parent [-]

easy now

mustaphah 3 hours ago | parent | prev | next [-]

Google is terrible at marketing, but this feels like a big step forward.

As per the announcement, Gemini 3.1 Pro scores 68.5% on Terminal-Bench 2.0, which makes it the top performer on the Terminus 2 harness [1]. That harness is a "neutral agent scaffold" built by the Terminal-Bench researchers to compare different LLMs in the same standardized setup (same tools, prompts, etc.).

It's also taken the top model spot on both the Intelligence Index and the Coding Index of Artificial Analysis [2], but on their Agentic Index it's still lagging behind Opus 4.6, GLM-5, Sonnet 4.6, and GPT-5.2.

---

[1] https://www.tbench.ai/leaderboard/terminal-bench/2.0?agents=...

[2] https://artificialanalysis.ai

saberience 3 hours ago | parent [-]

Benchmarks aren't everything.

Gemini consistently has the best benchmarks but the worst actual real-world results.

Every time they announce the best benchmarks I try again at using their tools and products and each time I immediately go back to Claude and Codex models because Google is just so terrible at building actual products.

They are good at research and benchmaxxing, but the day to day usage of the products and tools is horrible.

Try using Google Antigravity and you will not make it an hour before switching back to Codex or Claude Code, it's so incredibly shitty.

mustaphah 3 hours ago | parent | next [-]

That's been my experience too; can't disagree. Still, when it comes to tasks that require deep intelligence (esp. mathematical reasoning [1]), Gemini has consistently been the best.

[1] https://arxiv.org/abs/2602.10177

gregorygoc 3 hours ago | parent | prev [-]

What’s so shitty about it?

leecommamichael an hour ago | parent | prev | next [-]

Whoa, I think Gemini 3 Pro was a disappointment, but Gemini 3.1 Pro is definitely the future!

nautilus12 2 hours ago | parent | prev | next [-]

Ok, why don't you work on getting 3.0 out of preview first? 10 min response time is pretty heinous

mucai82 2 hours ago | parent [-]

I agree. According to Google's terms you are not allowed to use the preview model for production use cases, and 3.0 has been in preview for a loooong time now :(

jeffbee 3 hours ago | parent | prev | next [-]

Relatedly, Gemini chat seems to be if not down then extremely slow.

ETA: They apparently wiped out everyone's chats (including mine). "Our engineering team has identified a background process that was causing the missing user conversation metadata and has successfully stopped the process to prevent further impact." El Mao.

sergiotapia 3 hours ago | parent | prev | next [-]

To use in OpenCode, you can update the models it has:

    opencode models --refresh
Then /models and choose Gemini 3.1 Pro

You can use the model through OpenCode Zen right away and avoid that Google UI craziness.

---

It is quite pricey! Good speed and nailed all my tasks so far. For example:

    @app-api/app/controllers/api/availability_controller.rb 
    @.claude/skills/healthie/SKILL.md 

    Find Alex's id, and add him to the block list, leave a comment 
    that he has churned and left the company. we can't disable him 
    properly on the Healthie EMR for now so 
    this dumb block will be added as a quick fix.
Result was:

    29,392 tokens
    $0.27 spent
So relatively small task, hitting an API, using one of my skills, but a quarter. Pricey!

gbalduzzi 2 hours ago | parent [-]

I don't see it even after refresh. Are you using the opencode-gemini-auth plugin as well?

sergiotapia 2 hours ago | parent [-]

No, I am not, just vanilla OpenCode. I do have OpenCode Zen credits, and I did `opencode login` or whatever their command is to auth against OpenCode itself. Maybe that's why I see these premium models.

himata4113 2 hours ago | parent | prev | next [-]

The visual capabilities of this model are frankly kind of ridiculous, what the hell.

dude250711 3 hours ago | parent | prev | next [-]

I hereby allow you to release models not at the same time as your competitors.

sigmar 3 hours ago | parent [-]

It is super interesting that this is the same thing that happened in November (i.e. all the labs shipping around the same week, 11/12-11/23).

zozbot234 2 hours ago | parent [-]

They're just throwing a big Chinese New Year celebration.

johnwheeler 2 hours ago | parent | prev | next [-]

I know Google has Antigravity, but do they have anything like Claude Code as far as user interface goes, a terminal-based TUI basically?

alooPotato 2 hours ago | parent [-]

https://github.com/google-gemini/gemini-cli

johnwheeler 2 hours ago | parent [-]

Thanks!!

ChrisArchitect 3 hours ago | parent | prev | next [-]

Blog post: https://blog.google/innovation-and-ai/models-and-research/ge...

throwaw12 2 hours ago | parent | prev | next [-]

Can we switch from Claude Code to Google yet?

Benchmarks are saying: just try

But real world could be different

foruhar an hour ago | parent [-]

My sense is that the Gemini models are very capable but the Gemini CLI experience is subpar compared to Claude Code and Codex. My guess is that it's the harness, since it can get confused, fall into doom loops, and generally lose the plot in a way that the model does not in AI Studio or the Gemini app.

I think a bunch of these harnesses are open source so it surprises me that there can be such a gulf between them.

cmrdporcupine 41 minutes ago | parent [-]

It's not just the tooling. If you use Gemini in opencode it malfunctions in similar ways.

I haven't tried 3.1 yet, but 3 is just incompetent at tool use. In particular in editing chunks of text in files, it gets very confused and goes into loops.

The model also does this thing where it degrades into loops of nonsense thought patterns over time.

For shorter sessions where it's more analysis than execution, it is a strong model.

We'll see about 3.1. I don't know why it's not showing in my gemini CLI as available yet.

jcims 3 hours ago | parent | prev [-]

Pelican on a bicycle in drawio - https://imgur.com/a/tNgITTR

(FWIW I'm finding a lot of utility in LLMs doing diagrams in tools like drawio)

pqdbr 2 hours ago | parent [-]

How are you prompting it to draw diagrams in drawio?

jcims an hour ago | parent | next [-]

Sometimes it helps to also provide a drawio file that has the elements you want (e.g. cloud service icons or whatever), but mostly you just feed it the content you want diagrammed and let it eat.

Even if it's not completely correct, it usually creates something that's much closer to complete than a blank page.

ac29 2 hours ago | parent | prev [-]

Drawio drawings are just XML, so it's possible it can generate that directly.
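
A minimal drawio file is just an mxGraphModel wrapped in mxfile/diagram tags, so a script or an LLM can emit it directly. Illustrative one-box example (the geometry and style strings are made up):

    # Illustrative: write a one-box drawio diagram by hand; the
    # mxCell/mxGeometry structure is the core of the format.
    DRAWIO_XML = """\
    <mxfile>
      <diagram name="Page-1">
        <mxGraphModel dx="800" dy="600" grid="1">
          <root>
            <mxCell id="0"/>
            <mxCell id="1" parent="0"/>
            <mxCell id="2" value="Pelican" style="rounded=1;" vertex="1" parent="1">
              <mxGeometry x="120" y="80" width="160" height="60" as="geometry"/>
            </mxCell>
          </root>
        </mxGraphModel>
      </diagram>
    </mxfile>
    """
    with open("pelican.drawio", "w") as f:
        f.write(DRAWIO_XML)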

riku_iki 40 minutes ago | parent [-]

Hopefully OP will answer whether that's what he is doing.