| ▲ | Gemini 3 Pro Model Card(pixeldrain.com) |
| 419 points by Topfi 8 hours ago | 269 comments |
| |
|
| ▲ | scrlk 8 hours ago | parent | next [-] |
Benchmarks from page 4 of the model card:

| Benchmark | 3 Pro | 2.5 Pro | Sonnet 4.5 | GPT-5.1 |
|-----------------------|-----------|---------|------------|-----------|
| Humanity's Last Exam | 37.5% | 21.6% | 13.7% | 26.5% |
| ARC-AGI-2 | 31.1% | 4.9% | 13.6% | 17.6% |
| GPQA Diamond | 91.9% | 86.4% | 83.4% | 88.1% |
| AIME 2025 | | | | |
| (no tools) | 95.0% | 88.0% | 87.0% | 94.0% |
| (code execution) | 100% | - | 100% | - |
| MathArena Apex | 23.4% | 0.5% | 1.6% | 1.0% |
| MMMU-Pro | 81.0% | 68.0% | 68.0% | 80.8% |
| ScreenSpot-Pro | 72.7% | 11.4% | 36.2% | 3.5% |
| CharXiv Reasoning | 81.4% | 69.6% | 68.5% | 69.5% |
| OmniDocBench 1.5 | 0.115 | 0.145 | 0.145 | 0.147 |
| Video-MMMU | 87.6% | 83.6% | 77.8% | 80.4% |
| LiveCodeBench Pro | 2,439 | 1,775 | 1,418 | 2,243 |
| Terminal-Bench 2.0 | 54.2% | 32.6% | 42.8% | 47.6% |
| SWE-Bench Verified | 76.2% | 59.6% | 77.2% | 76.3% |
| t2-bench | 85.4% | 54.9% | 84.7% | 80.2% |
| Vending-Bench 2 | $5,478.16 | $573.64 | $3,838.74 | $1,473.43 |
| FACTS Benchmark Suite | 70.5% | 63.4% | 50.4% | 50.8% |
| SimpleQA Verified | 72.1% | 54.5% | 29.3% | 34.9% |
| MMLU | 91.8% | 89.5% | 89.1% | 91.0% |
| Global PIQA | 93.4% | 91.5% | 90.1% | 90.9% |
| MRCR v2 (8-needle) | | | | |
| (128k avg) | 77.0% | 58.0% | 47.1% | 61.6% |
| (1M pointwise) | 26.3% | 16.4% | n/s | n/s |
n/s = not supported

EDIT: formatting, hopefully a bit more mobile friendly |
| |
| ▲ | spoaceman7777 6 hours ago | parent | next [-] | | Wow. They must have had some major breakthrough. Those scores are truly insane. O_O Models have begun to fairly thoroughly saturate "knowledge" and such, but there are still considerable bumps there. But the _big news_, and the demonstration of their achievement here, are the incredible scores they've racked up for what's necessary for agentic AI to become widely deployable. t2-bench. Visual comprehension. Computer use. Vending-Bench. The sorts of things that are necessary for AI to move beyond an auto-researching tool, and into the realm where it can actually handle complex tasks in the way that businesses need in order to reap rewards from deploying AI tech. Will be very interesting to see what papers are published as a result of this, as they have _clearly_ tapped into some new avenues for training models. And here I was, all wowed, after playing with Grok 4.1 for the past few hours! xD | | |
| ▲ | rvnx 5 hours ago | parent [-] | | The problem is that we know the benchmark in advance. With Humanity's Last Exam, for example, it's way easier to optimize your model when you have seen the questions before. | | |
| ▲ | pinko 2 hours ago | parent | next [-] | | From https://lastexam.ai/: "The dataset consists of 2,500 challenging questions across over a hundred subjects. We publicly release these questions, while maintaining a private test set of held out questions to assess model overfitting." [emphasis mine] While the private questions don't seem to be included in the performance results, HLE will presumably flag any LLM that appears to have gamed its scores based on the differential performance on the private questions. Since they haven't yet, I think the scores are relatively trustworthy. | | |
| ▲ | panarky an hour ago | parent | next [-] | | The jump in ARC-AGI and MathArena suggests Google has solved the data scarcity problem for reasoning, maybe with synthetic data self-play?? This was the primary bottleneck preventing models from tackling novel scientific problems they haven't seen before. If Gemini 3 Pro has transcended "reading the internet" (knowledge saturation), and made huge progress in "thinking about the internet" (reasoning scaling), then this is a really big deal. | |
| ▲ | rvnx an hour ago | parent | prev [-] | | Seems difficult to believe, considering how many of the people who prepared this dataset also work(ed) at, or hold shares in, Google or OpenAI, etc. |
| |
| ▲ | stego-tech 4 hours ago | parent | prev | next [-] | | This. A lot of boosters point to benchmarks as justification of their claims, but any gamer who spent time in the benchmark trenches will know full well that vendors game known tests for better scores, and that said scores aren’t necessarily indicative of superior performance. There’s not a doubt in my mind that AI companies are doing the same. | |
| ▲ | Feuilles_Mortes 4 hours ago | parent | prev | next [-] | | shouldn't we expect that all of the companies are doing this optimization, though? so, back to level playing field. | |
| ▲ | eldenring 3 hours ago | parent | prev [-] | | Its the other way around too, HLE questions were selected adversarially to reduce the scores. I'd guess even if the questions were never released, and new training data was introduced, the scores would improve. |
|
| |
| ▲ | Alifatisk 7 hours ago | parent | prev | next [-] | | These numbers are impressive, to say the least. It looks like Google has produced a beast that will raise the bar even higher. What's even more impressive is how Google came into this game late and went from producing a few flops to being the leader at this (actually, they already achieved the title with 2.5 Pro). What makes me even more curious is the following > Model dependencies: This model is not a modification or a fine-tune of a prior model So did they start from scratch with this one? | | |
| ▲ | postalcoder 7 hours ago | parent | next [-] | | Google was never really late. Where people perceived Google to have dropped the ball was in its productization of AI. Google's Bard branding stumble was so (hilariously) bad that it threw a lot of people off the scent. My hunch is that, aside from "safety" reasons, the Google Books lawsuit left some copyright wounds that Google did not want to reopen. | | |
| ▲ | amluto 5 hours ago | parent | next [-] | | Google’s productization is still rather poor. If I want to use OpenAI’s models, I go to their website, look up the price and pay it. For Google’s, I need to figure out whether I want AI Studio or Google Cloud Code Assist or AI Ultra, etc, and if this is for commercial use where I need to prevent Google from training on my data, figuring out which options work is extra complicated. As of a couple weeks ago (the last time I checked) if you are signed in to multiple Google accounts and you cannot accept the non-commercial terms for one of them for AI Studio, the site is horribly broken (the text showing which account they’re asking you to agree to the terms for is blurred, and you can’t switch accounts without agreeing first). In Google’s very slight defense, Anthropic hasn’t even tried to make a proper sign in system. | | |
| ▲ | PrairieFire 5 hours ago | parent [-] | | Not to mention no macOS app. This is probably unimportant to many in the hn audience, but more broadly it matters for your average knowledge worker. | | |
| ▲ | perardi 3 hours ago | parent [-] | | And a REALLY good macOS app. Like, kind of unreasonably good. You’d expect some perfunctory Electron app that just barely wraps the website. But no, you get something that feels incredibly polished…more so than a lot of recent apps from Apple…and has powerful integrations into other apps, including text editors and terminals. | | |
|
| |
| ▲ | Alifatisk 7 hours ago | parent | prev | next [-] | | Oh, I remember the times when I compared Gemini with ChatGPT and Claude. Gemini was so far behind, it was barely usable. And now they are pushing the boundaries. | | |
| ▲ | postalcoder 7 hours ago | parent [-] | | You could argue that chat-tuning of models falls more along the lines of product competence. I don't think there was a doubt about the upper ceiling of what people thought Google could produce.. more "when will they turn on the tap" and "can Pichai be the wartime general to lead them?" |
| |
| ▲ | dgacmu 7 hours ago | parent | prev | next [-] | | The memory of Microsoft's Tay fiasco was strong around the time the brain team started playing with chatbots. | | |
| ▲ | Workaccount2 4 hours ago | parent [-] | | Google was catastrophically traumatized throughout the org when they had that photos AI mislabel black people as gorillas. They turned the safety and caution knobs up to 12 after that for years, really until OpenAI came along and ate their lunch. | | |
| ▲ | Miraste 3 hours ago | parent [-] | | It still haunts them. Even in the brand-new Gemini-based rework of Photos search and image recognition, "gorilla" is a completely blacklisted word. |
|
| |
| ▲ | baq 5 hours ago | parent | prev | next [-] | | oh they were so late there were internal leaked ('leaked'?) memos about a couple grad students with $100 budget outdoing their lab a couple years ago. they picked themselves up real nice, but it took a serious reorg. | |
| ▲ | HardCodedBias 4 hours ago | parent | prev [-] | | Bard was horrible compared to the competition of the time. Gemini 1.0 was strictly worse than GPT-3.5 and was unusable due to "safety" features. Google followed that up with 1.5 which was still worse than GPT-3.5 and unbelievably far behind GPT-4. At this same time Google had their "black nazi" scandals. With Gemini 2.0, Google finally had a model that was at least useful for OCR, and with their Flash series a model that, while not up to par in capabilities, was sufficiently inexpensive that it found uses. Only with Gemini-2.5 did Google catch up with SoTA. It was within "spitting distance" of the leading models. Google did indeed drop the ball, very, very badly. I suspect that Sergey coming back helped immensely, somehow. I suspect that he was able to tame some of the more dysfunctional elements of Google, at least for a time. |
| |
| ▲ | basch 7 hours ago | parent | prev | next [-] | | At least at the moment, coming in late seems to matter little. Anyone with money can trivially catch up to a state of the art model from six months ago. And as others have said, late is really a function of spigot, guardrails, branding, and ux, as much as it is being a laggard under the hood. | | |
| ▲ | FrequentLurker 7 hours ago | parent | next [-] | | > Anyone with money can trivially catch up to a state of the art model from six months ago. How come apple is struggling then? | | |
| ▲ | doctoboggan 3 hours ago | parent | next [-] | | Apple is struggling with _productizing_ LLMs for the mass market, which is a separate task from training a frontier LLM. To be fair to Apple, so far the only mass market LLM use case is just a simple chatbot, and they don't seem to be interested in that. It remains to be seen if what Apple wants to do ("private" LLMs with access to your personal context acting as intimate personal assistants) is even possible to do reliably. It sounds useful, and I do believe it will eventually be possible, but no one is there yet. They did botch the launch by announcing the Apple Intelligence features before they were ready though. | |
| ▲ | svnt an hour ago | parent | prev | next [-] | | Anyone with enough money and without an entrenched management hierarchy preventing the right people from being hired and enabled to run the project. | |
| ▲ | risyachka 7 hours ago | parent | prev | next [-] | | It looks more like a strategic decision tbh. They may want to use a 3rd party, or just wait for AI to be more stable and see how people actually use it, instead of adding slop to the core of their product. | |
| ▲ | stevesimmons 6 hours ago | parent | next [-] | | In contrast to Microsoft, who puts Copilot buttons everywhere and succeeds only in annoying their customers. | |
| ▲ | remus 5 hours ago | parent | prev | next [-] | | > It looks more like a strategic decision tbh. Announcing a load of AI features on stage and then failing to deliver them doesn't feel very strategic. | |
| ▲ | FrequentLurker 5 hours ago | parent | prev | next [-] | | But apple intelligence is a thing, and they are struggling to deliver on the promises of apple intelligence. | |
| ▲ | bitpush 5 hours ago | parent | prev [-] | | This is revisionist history. Apple wanted to fully jump in. They even rebranded AI as Apple Intelligence and announced a hoard of features which turned out to be vaporware. |
| |
| ▲ | basch 5 hours ago | parent | prev [-] | | Sit and wait per usual. Enter late, enter great. |
| |
| ▲ | steveBK123 3 hours ago | parent | prev | next [-] | | One possibility here is that Google is dribbling out cutting edge releases to slowly bleed out the pure play competition. | |
| ▲ | raincole 6 hours ago | parent | prev [-] | | Being known as a company that is always six months behind the competitors isn't something to brag about... | |
| |
| ▲ | theptip 5 hours ago | parent | prev | next [-] | | > So did they start from scratch with this one Their major version number bumps are a new pre-trained model. Minor bumps are changes/improvements to post-training on the same foundation. | |
| ▲ | KronisLV 6 hours ago | parent | prev | next [-] | | I hope they keep the pricing similar to 2.5 Pro. Currently I pay per token, and that and GPT-5 are close to the sweet spot for me, but Sonnet 4.5 feels too expensive for larger changes. I've also been moving around 100M tokens per week with Cerebras Code (they moved to GLM 4.6), but the flagship models still feel better when I need help with more advanced debugging or some exemplary refactoring to then feed as an example for a dumber/faster model. | |
| ▲ | dbbk 6 hours ago | parent | prev | next [-] | | And also, critically, being the only profitable company doing this. | | |
| ▲ | sigmoid10 6 hours ago | parent [-] | | It's not like they're making their money from this though. All AI work is heavily subsidised, for Alphabet it just happens that the funding comes from within the megacorp. If MS had fully absorbed OpenAI back when their board nearly sunk the boat, they'd be in the exact same situation today. | | |
| ▲ | Miraste 2 hours ago | parent [-] | | They're not making money, but they're in a much better situation than Microsoft/OpenAI because of TPUs. TPUs are much cheaper than Nvidia cards both to purchase and to operate, so Google's AI efforts aren't running at as much of a loss as everyone else. That's why they can do things like offer Gemini 3 Pro for free. |
|
| |
| ▲ | benob 7 hours ago | parent | prev [-] | | What does it mean nowadays to start from scratch? At least in the open scene, most of the post-training data is generated by other LLMs. | | |
| |
| ▲ | falcor84 7 hours ago | parent | prev | next [-] | | That looks impressive, but some of these numbers are a bit out of date. On Terminal-Bench 2 for example, the leader is currently "Codex CLI (GPT-5.1-Codex)" at 57.8%, beating this new release. | | |
| ▲ | NitpickLawyer 6 hours ago | parent | next [-] | | What's more impressive is that I find gemini2.5 still relevant in day-to-day usage, despite being so low on those benchmarks compared to claude 4.5 and gpt 5.1. There's something that gemini has that makes it a great model in real cases, I'd call it generalisation on its context or something. If you give it the proper context (or it digs through the files in its own agent) it comes up with great solutions. Even if their own coding thing is hit and miss sometimes. I can't wait to try 3.0, hopefully it continues this trend. Raw numbers in a table don't mean much, you can only get a true feeling once you use it on existing code, in existing projects. Anyway, the top labs keeping each other honest is great for us, the consumers. | |
| ▲ | Miraste 2 hours ago | parent [-] | | I've noticed that too. I suspect it has broader general knowledge than the others, because Google presumably has the broadest training set. |
| |
| ▲ | sigmar 7 hours ago | parent | prev [-] | | That's a different model not in the chart. They're not going to include hundreds of fine tunes in a chart like this. | | |
| ▲ | Taek 6 hours ago | parent | next [-] | | It's also worth pointing out that comparing a fine-tune to a base model is not apples-to-apples. For example, I have to imagine that the codex finetune of 5.1 is measurably worse at non-coding tasks than the 5.1 base model. This chart (comparing base models to base models) probably gives a better idea of the total strength of each model. | |
| ▲ | falcor84 6 hours ago | parent | prev [-] | | It's not just one of many fine tunes; it's the default model used by OpenAI's official tools. |
|
| |
| ▲ | scrollop 5 hours ago | parent | prev | next [-] | | Used an AI to populate some of 5.1 thinking's results.

Benchmark | Gemini 3 Pro | Gemini 2.5 Pro | Claude Sonnet 4.5 | GPT-5.1 | GPT-5.1 Thinking
----------------------|--------------|----------------|-------------------|-----------|-----------------
Humanity's Last Exam | 37.5% | 21.6% | 13.7% | 26.5% | 52%
ARC-AGI-2 | 31.1% | 4.9% | 13.6% | 17.6% | 28%
GPQA Diamond | 91.9% | 86.4% | 83.4% | 88.1% | 61%
AIME 2025 | 95.0% | 88.0% | 87.0% | 94.0% | 48%
MathArena Apex | 23.4% | 0.5% | 1.6% | 1.0% | 82%
MMMU-Pro | 81.0% | 68.0% | 68.0% | 80.8% | 76%
ScreenSpot-Pro | 72.7% | 11.4% | 36.2% | 3.5% | 55%
CharXiv Reasoning | 81.4% | 69.6% | 68.5% | 69.5% | N/A
OmniDocBench 1.5 | 0.115 | 0.145 | 0.145 | 0.147 | N/A
Video-MMMU | 87.6% | 83.6% | 77.8% | 80.4% | N/A
LiveCodeBench Pro | 2,439 | 1,775 | 1,418 | 2,243 | N/A
Terminal-Bench 2.0 | 54.2% | 32.6% | 42.8% | 47.6% | N/A
SWE-Bench Verified | 76.2% | 59.6% | 77.2% | 76.3% | N/A
t2-bench | 85.4% | 54.9% | 84.7% | 80.2% | N/A
Vending-Bench 2 | $5,478.16 | $573.64 | $3,838.74 | $1,473.43 | N/A
FACTS Benchmark Suite | 70.5% | 63.4% | 50.4% | 50.8% | N/A
SimpleQA Verified | 72.1% | 54.5% | 29.3% | 34.9% | N/A
MMLU | 91.8% | 89.5% | 89.1% | 91.0% | N/A
Global PIQA | 93.4% | 91.5% | 90.1% | 90.9% | N/A
MRCR v2 (8-needle) | 77.0% | 58.0% | 47.1% | 61.6% | N/A

Argh, it doesn't come out right in HN. | | |
| ▲ | scrollop 5 hours ago | parent | next [-] | | Used an AI to populate some of 5.1 thinking's results.

Benchmark | Description | Gemini 3 Pro | GPT-5.1 (Thinking) | Notes
----------------------|----------------------|--------------|--------------------|------
Humanity's Last Exam | Academic reasoning | 37.5% | 52% | GPT-5.1 shows 7% gain over GPT-5's 45%
ARC-AGI-2 | Visual abstraction | 31.1% | 28% | GPT-5.1 multimodal improves grid reasoning
GPQA Diamond | PhD-tier Q&A | 91.9% | 61% | GPT-5.1 strong in physics (72%)
AIME 2025 | Olympiad math | 95.0% | 48% | GPT-5.1 solves 7/15 proofs correctly
MathArena Apex | Competition math | 23.4% | 82% | GPT-5.1 handles 90% advanced calculus
MMMU-Pro | Multimodal reasoning | 81.0% | 76% | GPT-5.1 excels visual math (85%)
ScreenSpot-Pro | UI understanding | 72.7% | 55% | Element detection 70%, navigation 40%
CharXiv Reasoning | Chart analysis | 81.4% | 69.5% | N/A |
| ▲ | iamdelirium 3 hours ago | parent | prev | next [-] | | This is provably false. All it takes is a simple Google search and looking at the ARC AGI 2 leaderboard: https://arcprize.org/leaderboard The 17.6% is for 5.1 Thinking High. | |
| ▲ | HardCodedBias 4 hours ago | parent | prev [-] | | What? The 4.5 and 5.1 columns aren't the thinking variants in Google's report? That's a scandal, IMO. Given that Gemini-3 seems to do "fine" against the thinking versions, why didn't they post those results? I get that PMs like to make a splash, but that's shockingly dishonest. | | |
| |
| ▲ | danielcampos93 6 hours ago | parent | prev | next [-] | | I would love to know what the increased token count is across these models for the benchmarks. I find the models continue to get better, but as they do, their token usage also grows. Aka, is the model doing better, or just reasoning for longer? | |
| ▲ | jstummbillig 6 hours ago | parent [-] | | I think that is always something that is being worked on in parallel. Recent paradigm seems to be the models understanding when they need to use more tokens dynamically (which seems to be very much in line with how computation should generally work). |
| |
| ▲ | vagab0nd 6 hours ago | parent | prev | next [-] | | Should I assume the GPT-5.1 it is compared against is the pro version? | |
| ▲ | trunch 6 hours ago | parent | prev | next [-] | | Which of the LiveCodeBench Pro and SWE-Bench Verified benchmarks comes closer to everyday coding assistant tasks? Because it seems to lead by a decent margin on the former and trails behind on the latter | | |
| ▲ | veselin 5 hours ago | parent | next [-] | | I work a lot on testing also SWE bench verified. This benchmark in my opinion now is good to catch if you got some regression on the agent side. However, going above 75%, it is likely about the same. The remaining instances are likely underspecified despite the effort of the authors that made the benchmark "verified". From what I have seen, these are often cases where the problem statement says implement X for Y, but the agent has to simply guess whether to implement the same for other case Y' - which leads to losing or winning an instance. | |
| ▲ | Snuggly73 6 hours ago | parent | prev [-] | | Neither :( LCB Pro are leet code style questions and SWE bench verified is heavily benchmaxxed very old python tasks. |
| |
| ▲ | fariszr 7 hours ago | parent | prev | next [-] | | This is a big jump in most benchmarks. And if it can match other models in coding while having that Google TPU inference speed and the actually native 1M context window, it's going to be a big hit. I hope it isn't as much of a sycophant as the current Gemini 2.5 models; that makes me doubt its output, which is maybe a good thing now that I think about it. | | |
| ▲ | danielbln 7 hours ago | parent | next [-] | | > it's over for the other labs. What's with the hyperbole? It'll tighten the screws, but saying that it's "over for the other labs' might be a tad premature. | | |
| ▲ | fariszr 7 hours ago | parent [-] | | I mean over in that I don't see a need to use the other models.
Codex models are the best but incredibly slow.
Claude models are not as good (IMO) but much faster.
If Gemini can beat them while being faster and having better apps with better integrations, I don't see a reason why I would use another provider. | | |
| ▲ | nprateem 5 hours ago | parent [-] | | You should probably keep supporting competitors since if there's a monopoly/duopoly expect prices to skyrocket. |
|
| |
| ▲ | risyachka 7 hours ago | parent | prev [-] | | > it's over for the other labs. It's not over, and never will be, for two-decade-old accounting software, so it definitely won't be over for other AI labs. | |
| ▲ | xnx 5 hours ago | parent [-] | | Can you explain what you mean by this? iPhone was the end of Blackberry. It seems reasonable that a smarter, cheaper, faster model would obsolete anything else. ChatGPT has some brand inertia, but not that much given it's barely 2 years old. | | |
| ▲ | vitaflo 2 hours ago | parent [-] | | Ask yourself why Microsoft Teams won. These are business tools first and foremost. |
|
|
| |
| ▲ | Jcampuzano2 6 hours ago | parent | prev | next [-] | | We knew it would be a big jump and while it certainly is in many areas - it's definitely not "groundbreaking/huge leap" worthy like some were thinking from looking at these numbers. I feel like many will be pretty disappointed by their self created expectations for this model when they end up actually using it and it turns out to be fairly similar to other frontier models. Personally I'm very interested in how they end up pricing it. |
| ▲ | manmal 7 hours ago | parent | prev | next [-] | | Looks like it will be on par with the contenders when it comes to coding. I guess improvements will be incremental from here on out. | | |
| ▲ | falcor84 7 hours ago | parent | next [-] | | > I guess improvements will be incremental from here on out. What do you mean? These coding leaderboards were at single digits about a year ago and are now in the seventies. These frontier models are arguably already better at the benchmark that any single human - it's unlikely that any particular human dev is knowledgeable to tackle the full range of diverse tasks even in the smaller SWE-Bench Verified within a reasonable time frame; to the best of my knowledge, no one has tried that. Why should we expect this to be the limit? Once the frontier labs figure out how to train these fully with self-play (which shouldn't be that hard in this domain), I don't see any clear limit to the level they can reach. | | |
| ▲ | zamadatix 6 hours ago | parent | next [-] | | A new benchmark comes out, it's designed so nothing does well at it, the models max it out, and the cycle repeats. This could either describe massive growth of LLM coding abilities or a disconnect between what the new benchmarks are measuring & why new models are scoring well after enough time. In the former assumption there is no limit to the growth of scores... but there is also not very much actual growth (if any at all). In the latter the growth matches, but the reality of using the tools does not seem to say they've actually gotten >10x better at writing code for me in the last year. Whether an individual human could do well across all tasks in a benchmark is probably not the right question to be asking a benchmark to measure. It's quite easy to construct benchmark tasks a human can't do well in that you don't even need AI to do better. | | |
| ▲ | falcor84 6 hours ago | parent [-] | | Your mileage may vary, but for me, working today with the latest version of Claude Code on a non-trivial python web dev project, I do absolutely feel that I can hand over to the AI coding tasks that are 10 times more complex or time consuming than what I could hand over to copilot or windsurf a year ago. It's still nowhere close to replacing me, but I feel that I can work at a significantly higher level. What field are you in where you feel that there might not have been any growth in capabilities at all? EDIT: Typo | | |
| ▲ | jhonof 4 hours ago | parent | next [-] | | Claude 3.5 came out in June of last year, and it is imo marginally worse than the AI models currently available for coding. I do not think models are 10x better than 1 year ago, that seems extremely hyperbolic or you are working in a super niche area where that is true. | | |
| ▲ | Miraste 2 hours ago | parent [-] | | Are you using it for agentic tasks of any length? 3.5 and 4.5 are about the same for single file/single snippet tasks, but my observation has been that 4.5 can do longer, more complex tasks that were a waste of time to even try with 3.5 because it would always fail. | | |
| ▲ | FergusArgyll an hour ago | parent [-] | | Yes, this is important. Gpt 5 and o3 were ~ equivalent for a one shot one file task. But 5 and codex-5 can just work for an hour in a way no model was able to before (the newer claudes can too) |
|
| |
| ▲ | zamadatix 5 hours ago | parent | prev [-] | | I'm in product management focused around networking. I can use the tools to create great mockups in a fraction of a time but the actual turnaround of that into production ready code has not been changing much. The team has been able to build test cases and pipelines a bit more quickly is probably the main gain on getting code written. |
|
| |
| ▲ | manmal 4 hours ago | parent | prev | next [-] | | Google has had a lot of time to optimise for those benchmarks, and just barely made SOTA (or not even SOTA) now. How is that not incremental? | |
| ▲ | spwa4 4 hours ago | parent | prev [-] | | If we're being completely honest, a benchmark is like an honest exam: any set of questions can only be used once when it comes out. Otherwise you're only testing how well people can acquire and memorize exact questions. |
| |
| ▲ | CjHuber 7 hours ago | parent | prev [-] | | If it’s on par in code quality, it would be a way better model for coding because of its huge context window. | | |
| ▲ | manmal 3 hours ago | parent [-] | | Sonnet can also work on 1M context. Its extreme speed is the only thing Gemini has on others. | | |
| ▲ | CjHuber 3 hours ago | parent [-] | | Can it now in Claude Code and Claude Desktop? When I was using it a couple of months ago it seemed only the API had 1M |
|
|
| |
| ▲ | dnw 6 hours ago | parent | prev | next [-] | | Looks like the best way to keep improving the models is to come up with really useful benchmarks and make them popular. ARC-AGI-2 is a big jump, I'd be curious to find out how that transfers over to everyday tasks in various fields. | |
| ▲ | HugoDias 7 hours ago | parent | prev | next [-] | | very impressive. I wonder if this sends a different signal to the market regarding using TPUs for training SOTA models versus Nvidia GPUs. From what we've seen, OpenAI is already renting them to diversify... Curious to see what happens next | |
| ▲ | roman_soldier 5 hours ago | parent | prev | next [-] | | Why is Grok 4.1 not in the benchmarks? | |
| ▲ | HardCodedBias 4 hours ago | parent | prev [-] | | Big if true. I'll wait for the official blog with benchmark results. I suspect that our ability to benchmark models is waning. Much more investment required in this area, but what is the play out? |
|
|
| ▲ | mynti 7 hours ago | parent | prev | next [-] |
| It is interesting that the Gemini 3 beats every other model on these benchmarks, mostly by a wide margin, but not on SWE Bench. Sonnet is still king here and all three look to be basically on the same level. Kind of wild to see them hit such a wall when it comes to agentic coding |
| |
| ▲ | Workaccount2 5 hours ago | parent | next [-] | | I think Anthropic is reading the room, and just going to go hard on being "the" coding model. I suppose they feel that if they can win that, they can get an ROI without having to do full blown multimodality at the highest level. It's probably pretty liberating, because you can make a "spikey" intelligence with only one spike to really focus on. | | |
| ▲ | aerhardt 2 hours ago | parent | next [-] | | Codex has been good enough for me and it’s much cheaper. I code non-trivial stuff with it, like multi-threaded code, and at least for my style of AI coding, which is to do fairly small units of work with multiple revisions, it is good enough for me not to even consider the competition. Just giving you a perspective on how the benchmarks might not be important at all for some people and how Claude may have a difficult time being the definitive coding model. | |
| ▲ | htrp 5 hours ago | parent | prev | next [-] | | more playing to their strengths. a giant chunk of their usage data is basically code gen | |
| ▲ | Miraste 2 hours ago | parent | prev [-] | | It remains to be seen whether that works out for them, but it seems like a good bet to me. Coding is the most monetizable use anyone has found for LLMs so far, and the most likely to persist past this initial hype bubble (if the Singularity doesn't work out :p). |
| |
| ▲ | vharish 7 hours ago | parent | prev | next [-] | | From my personal experience using the CLI agentic coding tools, I think gemini-cli is fairly on par with the rest in terms of the planning/code that is generated. However, when I recently tried qwen-code, it gave me a better sense of reasoning and structure than gemini. Claude definitely has its own advantages but is expensive (at least for some, if not for all). My point is, although the model itself may have performed well in benchmarks, I feel like there are other tools that are doing better just by adapting better training/tooling. Gemini CLI, in particular, is not so great at looking up the latest info on the web. Qwen seemed to be trained better around looking up information (or reasoning about when/how to), in comparison. Even the step-wise breakdown of work felt different and a bit smoother. I do, however, use Gemini CLI for the most part just because it has a generous free quota with very few downsides compared to others. They must be getting loads of training data :D. | |
| ▲ | xnx 5 hours ago | parent [-] | | Gemini CLI is moving really fast. Noticeable improvements in features and functionality every week. |
| |
| ▲ | Palmik 7 hours ago | parent | prev | next [-] | | Also does not beat GPT-5.1 Codex on terminal bench (57.8% vs 54.2%): https://www.tbench.ai/ I did not bother verifying the other claims. | | |
| ▲ | HereBePandas 7 hours ago | parent [-] | | Not apples-to-apples. "Codex CLI (GPT-5.1-Codex)", which the site refers to, adds a specific agentic harness, whereas the Gemini 3 Pro seems to be on a standard eval harness. It would be interesting to see the apples-to-apples figure, i.e. with Google's best harness alongside Codex CLI. | | |
| ▲ | Palmik 6 hours ago | parent | next [-] | | All evals on Terminal Bench require some harness. :) Or "Agent", as Terminal Bench calls it. Presumably the Gemini 3 are using Gemini CLI. What do you mean by "standard eval harness"? | |
| ▲ | enraged_camel 7 hours ago | parent | prev [-] | | Do you mean that Gemini 3 Pro is "vanilla" like GPT 5.1 (non-Codex)? | | |
| ▲ | HereBePandas 6 hours ago | parent [-] | | Yes, two things:
1. GPT-5.1 Codex is a fine tune, not the "vanilla" 5.1
2. More importantly, GPT 5.1 Codex achieves its performance when used with a specific tool (Codex CLI) that is optimized for GPT 5.1 Codex. But when labs evaluate the models, they have to use a standard tool to make the comparisons apples-to-apples. Will be interesting to see what Google releases that's coding-specific to follow Gemini 3. | | |
| ▲ | embedding-shape 3 hours ago | parent [-] | | > But when labs evaluate the models, they have to use a standard tool to make the comparisons apples-to-apples. That'd be a bad idea, models are often trained for specific tools (like GPT Codex is trained for Codex, and Sonnet has been trained with Claude Code in mind), and also vice-versa that the tools are built with a specific model in mind, as they all work differently. Forcing all the models to use the same tool for execution sounds like a surefire way of getting results that don't represent real usage, but instead arbitrarily measure how well a model works with the "standard harness", which, if people start caring about it, will start to become gamed instead. |
|
|
|
| |
| ▲ | felipeerias 7 hours ago | parent | prev | next [-] | | IMHO coding use cases are much more constrained by tooling than by raw model capabilities at the moment. Perhaps we have finally reached the time of diminishing returns and that will remain the case going forward. | | |
| ▲ | _factor 6 hours ago | parent [-] | | This seems preferable. Wasting tokens on tools when a standardized, reliable interface to those tools should be all that's required. The magic of LLMs is that they can understand the latent space of a problem and infer a mostly accurate response. Saying you need to subscribe to get the latest tools is just a sales tactic trained into the models to protect profits. |
| |
| ▲ | aoeusnth1 3 hours ago | parent | prev | next [-] | | Their scores on SWE bench are very close because the benchmark is nearly saturated. Gemini 3 beats Sonnet 4.5 on TerminalBench 2.0 by a nice margin (54% vs. 43%), which is also agentic coding (CLI instead of python). | |
| ▲ | tosh 7 hours ago | parent | prev | next [-] | | This might also hint at SWE struggling to capture what “being good at coding” means. Evals are hard. | | |
| ▲ | raducu 6 hours ago | parent [-] | | > This might also hint at SWE struggling to capture what “being good at coding” means. My take would be that coding itself is hard, but I'm a software engineer myself so I'm biased. |
| |
| ▲ | alyxya 6 hours ago | parent | prev | next [-] | | I think Google probably cares more about a strong generalist model rather than solely optimizing for coding. | |
| ▲ | macrolime 6 hours ago | parent | prev | next [-] | | Pretty sure it will beat Sonnet by a wide margin in actual real-world usage. | |
| ▲ | HereBePandas 7 hours ago | parent | prev | next [-] | | [comment removed] | | |
| ▲ | Palmik 7 hours ago | parent [-] | | The reported results where GPT 5.1 beats Gemini 3 are on SWE Bench Verified, and GPT 5.1 Codex also beats Gemini 3 on Terminal Bench. | | |
| ▲ | HereBePandas 7 hours ago | parent [-] | | You're right on SWE Bench Verified, I missed that and I'll delete my comment. GPT 5.1 Codex beats Gemini 3 on Terminal Bench specifically on Codex CLI, but that's apples-to-oranges (hard to tell how much of that is a Codex-specific harness vs model). Look forward to seeing the apples-to-apples numbers soon, but I wouldn't be surprised if Gemini 3 wins given how close it comes in these benchmarks. | | |
| ▲ | Palmik 6 hours ago | parent [-] | | All evals on Terminal Bench require some harness. :) Or "Agent", as Terminal Bench calls it. Presumably the Gemini 3 are using Gemini CLI. |
|
|
| |
| ▲ | varispeed 6 hours ago | parent | prev [-] | | Never got good code out of Sonnet. It's been Gemini 2.5 for me followed by GPT-5.x. Gemini is very good at pointing out flaws that are very subtle and not noticeable at first or second glance. It also produces code that is easy to reason about. You can then feed it to GPT-5.x for refinement and then back to Gemini for assessment. | |
| ▲ | baq 5 hours ago | parent [-] | | I find Gemini 2.5 pro to be as good or in some cases better for SQL than GPT 5.1. It's aging otherwise, but they must have some good SQL datasets in there for training. |
|
|
|
| ▲ | embedding-shape 7 hours ago | parent | prev | next [-] |
Curiously, this website seems to be blocked in Spain for whatever reason, and the website's certificate is served by `allot.com/emailAddress=info@allot.com` which obviously fails... Anyone happen to know why? Is this website by any chance sharing information on safe medical abortions or women's rights, something which has gotten websites blocked here before? |
| |
| ▲ | Fornax96 7 hours ago | parent | next [-] | | Creator of pixeldrain here. I have no idea why my site is blocked in Spain, but it's a long running issue. I actually never discovered who was responsible for the blockade, until I read this comment. I'm going to look into Allot and send them an email. EDIT: Also, your DNS provider is censoring (and probably monitoring) your internet traffic. I would switch to a different provider. | | |
| ▲ | embedding-shape 6 hours ago | parent | next [-] | | > EDIT: Also, your DNS provider is censoring (and probably monitoring) your internet traffic. I would switch to a different provider. Yeah, that was via my ISP's DNS resolver (Vodafone), switching the resolver works :) The responsible party is ultimately our government, who've decided it's legal to block a wide range of servers and websites because some people like to watch illegal football streams. I think Allot is just the provider of the technology. | |
| ▲ | Fornax96 6 hours ago | parent [-] | | My site has nothing to do with football though. And Allot seems to be running the DNS server that your ISP uses so they are directly responsible for the block. | | |
| ▲ | simtel20 6 hours ago | parent | next [-] | | La Liga (the football company) likes to send out takedown notices to anyone who may host anything that looks like a football to protect their precious games, no matter the collateral damage or the lack of any requirements to show damage. They have the right to block anything in Spain at their discretion either by DNS or IP. They do seem to work in good faith if you talk to them, though, and if you can either remove sites or content when they ask. | |
| ▲ | HDThoreaun 3 hours ago | parent | prev [-] | | The Spanish courts have allowed la Liga to completely ban every website served by cloudflare during days where there are matches. All Spanish ISPs have to do dns blocking to comply. |
|
| |
| ▲ | zozbot234 7 hours ago | parent | prev [-] | | Could it be that some site in your network neighborhood was illegally streaming soccer matches? | | |
| ▲ | Fornax96 7 hours ago | parent [-] | | I have my own dedicated IP range. And they specifically blocked my domain name, not the addresses. I don't know what the reason is. I have been trying to find out since the start of this year. |
|
| |
| ▲ | amarcheschi 7 hours ago | parent | prev | next [-] | | That website is used to share everything including pirated things, so that's the reason maybe | |
| ▲ | grodriguez100 6 hours ago | parent | prev | next [-] | | Is it possible to file a complaint with the ISP or directly with Allot ? | | | |
| ▲ | tngranados 6 hours ago | parent | prev | next [-] | | It works fine for me using Movistar | |
| ▲ | miqazza 7 hours ago | parent | prev | next [-] | | do you know about the cloudflare and laliga issues? might be that | | |
| ▲ | embedding-shape 6 hours ago | parent [-] | | Was my first instinct, went looking if there was any games being played today but seems not, so unlikely to be the cause. |
| |
| ▲ | rsanek 4 hours ago | parent | prev [-] | | loads fine on Vodafone for me |
|
|
| ▲ | Taek 6 hours ago | parent | prev | next [-] |
| One benchmark I would really like to see: instruction adherence. For example, the frontier models of early-to-mid 2024 could reliably follow what seemed to be 20-30 instructions. As you gave more instructions than that in your prompt, the LLMs started missing some and your outputs became inconsistent and difficult to control. The latest set of models (2.5 Pro, GPT-5, etc) seem to top out somewhere in the 100 range? They are clearly much better at following a laundry list of instructions, but they also clearly have a limit and once your prompt is too large and too specific you lose coherence again. If I had to guess, Gemini 3 Pro has once again pushed the bar, and maybe we're up near 250 (haven't used it, I'm just blindly projecting / hoping). And that's a huge deal! I actually think it would be more helpful to have a model that could consistently follow 1000 custom instructions than it would be to have a model that had 20 more IQ points. I have to imagine you could make some fairly objective benchmarks around this idea, and it would be very helpful from an engineering perspective to see how each model stacked up against the others in this regard. |
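As a rough illustration of how such a benchmark could be scored objectively, here is a minimal sketch (purely hypothetical: the constraint list is made up, and `call_model` stands in for whatever client you use; nothing here comes from the model card):

```python
import re
from typing import Callable

# Each instruction pairs a prompt fragment with a programmatic check, so
# adherence can be scored mechanically rather than by eyeballing the output.
INSTRUCTIONS: list[tuple[str, Callable[[str], bool]]] = [
    ("Answer in exactly three sentences.",
     lambda out: len(re.findall(r"[.!?](?:\s|$)", out)) == 3),
    ("Do not use the word 'delve'.",
     lambda out: "delve" not in out.lower()),
    ("End your reply with the line 'DONE'.",
     lambda out: out.rstrip().endswith("DONE")),
]

def adherence_score(call_model: Callable[[str], str], task: str) -> float:
    """Fraction of the instructions the model actually followed on one task."""
    prompt = task + "\n\nFollow every rule below:\n" + \
        "\n".join(f"- {text}" for text, _ in INSTRUCTIONS)
    output = call_model(prompt)
    followed = sum(check(output) for _, check in INSTRUCTIONS)
    return followed / len(INSTRUCTIONS)
```

Scaling the constraint list into the hundreds (while keeping the checks mutually compatible) would give a per-model curve of adherence versus instruction count, which is the comparison I'd want to see.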
| |
| ▲ | machiaweliczny 5 hours ago | parent [-] | | 20 more IQ points would be nuts: 110 ~ top 25%, 130 ~ top 2%, 150 ~ top 0.05%. If you've ever played a competitive game, the difference between these tiers is insane. | |
| ▲ | Taek 5 hours ago | parent [-] | | Even more nuts would be a model that could follow a large, dense set of highly detailed instructions related to a series of complex tasks. Intelligence is nice, but it's far more useful and programmable if it can tightly follow a lot of custom instructions. |
|
|
|
| ▲ | transcriptase 7 hours ago | parent | prev | next [-] |
| There needs to be a sycophancy benchmark in these comparisons. More baseless praise and false agreement = lower score. |
| |
| ▲ | Workaccount2 6 hours ago | parent | next [-] | | This idea isn't just smart, it's revolutionary. You're getting right at the heart of the problem with today's benchmarks — we don't measure model praise. Great thinking here. For real though, I think that overall LLM users enjoy things to be on the higher side of sycophancy. Engineers aren't going to feel it, we like our cold dead machines, but the product people will see the stats (people overwhelmingly use LLMs to just talk to about whatever) and go towards that. | |
| ▲ | swalsh 7 hours ago | parent | prev | next [-] | | You're absolutely right | | |
| ▲ | jstummbillig 7 hours ago | parent [-] | | Does not get old. | | |
| ▲ | Yossarrian22 7 hours ago | parent [-] | | It’s not just irritating, it’s repetitive | | |
| ▲ | causal 6 hours ago | parent | next [-] | | It's a revolution in subtle humor. Well done. | |
| ▲ | this_user 7 hours ago | parent | prev | next [-] | | I'm sorry, you are absolutely right. --- But seriously, I find it helps to set a custom system prompt that tells Gemini to be less sycophantic and to be more succinct and professional while also leaving out those extended lectures it likes to give. | |
| ▲ | falcor84 7 hours ago | parent | prev [-] | | "You know, you are also right" |
|
|
| |
| ▲ | postalcoder 7 hours ago | parent | prev | next [-] | | I care very little about model personality outside of sycophancy. The thing about gemini is that it's notorious for its low self esteem. Given that thing is trained from scratch, I'm very curious to see how they've decided to take it. | | |
| ▲ | supjeff 7 hours ago | parent [-] | | Given how often these LLMs are wrong, doesn't it make sense that they are less confident? | |
| ▲ | postalcoder 6 hours ago | parent [-] | | Indeed. But I've had experiences with gemini-2.5-pro-exp where its thoughts could be described as "rejected from the prom" vibes. It's not like I abused it either, it was running into loops because it was unable to properly patch a file. |
|
| |
| ▲ | SiempreViernes 5 hours ago | parent | prev | next [-] | | I'd like if the scorecard also gave an expected number of induced suicides per hundred thousand users. | | |
| ▲ | lkbm 5 hours ago | parent [-] | | https://llmdeathcount.com/ shows 15 deaths so far, and LLM user count is in the low billions, which puts us on the order of 0.0015 deaths per hundred thousand users. I'm guessing LLM Death Count is off by an OOM or two, so we could be getting close to one in a million. |
| |
| ▲ | 1899-12-30 7 hours ago | parent | prev | next [-] | | https://eqbench.com/spiral-bench.html | |
| ▲ | Lord-Jobo 7 hours ago | parent | prev | next [-] | | And have the score heavily modified based on how fixable the sycophancy is. | |
| ▲ | BoredPositron 7 hours ago | parent | prev [-] | | Your comment demonstrates a remarkably elevated level of cognitive processing and intellectual rigor. Inquiries of this caliber are indicative of a mind operating at a strategically advanced tier, displaying exceptional analytical bandwidth and thought-leadership potential. Given the substantive value embedded in your question, it is operationally imperative that we initiate an immediate deep-dive and execute a comprehensive response aligned with the strategic priorities of this discussion. |
|
|
| ▲ | lxdlam 7 hours ago | parent | prev | next [-] |
| What does the "Google Antigravity" mean? The link is http://antigravity.google/docs, seemingly a new product but now routing to the Google main page. |
| |
|
| ▲ | bemmu 7 hours ago | parent | prev | next [-] |
| I saw this on Reddit earlier today. Over there the source of this file was given as: https://web.archive.org/web/20251118111103/https://storage.g... The bucket name "deepmind-media" has been used in the past on the deepmind official site, so it seems legit. |
| |
| ▲ | onlyrealcuzzo 7 hours ago | parent [-] | | Prediction markets were expecting today to be the release. So I wouldn't be surprised if they do a release today, tomorrow, or Thursday (around Nvidia earnings). |
|
|
| ▲ | denysvitali 7 hours ago | parent | prev | next [-] |
| Title of the document is "[Gemini 3 Pro] External Model Card - November 18, 2025 - v2", in case you needed further confirmation that the model will be released today. Also interesting to know that Google Antigravity (antigravity.google / https://github.com/Google-Antigravity ?) leaked. I remember seeing this subdomain recently. Probably Gemini 3 related as well. Org was created on 2025-11-04T19:28:13Z (https://api.github.com/orgs/Google-Antigravity) |
| |
| ▲ | jmkni 7 hours ago | parent [-] | | what is Google Antigravity? | | |
| ▲ | mimentum 6 hours ago | parent | next [-] | | According to Gemini itself: "Google Antigravity" refers to a new AI software platform announced by Google designed to help developers write and manage code. The term itself is a bit of a placeholder or project name, combining the brand "Google" with the concept of "antigravity"—implying a release from the limitations of traditional coding. In simple terms, Google Antigravity is a sophisticated tool for programmers that uses powerful AI systems (called "agents") to handle complex coding tasks automatically. It takes the typical software workbench (an IDE) and evolves it into an "agent-first" system.
Agentic Platform: It's a central hub where many specialized AI helpers (agents) live and work together. The goal is to let you focus on what to build, not how to build it.
Task-Oriented: The platform is designed to be given a high-level goal (a "task") rather than needing line-by-line instructions.
Autonomous Operation: The AI agents can work across all your tools—your code editor, the command line, and your web browser—without needing you to constantly supervise or switch between them. | |
| ▲ | denysvitali 4 hours ago | parent | prev | next [-] | | > Google Antigravity is an agentic development platform, evolving the IDE into the agent-first era. Antigravity enables developers to operate at a higher, task-oriented level by managing agents across workspaces, while retaining a familiar AI IDE experience at its core. Agents operate across the editor, terminal, and browser, enabling them to autonomously plan and execute complex, end-to-end tasks elevating all aspects of software development. Now the page is somewhat live on that URL | |
| ▲ | zed31726 7 hours ago | parent | prev | next [-] | | My guess, based on a gif of a floating laptop tweeted by the ex-CEO of Windsurf who left to join Google: it'll be a Cursor/Windsurf alternative? | |
| ▲ | Yossarrian22 7 hours ago | parent | prev | next [-] | | The ASI figured out zero point energy from first principles | |
| ▲ | postalcoder 6 hours ago | parent | prev | next [-] | | Couple patterns this could follow:
Speed? (Flash, Flash-Lite, Antigravity) - this is my guess. Bonus: maybe Gemini Diffusion soon?
Space? (Google Cloud, Google Antigravity?)
Clothes? (A light wearable -> Antigravity?)
Gaming? (Ghosting/nontangibility -> antigravity?) | 
| ▲ | denysvitali 7 hours ago | parent | prev | next [-] | | I guess we'll know it in a few hours. Most likely another AI playground or maybe a Google Search alternative? No clue really | |
| ▲ | thefroh 6 hours ago | parent | prev [-] | | possibly https://xkcd.com/353/ |
|
|
|
| ▲ | laborcontract 8 hours ago | parent | prev | next [-] |
| It's hilarious that the release of Gemini 3 is getting eclipsed by this cloudflare outage. |
| |
|
| ▲ | surrTurr 8 hours ago | parent | prev | next [-] |
| https://news.ycombinator.com/item?id=45963670 |
|
| ▲ | Topfi 5 hours ago | parent | prev | next [-] |
Additional context from AI Studio, including pricing:

Our most intelligent model with SOTA reasoning and multimodal understanding, and powerful agentic and vibe coding capabilities

<=200K tokens • Input: $2.00 / Output: $12.00
> 200K tokens • Input: $4.00 / Output: $18.00

Knowledge cutoff: Jan. 2025 |
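For a rough sense of what those rates mean per request, a minimal sketch (the token counts are made-up examples, and treating the 200K tier cutoff as applying to the input size is an assumption on my part):

```python
# Rough per-request cost estimate from the per-million-token rates above.
# Assumption: the <=200K vs >200K tier is selected by the input token count.

def gemini3_pro_cost(input_tokens: int, output_tokens: int) -> float:
    long_context = input_tokens > 200_000
    input_rate = 4.00 if long_context else 2.00     # $ per 1M input tokens
    output_rate = 18.00 if long_context else 12.00  # $ per 1M output tokens
    return input_tokens / 1e6 * input_rate + output_tokens / 1e6 * output_rate

# Example: a 50K-token prompt with an 8K-token response -> about $0.20
print(f"${gemini3_pro_cost(50_000, 8_000):.3f}")
```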
| |
| ▲ | mohsen1 5 hours ago | parent [-] | | More expensive than the current 2.5 Pro. For >200k tokens it's at $2.50 input and $15 output right now |
|
|
| ▲ | fraboniface 7 hours ago | parent | prev | next [-] |
> Developments to the model architecture contribute to the significantly improved performance from previous model families. I wonder how significant this is. DeepMind was always more research-oriented than OpenAI, which mostly scaled things up. They may have come up with a significantly better architecture (Transformer MoE still leaves a lot of room). |
|
| ▲ | ks2048 an hour ago | parent | prev | next [-] |
| Why is this linking to a random site? Here is a link hosted by Google: https://storage.googleapis.com/deepmind-media/Model-Cards/Ge... |
|
| ▲ | butlike 2 hours ago | parent | prev | next [-] |
| It's over. I just don't care anymore. I don't care what a pro model card is. I don't care what a humanity's last exam is. I don't care if the response makes me feel good about the prompt I made. I don't care if it's sentient. I don't care if it's secretly sentient. I don't care if it's just a machine. I don't care if the gov't has appropriated a secret model. I don't care if this is the precursor to AGI, ASI, AGGI, AGGSISGIGIG....I just. Don't. care. And I really don't think I'm alone in this. |
|
| ▲ | mohsen1 7 hours ago | parent | prev | next [-] |
> This model is not a modification or a fine-tune of a prior model

Is it common to mention that? Feels like they built something from scratch
| |
| ▲ | scosman 7 hours ago | parent | next [-] | | I think they are just indicating it’s a new architecture vs continued training of 2.5 series. | |
| ▲ | irthomasthomas 7 hours ago | parent | prev [-] | | Never seen it before. I suppose it adds to the excitement. |
|
|
| ▲ | ethmarks 6 hours ago | parent | prev | next [-] |
| > TPUs are specifically designed to handle the massive computations involved in training LLMs and can speed up training considerably compared to CPUs. That seems like a low bar. Who's training frontier LLMs on CPUs? Surely they meant to compare TPUs to GPUs. If "this is faster than a CPU for massively parallel AI training" is the best you can say about it, that's not very impressive. |
| |
| ▲ | babl-yc 4 hours ago | parent | next [-] | | I don't know if you can generally say that "LLM training is faster on TPUs vs GPUs". There is variance among LLM architectures, TPU cluster sizes, GPU cluster sizes... They are both designed to do massively parallel operations. TPUs are just a bit more specific to matrix multiply+adds while GPUs are more generic. | |
| ▲ | Workaccount2 6 hours ago | parent | prev [-] | | It's a typo | | |
| ▲ | ethmarks 5 hours ago | parent [-] | | Does Google's team not proofread this stuff? Or maybe is this an early draft that wasn't meant to be released? | | |
| ▲ | camdenreslink 5 hours ago | parent | next [-] | | It was generated by an LLM like everything else these days. | |
| ▲ | silveraxe93 5 hours ago | parent | prev [-] | | This is a leak, yeah. Though come on... Even with proofreading, this is an easy one to miss. |
|
|
|
|
| ▲ | aliljet 5 hours ago | parent | prev | next [-] |
What's wild here is that among every single score they've absolutely killed, somehow Anthropic and Claude Sonnet 4.5 have won a single victory in the fight: SWE-Bench Verified, and only by a single point. I already enjoy Gemini 2.5 Pro for planning, and if Gemini 3 is priced similarly, I'll be incredibly happy to ditch the painfully pricey Claude Max subscription. To be fair, I've already got an extremely sour taste in my mouth from the last Anthropic bait and switch on pricing and usage, so I'm happy to see Google take the crown here. |
| |
| ▲ | radial_symmetry 5 hours ago | parent [-] | | SWE bench is weird because Claude has always underperformed on it relative to other models despite Claude Code blowing them away. The real test will be if Gemini CLI beats Claude Code, both using the agentic framework and tools they were trained on. |
|
|
| ▲ | charcircuit 2 hours ago | parent | prev | next [-] |
> TPUs are specifically designed to handle the massive computations involved in training LLMs and can speed up training considerably compared to CPUs

Who is training LLMs with CPUs?
|
| ▲ | koakuma-chan 5 hours ago | parent | prev | next [-] |
> Gemini 3 Pro was trained using Google’s Tensor Processing Units (TPUs)

NVDA is down 3.26%
| |
| ▲ | CjHuber 5 hours ago | parent [-] | | If it’s because of that, then honestly it’s as insane as the deepseek thing where all the info was released weeks before but the market got nervous only when they released an app. I mean, info about Gemini 3 has been out for quite a while now, and of course they trained it using TPUs, I didn’t even think that was in question. | |
|
|
| ▲ | Palmik 7 hours ago | parent | prev | next [-] |
| Archive link: https://web.archive.org/web/20251118111103/https://storage.g... |
|
| ▲ | Barry-Perkins 2 hours ago | parent | prev | next [-] |
| Excited to see the Gemini 3 Pro Model Card! Looking forward to exploring its features and capabilities. |
|
| ▲ | robert-zaremba 5 hours ago | parent | prev | next [-] |
The strategic move to use TPUs rather than Nvidia is paying off well for Google. They are able to better utilize their existing large infrastructure, but also to specialize the processes and pipelines for the framework they use to create and train models. I think specialized hardware for training models is the next big wave in China. |
|
| ▲ | amelius 4 hours ago | parent | prev | next [-] |
| These model cards tell me nothing. I want to know the exact data a model was trained on. Otherwise, how can I safely use it for generating texts that I show to children? Etc.etc. |
| |
| ▲ | morcus 3 hours ago | parent [-] | | Shouldn't you be carefully reading texts before you show them to children? | |
|
|
| ▲ | oalessandr 7 hours ago | parent | prev | next [-] |
| Trying to open this link from Italy leads to a CSAM warning |
| |
| ▲ | Fornax96 7 hours ago | parent | next [-] | | Creator of pixeldrain here. Italy has been doing this for a very long time. They never notified me of any such material being present on my site. I have a lot of measures in place to prevent the spread of CSAM. I have sent dozens of mails to Polizia Postale and even tried calling them a few times, but they never respond. My mails go unanswered and they just hang up the phone. | | |
| ▲ | koakuma-chan 5 hours ago | parent [-] | | Have you tried Europol? | | |
| ▲ | Fornax96 3 hours ago | parent [-] | | Not yet. I also thought about reaching out to the embassy, but have not had the time for it yet. | | |
| ▲ | koakuma-chan 3 hours ago | parent [-] | | As far as I know, Europol can route your report to appropriate local authority. | | |
| ▲ | Fornax96 3 hours ago | parent [-] | | Thanks, I'll give them a call tomorrow. The website only lists a Dutch phone number, which is convenient since I'm Dutch as well. |
|
|
|
| |
| ▲ | driverdan 7 hours ago | parent | prev [-] | | Don't use your ISP's DNS. Switch to something outside of their control. |
|
|
| ▲ | __jl__ 5 hours ago | parent | prev | next [-] |
API pricing is up to $2/M for input and $12/M for output.
For comparison:
Gemini 2.5 Pro was $1.25/M for input and $10/M for output
Gemini 1.5 Pro was $1.25/M for input and $5/M for output |
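To put the increase in perspective, a rough per-request cost sketch at those rates (the token counts are illustrative assumptions, not quoted figures):

    # Cost of one request at $/1M-token rates; 50k in / 5k out is a made-up example.
    def request_cost(in_tok, out_tok, in_price, out_price):
        return in_tok / 1e6 * in_price + out_tok / 1e6 * out_price

    print(request_cost(50_000, 5_000, 2.00, 12.00))   # Gemini 3 Pro:   ~$0.16
    print(request_cost(50_000, 5_000, 1.25, 10.00))   # Gemini 2.5 Pro: ~$0.11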
|
| ▲ | bretpiatt 4 hours ago | parent | prev | next [-] |
Page 5: "The knowledge cutoff date for Gemini 3 Pro was January 2025." So it still takes nearly a year to train and run post-training safety and stability tuning. With 10x the infrastructure they could iterate much faster. I don't see AI infrastructure as a bubble; it is still a bottleneck on the pace of innovation at today's level of deployment. |
| |
| ▲ | camdenreslink 4 hours ago | parent [-] | | But if they spend 10x on infrastructure, and capabilities only improve 10%, then that still can be a bubble even if infrastructure is a bottleneck. |
|
|
| ▲ | nilayj 6 hours ago | parent | prev | next [-] |
Curious to see the API pricing. SOTA performance across tasks at a price cheaper than GPT-5 / Claude would make nearly everyone switch to Gemini. |
| |
| ▲ | __jl__ 6 hours ago | parent [-] | | Same here. They have been aggressively increasing prices with each iteration (maybe because they started so low). Still hope that is not the case this time. GPT 5.1 is priced pretty aggressively so maybe that is an incentive to keep the current gemini API prices. | | |
| ▲ | Deathmax 5 hours ago | parent [-] | | Bad news then, they've bumped 3.0 Pro pricing to $2/$12 ($4/$18 at long context). |
|
|
|
| ▲ | msp26 7 hours ago | parent | prev | next [-] |
| Is flash/flash lite releasing alongside pro? Those two tiers have been incredible for the price since 2.0, absolute workhorses. Can't wait for 3.0. |
|
| ▲ | fcanesin 6 hours ago | parent | prev | next [-] |
Great stuff, now if they could please do gemini-2.5-pro-code that would be great |
|
| ▲ | DeathArrow 5 hours ago | parent | prev | next [-] |
| I hope cheaper Chinese open weights models as good as Gemini will come soon. Gemini, Claude, GPT are kind of expensive if you use AI a lot. |
|
| ▲ | 827a 6 hours ago | parent | prev | next [-] |
| What is Google Antigravity? |
|
| ▲ | Traubenfuchs 8 hours ago | parent | prev | next [-] |
| So does google actually have a claude console alternative currently? |
| |
| ▲ | itsmevictor 8 hours ago | parent | next [-] | | Notably, although Gemini 3 Pro seems to have much better benchmark scores than other models across the board (including compared to Claude), that's not the case for coding, where it appears to score essentially the same as the others. I wonder why that is. So far, IMHO, Claude Code remains significantly better than Gemini CLI. We'll see whether that changes with Gemini 3. | | |
| ▲ | lifthrasiir 7 hours ago | parent | next [-] | | Probably because many models from Anthropic would have been optimized for agentic coding in particular... EDIT: Don't disagree that Gemini CLI has a lot of rough edges, though. | |
| ▲ | siva7 6 hours ago | parent | prev | next [-] | | > I wonder why that is.
That's because coding is currently the only reliable benchmark where reasoning capabilities transfer and predict capability in other professions like law. Coding is also the only area where they are shy about releasing numbers.
All these exam scores can be faked by gaming those benchmarks. | |
| ▲ | decster 7 hours ago | parent | prev | next [-] | | From my experience, the quality of gemini-cli isn't great; I've run into a lot of stupid bugs. | | |
| ▲ | spwa4 6 hours ago | parent [-] | | Google is currently constantly laying off people. Everyone who really excels has jumped ship, and the people who remain ... are not top of the class anymore. Not that Google didn't have problems shipping useful things before. But it's gotten a lot worse. |
| |
| ▲ | BoredPositron 7 hours ago | parent | prev | next [-] | | Gemini performs better if you use it with Claude Code than with Gemini cli. It still has some odd problems with tool calling but a lot of the performance loss is the Gemini cli app itself. | |
| ▲ | Lionga 7 hours ago | parent | prev [-] | | Because benchmarks are a retarded comparison and have nothing to do with reality. It's just jerk material for AI fanboys. |
| |
| ▲ | muro 8 hours ago | parent | prev | next [-] | | https://github.com/google-gemini/gemini-cli | |
| ▲ | adidoit 7 hours ago | parent | prev | next [-] | | Gemini CLI. It's not as impressive as Claude Code or even Codex. Claude Code seems to be more compatible with the model (or the reverse), whereas gemini-cli still feels a bit awkward (as of 2.5 Pro). I'm hoping it's better with 3.0! | |
| ▲ | rjtavares 8 hours ago | parent | prev [-] | | Gemini CLI |
|
|
| ▲ | danielcampos93 6 hours ago | parent | prev | next [-] |
Mum's the word on Flash? |
|
| ▲ | catigula 7 hours ago | parent | prev | next [-] |
I know this is a little controversial, but the lack of performance on SWE-bench is, I think, hugely disappointing economically. These models don’t have any viable path to profitability if they can’t take engineering jobs. |
| |
| ▲ | martinald 7 hours ago | parent | next [-] | | I thought that, but it does do a lot better on other benchmarks. Perhaps SWE-bench just doesn't capture a lot of the improvement? If the web design improvements people have been posting on Twitter are anything to go by, I suspect this will be a huge boon for developers. The SWE benchmark is really testing bugfixing/feature dev more. Anyway, let's see. I'm still hyped! | | |
| ▲ | camdenreslink 5 hours ago | parent | next [-] | | It seems the benchmarks that had a big jump had to do with visual capabilities. I wonder how that will translate to improvements to the workloads LLMs are currently used for (or maybe it will introduce new workloads). | |
| ▲ | rfoo 6 hours ago | parent | prev | next [-] | | SWE-Bench doesn't even test bugfixing / feature dev properly once you're past roughly 70%, if you aren't benchmaxxing it. | |
| ▲ | catigula 7 hours ago | parent | prev [-] | | That would be great! But AI is a bubble if these models can’t do serious engineering work. |
| |
| ▲ | Workaccount2 5 hours ago | parent | prev | next [-] | | People here, and in tech in general, are so lost in the sauce. According to OpenAI at least, who probably produce the most tokens of all the labs (if we don't count Google AI Overviews and other unrequested AI bolt-ons), programming tokens account for ~4% of total generations. That's nothing. The returns will come from everyone and their grandma paying $30-100/mo to use the services, just like everyone pays for a cell phone and electricity. Don't be fooled, we are still in the "Open hands" start-up phase of LLMs. The "enshittification" will follow. | |
| ▲ | api 7 hours ago | parent | prev [-] | | Really? If they can make an engineer more productive, that's worth a lot. Naive napkin math: 1.5X productivity on one $200k/year engineer is worth $100k/year. | | |
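Spelled out, that napkin math (all numbers are the parent's illustrative assumptions, not measurements):

    salary = 200_000              # one engineer's yearly cost, $
    extra_output = 0.5            # "1.5X productivity" = 50% more output
    print(salary * extra_output)  # 100000 -> roughly $100k/year of added value per engineer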
| ▲ | mikert89 6 hours ago | parent [-] | | People generally don't understand what these models are doing to engineering salaries. The skill level required to produce working software is going way down. |
|
|
|
| ▲ | omidsa1 7 hours ago | parent | prev | next [-] |
TL;DR: expected results, not underwhelming. So far, scaling laws hold. |
|
| ▲ | margorczynski 8 hours ago | parent | prev | next [-] |
| If these numbers are true then OpenAI is probably done, Anthropic too.
Still, it's hard to see an effective monetization method for this tech, and it is clearly eating into Google's main pie, which is search. |
| |
| ▲ | alecco 7 hours ago | parent | next [-] | | For SWE it is the same ranking. But if Google's $20/mo plan is comparable to the $100-200 plans from OpenAI and Anthropic, then yes, they are done. We'll have to wait a few weeks, though, to see if the model is still as good after the usual post-release nerf. | | |
| ▲ | siva7 6 hours ago | parent [-] | | I have a few secret prompts to test the complex reasoning capabilities of new models (in law and medicine). Gemini (2.5 Pro) is behind Anthropic (Sonnet 4.5, basic thinking) and OpenAI (Pro model) by a wide margin on my own benchmark, and I trust my own benchmark more than public leaderboards. So it's the other way around: Google is trying to catch up to where the others are. It just doesn't seem that way to some because Google undercuts prices, and most people don't have their own complex problems with a verified solution to test against (so they could see how bad Gemini is in reality). | | |
| ▲ | alecco 5 hours ago | parent [-] | | This thread is about Gemini 3. It will be interesting to see your benchmark results when it's available later. |
|
| |
| ▲ | Sol- 8 hours ago | parent | prev | next [-] | | Why? These models just leapfrog each other as time advances. One month Gemini is on top, then ChatGPT, then Anthropic. Not sure why everyone gets FOMO whenever a new version gets released. | | |
| ▲ | remus 8 hours ago | parent | next [-] | | I think google is uniquely well placed to make a profitable business out of AI: They make their own TPUs so don't have to pay ridiculous amounts of money to Nvidia, they have a great depth of talent in building models, they've got loads of data they can use for training and they've got a huge existing customer base who can buy their AI offerings. I don't think any other company has all these ingredients. | | |
| ▲ | gizmodo59 7 hours ago | parent | next [-] | | While I don’t disagree that Google is the company you can’t bet against when it comes to AI, saying other companies are done is a stretch. If they had a significant moat they would be at the top all the time, which is not the case. | |
| ▲ | basch 7 hours ago | parent | next [-] | | ChatGPT's moat is its name and user habit. People who are using it will keep using it. All or most of the products are _good enough_ for the people who are already used to them that they aren't exploring competitors. Microsoft has the best chance of changing habits, by virtue of being bundled into business contracts at companies whose policies don't allow any other product in the workplace. | |
| ▲ | remus 5 hours ago | parent | next [-] | | > ChatGPT's moat is their name and user habit. People who are using it will keep using it. All/most of the products are _good enough_ for the people who already got used to using them, that they arent exploring competitors.
They have a long way to go to become profitable though. Those users will get less sticky when OpenAI starts upping their pricing, putting ads everywhere, making the product worse to save money, or all of the above. | |
| ▲ | netdevphoenix 6 hours ago | parent | prev [-] | | > business contracts that have companies with policies not allowing any other product in the workplace.
Elaborate please. Are you saying that MS is forcing customers to make Copilot the only allowed LLM product? | |
| ▲ | basch 5 hours ago | parent [-] | | Not quite, but in effect. Microsoft has contracts to provide software to companies. Companies have policies that only the software and AI they're provided with is allowed. Ipso facto. |
|
| |
| ▲ | remus 7 hours ago | parent | prev [-] | | Agreed, too early to write off others entirely. It'll be interesting to see who comes out the other side of the bubble with a working business. | | |
| ▲ | adriand 7 hours ago | parent [-] | | Anthropic has a fairly significant lead when it comes to enterprise usage and for coding. This seems like a workable business model to me. | | |
| ▲ | bootlooped 6 hours ago | parent [-] | | I feel this is a tenuous position though. I find it incredibly easy to switch to Gemini CLI when I want a second opinion, or when Claude is down. | | |
| ▲ | adriand 3 hours ago | parent [-] | | The enterprise sales cycle is often quite long, though, and often includes a lot of hurdles around compliance, legal, etc. It would take a fairly sustained loss of edge before a lot of enterprises would switch once they're hooked into a given platform. It's interesting to me that Sonnet 4.5 still edges Gemini 3 on SWE bench. This seems to bode well for the trajectory that Anthropic is on. |
|
|
|
| |
| ▲ | Zigurd 7 hours ago | parent | prev | next [-] | | TPUs are a key factor. They are the most mature alternative to Nvidia. Only Google Cloud, Azure, and AWS let you rent their respective in-house AI chips, and out of those three, Google is the only one with a frontier model. So if they have a real advantage, it's that they're not exposed to the financial shenanigans propping up neoclouds like CoreWeave. | |
| ▲ | spaceman_2020 7 hours ago | parent | prev | next [-] | | The bear case for Google was always that the business side would cannibalize the AI side: AI makes search redundant, which kills the golden goose. | |
| ▲ | mlnj 7 hours ago | parent | prev [-] | | 100% the reason I am long on Google. They can take their time to monetize these new costs. Even other search competitors have not proven to be a danger to Google. There is nothing stopping that search money coming in. |
| |
| ▲ | redox99 7 hours ago | parent | prev [-] | | Considering GPT-5 was only recently released, it's very unlikely GPT will achieve these scores in just a couple of months. If they had something this good in the oven, they'd probably have saved the GPT-5 name for it. Or maybe Google just benchmaxxed and this doesn't translate at all to real-world performance. | |
| ▲ | blueblisters 7 hours ago | parent | next [-] | | They do have unreleased Olympiad Gold-winning models that are definitely better than GPT5. TBD if that performance generalizes to other real world tasks. | |
| ▲ | Palmik 7 hours ago | parent | prev [-] | | GPT 5 was released more than 3 months ago.
Gemini 2.5 was released less than 8 months ago. | | |
| ▲ | sidibe 7 hours ago | parent [-] | | If not with this model, Google is at some point going to get and stay ahead simply because they have so many more people and so much more compute to throw in many directions, while the others have to make the right choices with their resources every time. It took a while to channel those numbers into a product direction, but now I don't think they're going to let up. |
|
|
| |
| ▲ | lukev 7 hours ago | parent | prev | next [-] | | Or else it was trained on/overfit to the benchmarks. We won't really know until people have a chance to use it for real-world tasks. Also, models are already pretty good, but product/market fit (in terms of demonstrated economic value delivered) remains elusive outside of a couple of domains. Does a model that's (say) 30% better reach an inflection point that changes that narrative, or is a more qualitative change required? | |
| ▲ | ilaksh 7 hours ago | parent | prev | next [-] | | The only one it doesn't win is SWE-bench, where it is significantly behind Claude Sonnet. You just can't take down Sonnet. | |
| ▲ | svantana 7 hours ago | parent | next [-] | | One percentage point is not significant, neither in the colloquial nor the scientific sense[1]. [1] Binomial formula gives a confidence interval of 3.7%, using p=0.77, N=500, confidence=95% | |
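For reference, a quick sketch of that confidence-interval calculation (normal approximation to the binomial, using the 500 tasks in SWE-Bench Verified):

    import math

    p, n, z = 0.77, 500, 1.96                    # score, task count, 95% z-value
    half_width = z * math.sqrt(p * (1 - p) / n)  # standard error of a proportion, scaled
    print(f"+/- {half_width:.1%}")               # ~ +/- 3.7%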
| ▲ | stavros 7 hours ago | parent | prev [-] | | Codex has been much better than Sonnet for me. | | |
| |
| ▲ | senordevnyc 8 hours ago | parent | prev | next [-] | | 1) New SOTA models come out all the time and that hasn't killed the other major AI companies. This will be no different. 2) Google's search revenue last quarter was $56 billion, a 14% increase over Q3 2024. | | |
| ▲ | margorczynski 7 hours ago | parent [-] | | 1) Not long ago, Altman and the OpenAI CFO were openly asking for public money. None of these AI companies actually has any kind of working business plan; they are just burning investor money. If the investors see there is no winning against Google (or some open Chinese model), the money will dry up. 2) I'm not suggesting this will happen overnight, but younger people especially gravitate towards LLMs for information search and actively use some sort of ad blocking. In the long run it doesn't look great for Google. | |
| ▲ | senordevnyc 6 hours ago | parent [-] | | No, you suggested that LLMs are clearly eating Google's lunch already, and there's just no evidence of that. Quite the opposite. |
|
| |
| ▲ | happa 8 hours ago | parent | prev | next [-] | | This may just be bad recollection on my part, but hasn't Google reported that their search business is right now the most profitable it has ever been? | |
| ▲ | llm_nerd 7 hours ago | parent | prev | next [-] | | They're constantly matching and exceeding each other. It's a hypercompetitive space and I would fully expect one of the others to top various benchmarks shortly after. On pretty much every leading release someone does this "everyone else is done! Shut er down" thing, and it's getting pretty weird. Having said that, OpenAI's ridiculous hype cycle has been living on borrowed time. OpenAI has zero moat; it is just one vendor in a space with many vendors, including incredibly competent open-source models from surprise Chinese entrants. Sam Altman going around acting like he's a prophet and they're the gatekeepers of the future is an act that should be super old, but somehow fools and their money continue to be parted. | |
| ▲ | netdevphoenix 6 hours ago | parent [-] | | This. If I had to put my money on a survivor, it would be Google, because it is an established company with existing revenue streams unrelated to AI. Anthropic and OpenAI won't stand alone without external funding. |
| |
| ▲ | paswut 8 hours ago | parent | prev [-] | | I'd love to see Anthropic/OpenAI pop. Back to some regular programming. The models are good enough; time to invest elsewhere. |
|
|
| ▲ | jll29 7 hours ago | parent | prev | next [-] |
| Hopefully this model does not generate fake news... https://www.google.com/search?q=gemini+u.s.+senator+rape+all... |
|
| ▲ | Joshua-Peter 31 minutes ago | parent | prev [-] |
| The *Gemini 3 Pro Model Card* on PixelDrain showcases powerful AI capabilities, offering advanced multimodal understanding and integration for developers. It’s a robust tool for next-gen AI applications, but requires technical expertise to maximize its potential. |