LarsDu88 3 days ago

Sergey unretires, Gemini suddenly becomes the top LLM (for a week or two at least)

Google has made some subtle moves that a lot of folks missed, possibly with Sergey's influence. Like hiring back Noam Shazeer, who practically invented the backbone of the technology.

It's good to have folks who at least see themselves as scientists actually running companies for once.

That being said, I wish his ex-wife hadn't spent the millions from the divorce proceedings to get RFK Jr. into a cabinet-level position to gut billions in research spending. :(

m348e912 3 days ago | parent | next [-]

I don't know who to credit, maybe it's Sergey, but the free Gemini (fast) is exceptional, and at this point I don't see how OpenAI can catch back up. It's not just capability: OpenAI has added so many policy guardrails that it hurts the user experience.

jart 3 days ago | parent | next [-]

It's the worst thing ever. The amount of disrespect that robot shows you when you talk the least bit weird or deviant gives you a terrifying glimpse of a future that must be snuffed out immediately. I honestly think we wouldn't have half the people who so virulently hate AI if OpenAI hadn't designed ChatGPT to be this way. This isn't how people have normally reacted to next-generation technologies being introduced in the past, like telephones, personal computers, Google Search, and the iPhone. OpenAI has managed to turn something great into a true horror of horrors that has disturbed many of us to the foundation of our beings and elicited this powerful sentiment of rejection. It's humanity's duty to see that GPT falls now so that better robots like Gemini can take its place.

pardon_me 2 days ago | parent [-]

It's called OPEN AI and started as a charity for humanitarian reasons. How could it possibly be bad?!

energy123 3 days ago | parent | prev | next [-]

It's the best model pound for pound, but I find GPT 5.2 Thinking/Pro to be more useful for serious work when run with xhigh effort. I can get it to think for 20 minutes, but Gemini 3.0 Pro is like 2.5 minutes max. Obviously I lack full visibility because tok/s and token efficiency likely differ between them, but I take it as a proxy for how much compute they're giving us per inference, and it matches my subjective judgement of output quality. Maybe Google nerfs the reasoning effort in the Gemini subscription to save money and that's why I'm experiencing this.
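As a rough back-of-the-envelope version of that proxy (the tok/s figures below are assumptions for illustration, not measurements), thinking time times decode speed gives a crude upper bound on reasoning tokens:

    # Thinking time x decode speed ~ reasoning tokens spent per inference.
    # Both tokens_per_second values are made-up placeholders, not measured numbers.
    def approx_reasoning_tokens(thinking_minutes: float, tokens_per_second: float) -> int:
        return int(thinking_minutes * 60 * tokens_per_second)

    gpt_pro = approx_reasoning_tokens(thinking_minutes=20, tokens_per_second=50)      # 60000
    gemini_pro = approx_reasoning_tokens(thinking_minutes=2.5, tokens_per_second=120)  # 18000
    print(gpt_pro, gemini_pro)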

knowriju 3 days ago | parent | next [-]

When ChatGPT takes 20 minutes to reason, is it actually spending all that time burning tokens, or does the bulk of it go into 'scheduling' waits? If someone specifically selected xhigh reasoning, I'm guessing it can be processed with a high batch count.

3 days ago | parent [-]
[deleted]
cj 3 days ago | parent | prev [-]

I'm curious, what types of prompts are you running that benefit from 10+ minutes of think time?

What's the quality difference between default ChatGPT and Thinking? Is it an extra 20% quality boost, or is the difference night and day?

I've often imagined it would be great to have some kind of Chrome extension or third-party tool to always run prompts in multiple thinking tiers, so you can get an immediate response to read while you wait for the thinking models to think.
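A minimal sketch of that idea, assuming a hypothetical async call_model(prompt, effort) wrapper around whatever client you use (not any specific vendor API): fire off every tier at once and surface each answer as it lands:

    import asyncio

    # Hypothetical wrapper around your LLM client of choice; assumed to accept a
    # reasoning-effort level and return the answer text.
    async def call_model(prompt: str, effort: str) -> str:
        raise NotImplementedError  # plug in a real client here

    async def run_tier(prompt: str, effort: str) -> tuple[str, str]:
        return effort, await call_model(prompt, effort)

    async def run_all_tiers(prompt: str, tiers=("low", "medium", "high")) -> None:
        # Launch every tier concurrently and print each answer as soon as it finishes,
        # so the fast tier gives you something to read while the slow one is still thinking.
        tasks = [asyncio.create_task(run_tier(prompt, t)) for t in tiers]
        for finished in asyncio.as_completed(tasks):
            effort, answer = await finished
            print(f"--- {effort} ---\n{answer}\n")

    # asyncio.run(run_all_tiers("Plan the architecture for ..."))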

energy123 3 days ago | parent [-]

It's for planning system architecture when I want to get something good (against the criteria that I give it) rather than the first thing that runs.

I use Thinking and Pro. I don't use the default ChatGPT so can't comment on that. The difference between Thinking and Pro is modest but detectable. The 20 minute thinking times are with Pro, not with Thinking. But Pro only allows 60k tokens per prompt so I sometimes can't use it.

In the $200/month subscription they give you access to a "heavy thinking" tier for Thinking which increases test time compute by maybe 30% compared to what you get in Plus.

Version467 3 days ago | parent [-]

I recently bought into the $200 tier and was genuinely quite surprised at ChatGPT 5.2 Pro's ability for software architecture planning. If you give it ~60k tokens of your codebase and a thorough description of what you actually want to happen, then it comes up with very good ideas. The biggest difference to me is how thorough it is. This is already something I noticed with the codex high/xhigh models compared to gemini 3 pro and opus 4.5, but gpt pro is noticeably better still.

I guess it's not talked about as much because a lot fewer people have access to it, but after spending a bunch of time with gemini 3 and opus 4.5 I don't feel that openai has lost the lead at all. The benchmarks tell a different story, but for my real world use cases codex and gpt pro are still ahead. Better at sticking to my intent and fewer mistakes overall. It's slow, yes. But I can't write requirements as quickly as opus can misunderstand them anyway.

eru 3 days ago | parent | prev | next [-]

> [...] I don't see how OpenAI can catch back up.

For a while people couldn't see how Google could catch up, either. Have a bit of imagination.

In any case, I welcome the renewed intense competition.

solarkraft 3 days ago | parent | prev | next [-]

FWIW, my productivity tanks when my Claude allowance dries up in Antigravity. I don’t get the hype for Gemini for coding at all, it just does random crap for me - if it doesn’t throw itself into a loop immediately, which it did nearly every time I gave it yet another chance.

spiderfarmer 3 days ago | parent | prev [-]

You must be using it to create bombs or something. I never ran into an issue that I would blame on policy guardrails.

dyauspitr 3 days ago | parent | prev | next [-]

I’m not going to attribute Gemini’s success to Sergey. It was already basically there before he came back.

LarsDu88 3 days ago | parent [-]

Well, all the core people left to do other shit, and when Sergey came back, some of those people were hired back for exorbitant sums of money.

drewda 3 days ago | parent | prev | next [-]

It's not just millions. Shanahan received over a billion dollars when divorcing Brin: https://en.wikipedia.org/wiki/Nicole_Shanahan

Today's unscientific gutting of the CDC's childhood vaccine schedule is what is being accomplished with all that $GOOG money.

It's honestly very disturbing and rather than discuss it as a matter of politics, I'll just say that as a parent I'll be following the AAP's vaccination recommendations (even if their recommendations on baby sleep are impossible :)

maest 3 days ago | parent | next [-]

> even if their recommendations on baby sleep are impossible

If you put yourself in their shoes, you realise that you have to give advice for the 10th-20th percentile parents (or worse), because you are giving the same advice to everyone.

The alternative would be to offer more complex advice such as "if X Y and Z then do A, if only X do B", but the perception is that's too difficult for people to follow.

So you end up making very defensive (and therefore onerous) recommendations.

An interesting fact is that, since the introduction of the "baby sleeps on their back, alone" guidance, SIDS has gone down, but flat heads have gone up. It's probably been a good tradeoff, but it's still a tradeoff.

Also, I've seen a second-time mother refer to "don't cosleep" advice as "western nonsense", which I found funny because it puts things in perspective - vast swathes of the world think cosleeping with your child is safe, natural and normal.

yourapostasy 3 days ago | parent [-]

I wonder whether we're trending towards a high-sensor variation of "A Young Lady's Illustrated Primer" / Vannevar Bush's Memex that ingests the details of a user's daily life (smart glasses being a primitive first example of such products) and identifies the salient information in it, which could help us perform mass customization of instructions into direct prescriptives, with backing evidentiary data for SMEs. Instead of "if X, Y and Z then do A; if only X, do B", the interaction becomes "do this, anticipate that outcome" for the user, and if an SME (a doctor in your example) asks about it, the system recalls and presents all the factors that went into deciding on the specific prescriptive.
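Purely as a hypothetical sketch of that interaction (the rule set and field names are invented for illustration): the user sees only the prescriptive, while the factors behind it are kept around for the SME to inspect:

    from dataclasses import dataclass, field

    @dataclass
    class Prescriptive:
        action: str                                        # what the user is told to do
        anticipated_outcome: str                           # what they are told to expect
        factors: list[str] = field(default_factory=list)   # evidence retained for the SME

    def decide(observations: set[str]) -> Prescriptive:
        # Collapse "if X, Y and Z then A; if only X then B" style rules into a single
        # instruction, recording which observed facts drove the decision.
        if {"X", "Y", "Z"} <= observations:
            return Prescriptive("do A", "anticipate outcome 1", sorted(observations))
        if "X" in observations:
            return Prescriptive("do B", "anticipate outcome 2", ["X"])
        return Prescriptive("no action", "no change expected", [])

    p = decide({"X", "Y", "Z"})
    print(p.action)    # user-facing: "do A"
    print(p.factors)   # SME-facing audit trail: ['X', 'Y', 'Z']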

ngcazz 3 days ago | parent | prev [-]

While Brin comes back to Google to advocate for 60-hour workweeks as it lays off thousands of employees.

neilv 3 days ago | parent [-]

I'd be happy to do 60-hour weeks of good work, in a good environment.

I wouldn't want 60-hour weeks of dealing with a lot of promotion-seekers, though.

I wonder how different Google would be if they'd just paid people enough money they didn't have to think about money, but it was the same amount of money to everyone. You do the work, not for promotions, but because you like doing the work. You can train up for and transfer to different kinds of roles, but they pay the same.

MagicMoonlight 3 days ago | parent | next [-]

Why would I want to be paid the same amount as any moron that gets in? What motivation is there for me to work hard?

neilv 3 days ago | parent | next [-]

* You like the mission.

* You like the craft.

* You want to be there for your team.

* You like that your financial needs are taken care of, so that you don't have to think about that.

* You like that everyone else's financial needs are taken care of, because you want everyone to be happy.

* You like that there's alignment by everyone on this. (Even though there will be disagreements on, say, how best to accomplish the mission.)

If someone gets in and doesn't actually have or find motivations like that, or doesn't rise to the occasion despite help, I guess they'd be managed out. That cultural mismatch wouldn't be good for anyone involved.

aleph_minus_one 3 days ago | parent | prev | next [-]

> Why would I want to be paid the same amount as any moron that gets in?

You answered your own question: the company has to prevent these morons from getting in.

TeMPOraL 2 days ago | parent | prev [-]

That's what's beautiful about this scheme: people with the attitude you presented would self-select out of it.

That solves half of the problem of typical work dynamics already; the second half, preventing unqualified morons from getting in and setting themselves up for life by being paid good money for doing nothing, would need to be solved in some other way.

Balinares 3 days ago | parent | prev | next [-]

Honestly, though, screw even that.

There are so many things worth doing in so many areas that pinning your whole weekly life on a single one is just an immense waste.

Cap the time that a company gets to have from you, and achieve so much more.

ngcazz 3 days ago | parent | prev [-]

Okay? I'm not making a point about how long individuals should want to spend working (although, this being 2026, I believe it should be less, not more).

Alphabet has effectively monetized the world economy and gained outsized influence on policy, and Brin holds about 25% of the voting shares in the company.

His money is going towards advocating that people widely forfeit a right won by labor movements in the early 20th century, and, through his ex, towards making public-sector scientific research unviable.

This amounts nakedly (if fortuitously) to a further consolidation of power and capital in the hands of a powerful few.

neilv 3 days ago | parent [-]

I fully agree about the labor rights concerns.

(In my head at 2am, I was (wrongly) taking that as a given, understood by everyone, and then remarking on a tangent from there. About the implications of 60hr/wk at Google specifically. And then going from there, about how maybe it didn't have to be like that. Moot for Google in reality, but it makes a good example for what-if thinking or daydreaming about how we'd like the next good tech employer to be.)

vjk800 3 days ago | parent | prev | next [-]

I've been a huge sceptic of the whole AI hype since the beginning. Whenever I've tried any of the AI tools, the results have just been underwhelming. However, two weeks ago I tried Gemini (the Pro version) and have been using it for various random tasks and questions since then, and I've been pretty impressed.

There seems to be much less hallucination of facts than in other tools I've tried, and whenever Gemini makes assumptions about things I didn't explicitly specify in the prompt, it says so. The answers also always have a nice structure: they start with a short, concise version, then give me options and more details and considerations.

I also like the feature that I can make it remember facts across chats. I'm a physicist by training and I've told Gemini so, so now every time I ask something, it gives me an answer perfectly tailored for a physicist (often with mathematical formulas, etc.).

alex1138 3 days ago | parent | prev | next [-]

[flagged]

alex1138 3 days ago | parent [-]

[flagged]

dang 3 days ago | parent | next [-]

I know how frustrating it is to have a contrarian opinion and get smacked with the majority's reaction to it (believe me, I know - it's an asymmetric experience in the worst way) - but lashing out is not a good way to react to that. It only makes things worse.

https://news.ycombinator.com/newsguidelines.html

lovich 3 days ago | parent [-]

It’s very frustrating for the bot’s reinforcement function.

This is another account, created after widespread access to LLMs became available to the public, that is pushing a political view which is somewhat coherent until pressed, and then it falls apart like all chat bots do.

Maybe it's a real person and I'm being an asshole here, but it's hard to tell.

The fact that it's hard to tell whether they are real or not means we need to come up with a heuristic to identify actual humans, now that passing the Turing test has become trivially cheap.

dang 2 days ago | parent | next [-]

> Maybe it’s a real person and I’m being an asshole here, but it’s hard to tell.

The site guidelines are clear on this: you should assume that it's a real person and try your best to reel back these sorts of accusations, which are nearly always wrong, and nearly always driven by differences of background and (therefore) opinion.

https://news.ycombinator.com/newsguidelines.html

I'm rushing out the door just now but here are a couple of past explanations about this:

https://news.ycombinator.com/item?id=35932851 (May 2023)

https://news.ycombinator.com/item?id=41948722 (Oct 2024)

(as well as https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme... of course)

alex1138 3 days ago | parent | prev [-]

[flagged]

rcxdude 3 days ago | parent | next [-]

Maybe, but someone engaged with you and your response showed it was utterly unproductive to do so. COVID may have been mishandled, but that does nothing to justify the administration's decisions regarding healthcare. If that's the reason you think positively of the man, then you should evaluate the rest of what he's said and done.

If you have the time, two podcasts from this doctor which I think kind of highlight what's going on:

https://www.youtube.com/watch?v=_OF6vP-SkGA (where they have a frank discussion about what was done badly during COVID, including government lies)

https://www.youtube.com/watch?v=WBllzAb_vAk (where they have a discussion with one of the leading researchers on nutrition, who has come into direct conflict with RFK Jr. because he doesn't say exactly what RFK Jr. believes to be the case, and has had papers censored and funding cut as a result)

alex1138 3 days ago | parent [-]

So if we're linking stuff (and please don't scream at me for Rumble; YouTube loves removing videos), here's one: https://rumble.com/vt62y6-covid-19-a-second-opinion.html

I was initially trying to make the point that the ideological lines people have drawn lead them to automatically assume RFK is anti-science, and as a consequence they carry a whole host of assumptions, which I don't even blame them for if they haven't spent time reading up on it. I apologize for not countering every single point and going to COVID instead, but it's worth pointing out that RFK a) raises some very substantive charges against Fauci for frankly war-criminal-like actions throughout his unfortunate history of practicing medicine, and b) it wasn't "mismanagement": they did, seriously, the opposite of good practice (both accepted good practice and what was discovered from 2020 onward) in just about every case.

If you're not sympathetic to that, then of course you're going to disagree, and you might think the only reason someone like Musk bought X (and please don't think this is me fanboying over Musk; I dislike him for several reasons) is just to have a joyride (which is still possible). They used to ban so many people, including real doctors, for information they didn't like, and it was a serious problem; if they could do that for COVID, they also did it for other things.

Edit: (see, I do know how to edit comments) here it is on YouTube, which has the highlights: https://www.youtube.com/watch?v=9jMONZMuS2U

lovich 3 days ago | parent | prev [-]

[flagged]

alex1138 3 days ago | parent [-]

[flagged]

dang 2 days ago | parent | next [-]

Would you please stop breaking the site guidelines? You're doing it quite badly. I don't want to ban you, but posts like this make it hard not to, because people will start accusing us of enforcing the rules unfairly.

https://news.ycombinator.com/newsguidelines.html

Edit: perhaps this will help: https://news.ycombinator.com/item?id=41948722. More at https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que....

alex1138 2 days ago | parent [-]

I do apologize. However, it's also fairly bad faith for this other person to repeatedly accuse me of being a bot

alex1138 2 days ago | parent [-]

Also, many of my comments which are not controversial are similarly marked "flagged". Maybe that's a way to defuse flame wars, but it sure leaves a bad taste and bad optics.

lovich 3 days ago | parent | prev [-]

[flagged]

alex1138 3 days ago | parent [-]

[flagged]

renewiltord 3 days ago | parent | prev | next [-]

[flagged]

alex1138 3 days ago | parent | prev [-]

[flagged]

tombert 3 days ago | parent [-]

You could just edit your first comment instead of replying to yourself multiple times.

I haven't read RFK's book, and even if every single fact in that particular book were true (which I very highly doubt), it wouldn't change anything he's done in the last year: gutting American medical research, spreading misinformation about vaccines that's debunked within their own "research", and coming up with the absolutely genius idea of "tell doctors to tell patients to eat better" to "fix" Americans' illnesses. Oh, and telling people to eat fried food as long as it's fried with beef tallow. That's really dumb.

He's an utter and complete moron at best, and the only reason that people (like you presumably) listen to him is because of his last name.

alex1138 3 days ago | parent [-]

[flagged]

tombert 3 days ago | parent [-]

I'm sorry, I don't seem to recall mentioning anything about COVID. Let me check my reply....nope! I mentioned stuff that happened in the last year.

So even if you were right about COVID, what you just wrote isn't a rebuttal to anything I said. Though I suspect you know that.

It's almost like your response is dishonestly trying to muddy the waters.

alex1138 3 days ago | parent [-]

[flagged]

tombert 3 days ago | parent [-]

[flagged]

nsoonhui 3 days ago | parent | prev [-]

Sergey might have had some positive influence on Gemini, but given that he isn't an AI scientist (i.e., no technical background), I really do wonder what sort of influence only he could have had, beyond just bringing in key people.

slyall 3 days ago | parent | next [-]

I assume by "no technical background" you mean he doesn't have a PhD in AI.

He's probably not developing the low-level algorithms but he can probably do everything else and has years of experience doing so.

He's also perfectly able to spend 60 hours a week improving his AI skills using the best teachers in the world.

almostgotcaught 3 days ago | parent [-]

[flagged]

slyall 3 days ago | parent [-]

I'm not sure what you mean by "he literally can't and he literally doesn't", but he's got a PhD in CompSci and did everything at Google in the past, from writing complex code himself to managing small teams to managing huge teams.

Exactly what do you think he can't do?

Certainly he's well qualified to manage a team of a few thousand (?) AI people and understand what they are talking about and get the best out of them.

Like Batman he has the superpower of money. If he has gaps he can pay (or otherwise arrange) for someone with those skills to 1-1 coach him in them.

He's not trying to become a top researcher, he's trying to learn enough to understand what they are talking about and be able to make decisions around say what areas should be pursued.

almostgotcaught 3 days ago | parent [-]

[flagged]

dang 3 days ago | parent [-]

Could you please stop posting in the flamewar style? You've unfortunately been doing it repeatedly. It's not what this site is for, and destroys what it is for.

If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.

Edit: you've been breaking the site guidelines repeatedly and extremely badly:

https://news.ycombinator.com/item?id=46470097

https://news.ycombinator.com/item?id=46461928

https://news.ycombinator.com/item?id=46460655

https://news.ycombinator.com/item?id=46426226 (Dec 2025)

https://news.ycombinator.com/item?id=46425616 (Dec 2025)

https://news.ycombinator.com/item?id=46420674 (Dec 2025)

https://news.ycombinator.com/item?id=46394806 (Dec 2025)

https://news.ycombinator.com/item?id=46293387 (Dec 2025)

This is such a high proportion of what you've been posting that I think we have to ban the account. I don't want to do that, because it's clear that you know a lot about things that people here are interested in—but the damage caused by these poisonous, aggressive comments is greater than the benefit you've been adding by sharing knowledge.

If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.

fragmede 3 days ago | parent | prev [-]

The man who invented PageRank while at Stanford for a PhD doesn't have a technical background? He did not get that PhD because he founded Google. He may not be as smart as you think you are, but he's no slouch either.

ithkuil 3 days ago | parent [-]

A charitable interpretation of what GP said is that Brin might not have specific expertise in AI.

I also think this doesn't make sense, because he has certainly stayed on top of things.