HardCodedBias 8 hours ago

Bard was horrible compared to the competition at the time.

Gemini 1.0 was strictly worse than GPT-3.5 and was unusable due to "safety" features.

Google followed that up with 1.5, which was still worse than GPT-3.5 and unbelievably far behind GPT-4. Around the same time, Google had its "black Nazi" scandals.

With Gemini 2.0, Google finally had a model that was at least useful for OCR, and with their fash series a model that, while not up to par in capabilities, was sufficiently inexpensive that it found uses.

Only with Gemini 2.5 did Google catch up to SoTA. It was within "spitting distance" of the leading models.

Google did indeed drop the ball, very, very badly.

I suspect that Sergey coming back helped immensely, somehow -- that he was able to tame some of the more dysfunctional elements of Google, at least for a time.

louisbourgault an hour ago | parent | next [-]

I feel like 1.5 was still pretty good -- my school blocked ChatGPT at the time but didn't bother with anything else, so I used Gemini more than any other model for general research help, and it was fine. That blocking is probably the biggest reason I use Gemini 90% of the time now: my school can never block Google Search, and AI Mode is part of it now. That, and the Android integration.

To be fair, for my use case (apart from GitHub Copilot work with Claude 4.5 Sonnet) I've never noticed too big a difference between the actual models, and I'm more inclined to judge them by their ancillary services and speed, which Google excels at.

astrange 3 hours ago | parent | prev [-]

> their fash series

Unfortunate typo.