▲ rubslopes 3 hours ago
I don't understand this sentiment. It may hold true for other LLM use cases (image generation, creative writing, summarizing large texts), but when it comes to coding specifically, Google is *always* behind OpenAI and Anthropic, despite having virtually infinite processing power and money, and despite being the ones who started this race in the first place.

Until now, I've only ever used Gemini for coding tests. As long as I have access to GPT models or Sonnet/Opus, I never want to use Gemini. Hell, I even prefer Kimi 2.5 over it. I tried it again last week (Gemini Pro 3.0) and, right at the start of the conversation, it made the same mistake it's been making for years: it said "let me just run this command," and then did nothing.

My sentiment is actually the opposite of yours: how is Google *not* winning this race?
▲ hobofan 3 hours ago | parent
> despite having virtually infinite processing power, money

Just because they have the money doesn't mean they spend it excessively. OpenAI and Anthropic both offer coding plans that are possibly severely subsidized, since they are more concerned with growth at all costs, while Google is more concerned with profitability. Google has the bigger war chest and could simply wait until the other two run out of money rather than forcing growth on that product line through unprofitable means.

Maybe they are also running much closer to their compute limits than the other two, and their TPUs are already saturated with API usage.