logicallee 5 hours ago

Have you compared it with Claude Code at all? Is there a subscription model for Gemini similar to Claude's? Does it have an agent like Claude Code or ChatGPT Codex? What are you using it for? How does it do with large contexts? (Claude Code has a 1 million token context.)

m-schuetz an hour ago | parent | next [-]

I tried Claude Opus, but at least for my tasks Gemini gave better results. Both were way better than ChatGPT. Haven't done anything agentic yet; I'm waiting for those tools to mature a bit more.

landl0rd 4 hours ago | parent | prev | next [-]

- yes, pretty close to opus performance

- yes

- yes (not quite as good as CC/Codex but you can swap the API instead of using gemini-cli)

- same stuff as them

- better than others; google got long (1M-token) context right before anyone else and doesn't charge two kidneys, an arm, and a leg like anthropic

logicallee 3 hours ago | parent [-]

thanks for these answers.

airstrike 5 hours ago | parent | prev [-]

it's nowhere near claude opus

but claude and claude code are different things

dudeinhawaii an hour ago | parent | next [-]

My take has been...

Gemini 3.1 (and Gemini 3) are a lot smarter than Claude Opus 4.6

But...

The Gemini 3 series models are mediocre at best at agentic coding.

There's a difference between answering single-shot questions about a code problem and "build this feature autonomously".

Gemini's CLI harness is just not very good, and Gemini's approach to agentic coding leaves a lot to be desired. It doesn't do the double-checking that Codex does, it's slower than Claude, and it runs off and does things without asking or clearly explaining why.

logicallee 3 hours ago | parent | prev [-]

(Claude Code now runs Claude Opus, so they're not so different.)

>it's [Gemini] nowhere near claude opus

Could you be a bit more specific? Your sibling reply says "pretty close to opus performance", so it would help if you gave more detail about how you use it and how you feel the two compare. Thanks.