frde 5 hours ago
Don't want to sound rude, but anytime anyone says this I assume they haven't tried agentic coding tools and are still copy-pasting coding questions into a web input box. I'd be really curious to know which tools you've tried and are using where Gemini feels better to use.
f311a 4 hours ago
It's good enough if you don't go wild and let LLMs produce 5k+ lines in one session. In a lot of industries you can't afford that anyway, since all code has to be carefully reviewed. A lot of models are great when you make isolated changes of 100-1000 lines. Sometimes it's okay to ship a lot of LLM-generated code, especially for the frontend. But there are plenty of companies and tasks where backend bugs are expensive, whether in lost customers or in direct money. No model will let you go wild in that case.
dudeinhawaii 2 hours ago
My experience is that on large codebases with tricky problems, you eventually get an answer quicker if you can send _all_ the relevant context to a large model and let it crunch on it for a long period of time.

Last night I was happily coding away with Codex after writing off Gemini CLI yet again due to weirdness in the CLI tooling. I ran into a very tedious problem that all of the agents failed to diagnose; they were confidently patching random things back and forth as solutions (Claude Code - Opus 4.6, GPT-5.3 Codex, Gemini 3 Pro CLI). I took a step back, used a Python script to extract all of the relevant parts of the codebase, popped open the browser, and had Gemini 3 Pro set to Pro (highest) reasoning and GPT-5.2 Pro crunch on it. They took a good while thinking, but they narrowed the problem down to a complex interaction between texture origins, polygon rotations, and a mirroring implementation that was causing issues for one single "player model" running through a scene and not for any other model in the scene. You'd think the "spot the difference" aspect would make the problem easier. It did not.

I then took Gemini's proposal and passed it to GPT-5.3 Codex to implement. It actually pushed back and said "I want to do some research because I think there's a better code solution to this." I waited a bit. It solved the problem in the most elegant and compatible way possible.

So, that's a long-winded way of saying that there _is_ a use for a very smart model that only works in the browser or via API tooling, as long as it has a large context and can think for ages.
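For anyone curious about the "extract all of the relevant codebase" step, here is a minimal sketch of what such a script could look like, assuming a simple walk-and-concatenate approach; the project path, file extensions, and output name are placeholders, not the commenter's actual tooling.

    # A minimal sketch (assumptions only, not the commenter's actual script):
    # walk a project tree, collect files with a few assumed extensions, and
    # concatenate them with path headers into one blob you can paste into a
    # browser-based model.
    from pathlib import Path

    PROJECT_ROOT = Path("./my_project")      # hypothetical project location
    EXTENSIONS = {".py", ".glsl", ".json"}   # hypothetical "relevant" file types
    OUTPUT = Path("context_dump.txt")

    def collect_context(root: Path) -> str:
        chunks = []
        for path in sorted(root.rglob("*")):
            if path.is_file() and path.suffix in EXTENSIONS:
                # Prefix each file with its relative path so the model can
                # keep track of where each snippet lives in the codebase.
                rel = path.relative_to(root)
                chunks.append(f"===== {rel} =====\n{path.read_text(errors='replace')}")
        return "\n\n".join(chunks)

    if __name__ == "__main__":
        OUTPUT.write_text(collect_context(PROJECT_ROOT))
        print(f"Wrote {OUTPUT} ({OUTPUT.stat().st_size} bytes)")

Whatever form the script takes, the idea is the same: dump the relevant files with their paths into a single prompt so the high-reasoning browser model sees the whole picture at once.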
parliament32 2 hours ago
Every time I've tried to use agentic coding tools, they've failed so badly that I'm convinced the entire concept is a bamboozle to get customers to spend more tokens.
gman83 4 hours ago
You need to stick Gemini in a straitjacket; I've been using https://github.com/ClavixDev/Clavix. With something like that, even Gemini 3 Flash becomes usable. Without it, it more often than not just loses the plot.
segfaultex 4 hours ago
Conversely, I have yet to see agentic coding tools produce anything I’d be willing to ship. | ||||||||
m00x 4 hours ago
Gemini is a generalist model and works better than all existing models on generalist problems. Coding has been vastly improved in 3.0 and 3.1, but, as usual, Google won't give us the full juice.