mmaunder 5 days ago
Used the hell out of Gemini 3 Flash, with some 3 Pro thrown in, for the past 3 hours on CUDA/Rust/FFT code that is performance critical. I now have a Gemini-flavored cocaine hangover, have gone crawling back to Codex GPT 5.2 xhigh, and am making slower progress but with higher quality code.

Firstly, 3 Flash is wicked fast and seems to be very smart for a low-latency model, and it's a rush just watching it work. But much like the YOLO mode that exists in Gemini CLI, 3 Flash seems to YOLO into solutions without fully understanding all the angles, e.g. why something was intentionally designed in a way that at first glance may look wrong but ended up that way through hard-won experience. Codex GPT 5.2 xhigh, on the other hand, does consider more angles.

It's a hard come-down off the high of using it for the first time, because I really really really want these models to go that fast and to have that much context window. But it ain't there. And it turns out that for my purposes, the longer chain of thought that Codex GPT 5.2 xhigh seems to engage in is a more effective approach in terms of outcomes. And I hate that reality, because having to break a lift into 9 stages instead of just doing it in a single wicked-fast run is just not as much fun!