r_lee a day ago
Plus I've found that with "thinking" models overall, the thinking acts more like extra memory than an actual performance boost. It can even make things worse: if the model goes even slightly wrong in the "thinking" part, it then commits to that mistake in the actual response.
verdverm a day ago | parent
For sure, the difference in the most recent model generations makes them far more useful for many daily tasks. This is the first generation with thinking as a significant mid-training focus, and it shows: gemini-3-flash stands well above gemini-2.5-pro.