▲ wahnfrieden 5 days ago
It doesn’t though. Fast but dumb models don’t progressively get better with more iterations.
▲ Jcampuzano2 5 days ago
There are many ways to skin a cat. Often all it takes is resetting to a checkpoint or undoing, then adjusting the prompt a bit with additional context, and even dumber models can get things right. I've used Grok Code Fast plenty this week, alongside GPT-5 when I need to pull out the big guns, and it's refreshing to use a fast model for smaller changes or for tasks that are tedious but repetitive, like refactoring.
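A rough sketch of that loop, just to make the workflow concrete. `run_model`, `verify`, the model names, and the retry limit are hypothetical stand-ins, not any particular tool's API:

    # Illustrative sketch: try the fast model first, retry with more context,
    # escalate to the stronger model only when it keeps missing.
    # run_model() and verify() are placeholders for whatever harness you use.

    FAST_MODEL = "fast-model"      # cheap, quick model for routine edits
    STRONG_MODEL = "strong-model"  # slower, smarter model for hard problems
    MAX_FAST_ATTEMPTS = 3


    def run_model(model: str, prompt: str) -> str:
        """Stand-in for a call to a coding model/agent; returns a candidate diff."""
        raise NotImplementedError


    def verify(diff: str) -> bool:
        """Stand-in for checking the result, e.g. running the tests."""
        raise NotImplementedError


    def attempt_task(task: str) -> str:
        prompt = task
        for _ in range(MAX_FAST_ATTEMPTS):
            diff = run_model(FAST_MODEL, prompt)
            if verify(diff):
                return diff
            # "Reset to a checkpoint" by discarding the diff, then retry the
            # fast model with a bit more context added to the prompt.
            prompt = f"{task}\n\nPrevious attempt failed verification; be more careful."
        # Only now pull out the big guns.
        return run_model(STRONG_MODEL, prompt)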
▲ dmix 5 days ago
That very much depends on the use case. Different models for different things. Not everyone is solving complicated things every time they hit cmd-k in Cursor or use autocomplete, and they can easily switch to a different model when working out harder stuff via longer-form chat.
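Put another way, it's a routing decision per task type. A toy sketch, with made-up model names and a hypothetical `pick_model` helper:

    # Toy illustration of picking a model per task type rather than using one
    # model for everything. Names are placeholders, not real product identifiers.

    MODEL_FOR_TASK = {
        "autocomplete": "fast-model",         # inline completions: latency matters most
        "inline_edit": "fast-model",          # quick cmd-k style edits
        "refactor": "fast-model",             # repetitive but mechanical changes
        "design_discussion": "strong-model",  # longer-form chat about hard problems
    }


    def pick_model(task_type: str) -> str:
        # Default to the stronger model when we're unsure how hard the task is.
        return MODEL_FOR_TASK.get(task_type, "strong-model")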