richardw | a day ago
One thing I think I've found: reasoning models get more confident, and that makes it harder to dislodge a wrong idea. It feels like I only have 5% of the control; the model then goes into a self-chat where it thinks it's right and builds on its misunderstanding, so 95% of the outcome is driven by its own rambling, not my input. Windsurf seems to do a good job of regularly injecting guidance so the model sticks to what I've said. But I've had some extremely annoying interactions with confident-but-wrong "reasoning" models.
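
For anyone wondering what "regularly injecting guidance" might look like mechanically, here's a minimal sketch assuming an OpenAI-style chat message list. This is not a claim about Windsurf's actual implementation; the function name, message format, and cadence are all made up for illustration:

```python
# Minimal sketch of periodic guidance re-injection (hypothetical, not
# Windsurf's actual mechanism): re-append the user's original constraints
# after every N assistant turns so a long self-chat keeps getting pulled
# back toward what the user actually asked for.

REINJECT_EVERY = 4  # hypothetical cadence: re-anchor every 4 assistant turns

def with_reinjected_guidance(messages, user_constraints, every=REINJECT_EVERY):
    """Return a copy of `messages` with a reminder of the user's original
    constraints inserted after every `every` assistant turns."""
    out = []
    assistant_turns = 0
    for msg in messages:
        out.append(msg)
        if msg["role"] == "assistant":
            assistant_turns += 1
            if assistant_turns % every == 0:
                out.append({
                    "role": "user",
                    "content": "Reminder of my original constraints:\n"
                               + user_constraints,
                })
    return out
```

The design tradeoff is that each re-injection costs tokens and can feel repetitive, but it counteracts exactly the failure mode described above: the longer the model builds on its own output, the more weight its earlier (possibly wrong) conclusions carry relative to the user's initial instructions.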