Uehreka 2 hours ago
It feels like a lot of people keep falling into the trap of thinking we’ve hit a plateau, and that they can shift from “aggressively explore and learn the thing” mode to “teach people solid facts” mode.

A week ago Scott Hanselman went on the Stack Overflow podcast to talk about AI-assisted coding. I generally respect that guy a lot, so I tuned in and… well, it was kind of jarring. The dude kept saying things in this really confident and didactic (teacherly) tone that were months out of date. In particular I recall him making the “You’re absolutely right!” joke and asserting that LLMs are generally very sycophantic, and I was like “Ah, I guess he’s still on Claude Code and hasn’t tried Codex with GPT 5”. I haven’t heard an LLM say anything like that since October, and in general I find GPT 5.x to actually be a huge breakthrough in terms of asserting itself when I’m wrong and not flattering my every decision. But that news (which would probably be really valuable to many people listening) wasn’t mentioned on the podcast, I guess because neither of the guys had tried Codex recently.

And I can’t say I blame them: it’s really tough to keep up with all the changes but also spend enough time in one place to learn anything deeply. But I think a lot of people who are used to playing the teacher role may need to eat a slice of humble pie and get used to speaking in uncertain terms until all of this starts to slow down.
orbital-decay 2 hours ago
> in general I find GPT 5.x to actually be a huge breakthrough in terms of asserting itself when I’m wrong

That's just a different bias purposefully baked into GPT-5's engineered personality during post-training. It always tries to contradict the user, including in cases where it's confidently wrong, and keeps justifying the wrong result in a funny manner if pressed or argued with (as in, it would never have made that obvious mistake if it weren't bickering with the user). GPT-5.0 in particular was very strongly fine-tuned to do this. And in longer replies or multi-turn conversations, it falls into a loop of contradictory behavior far too easily. This is no better than sycophancy. LLMs need an order of magnitude better nuance/calibration/training; that requires human involvement and scales poorly.

Fundamental LLM phenomena (in-context learning, repetition, serial position biases, consequences of RL-based reasoning, etc.) haven't really changed, and they're worth studying for a layman to get some intuition. However, they vary a lot from model to model due to subtle architectural and training differences, and it's impossible to keep up because there are so many models and so few benchmarks that measure these phenomena.
alternatetwo an hour ago
Claude is still just like that once you’re deep enough in the valley of the conversation. Not exactly that phrase, but things like “that’s the smoking gun” and so on. Nothing has changed.
MoltenMan 2 hours ago
I agree with a lot of what you've said, but I completely disagree that LLMs are no longer sycophantic. GPT-5 is definitely still very sycophantic; “You're absolutely right!” still happens, etc. It's true that it happens far less in a pure coding context (Claude Code / Codex), but I suspect that's only because of the system prompts, and those tools are by far the minority of LLM usage. I think it's enlightening to open up ChatGPT on the web with no custom instructions, just send a regular request, and see the way it responds.
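A rough way to poke at the system-prompt hypothesis from the API side (a minimal sketch, not something from the thread: it assumes the openai Python package, an OPENAI_API_KEY env var, and an illustrative model name) is to send the same message with and without a terse, coding-agent-style system prompt and compare the tone:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # A deliberately questionable claim, to see whether the model pushes back.
    USER_MSG = "I think we should store all timestamps as local time strings."

    def ask(system_prompt=None):
        # Omitting the system message approximates "no custom instructions".
        messages = []
        if system_prompt:
            messages.append({"role": "system", "content": system_prompt})
        messages.append({"role": "user", "content": USER_MSG})
        # Model name is illustrative; swap in whatever you're testing.
        resp = client.chat.completions.create(model="gpt-5", messages=messages)
        return resp.choices[0].message.content

    # Default personality vs. a terse, reviewer-style prompt.
    print(ask())
    print(ask("You are a blunt code reviewer. Point out mistakes directly."))

Not a rigorous test, obviously, but running a handful of prompts like this side by side makes it easy to see how much of the "personality" is the prompt rather than the model.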