sambaumann 2 hours ago
I feel like this has gotten much worse since they were introduced. I guess they're optimizing for verbosity in training so they can charge for more tokens. It makes chat interfaces much harder to use IMO. I tried using a custom instruction in ChatGPT to make responses shorter, but I found the output was often nonsensical when I did.
gs17 2 hours ago
Yeah, ChatGPT has gotten so much worse about this since the GPT-5 models came out. If I mention something once, it will repeatedly come back to it in every message after, regardless of whether the topic has changed, and asking it to stop mentioning that specific thing works, except it then finds a new obsession. We also get the follow-up "if you'd like, I can also..." which is almost always either obvious or useless. I occasionally go back to o3 for a turn (it's the last of the real "legacy" models remaining) because it doesn't have these habits as bad.