alvis a day ago
It's interesting to see this quote: `for the bottom 10% of user turns sorted by model-generated tokens (including hidden reasoning and final output), GPT‑5-Codex uses 93.7% fewer tokens than GPT‑5` It sounds like it can handle simple tasks much more efficiently. That's impressive to me. Today's coding agents tend to pretend they're working hard by generating lots of unnecessary code. I hope it's true.
bn-l a day ago | parent
This is my issue with GPT-5. If you use low or medium reasoning, it's garbage. If you use high, it'll think for up to five minutes on something dead simple.