bezzi 18 hours ago

Is this model just super slow for anyone else?

Topfi 6 hours ago | parent | next [-]

I have seen fluctuations in tokens/sec. Early yesterday it was roughly equivalent to non-Codex GPT-5 (this branding ...); late yesterday I had a severe drop-off in tokens/sec. Today it seems to have improved again, and with the reduced amount of unnecessary/rambling token output, GPT-5-Codex (Medium) seems faster overall. LLM rollouts always have this back-and-forth in tokens/sec, especially in the first few days.

e1g 17 hours ago | parent | prev [-]

Extremely slow for me; it takes minutes to get anything done. Regular GPT-5 was much faster. Hoping it's mostly launch-day load.

bigwheels 17 hours ago | parent [-]

I've been using gpt-5 with effort=high, but for gpt-5-codex, try `-c model_reasoning_effort=medium`.

On high it is totally unusable.
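
For reference, the same override can be made persistent instead of passed on every invocation. A sketch, assuming the Codex CLI's default config location (`~/.codex/config.toml`) and the same key name as the flag above:

```toml
# ~/.codex/config.toml (path assumed; check your Codex CLI docs)
# Equivalent to passing -c model_reasoning_effort=medium on each run.
model_reasoning_effort = "medium"
```

With this in place, plain `codex` invocations should default to medium effort without the `-c` flag.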

fabric-ai 3 hours ago | parent [-]

Even on medium ... gpt-5 was way faster, at least that's my first impression.