typs 3 days ago

I’m not sure I fully understand the rationale for having newer mini versions (e.g. o3-mini, o4-mini) when previous thinking models (e.g. o1) and smart non-thinking models (e.g. gpt-4.1) exist. Does anyone here use these for anything?

sho_hn 3 days ago

I use o3-mini-high in Aider, where I want a model that employs reasoning but don't want to put up with the latency of the non-mini o1.
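The "-high" suffix corresponds to the `reasoning_effort` request parameter on o-series models. A minimal sketch, assuming the OpenAI Chat Completions API; the prompt text is a placeholder, and no request is actually sent here:

```python
# "o3-mini-high" = model "o3-mini" with reasoning_effort set to "high".
params = {
    "model": "o3-mini",
    "reasoning_effort": "high",  # "low", "medium", or "high"
    "messages": [{"role": "user", "content": "Refactor this function."}],
}

# An actual call would look like:
#   from openai import OpenAI
#   client = OpenAI()
#   response = client.chat.completions.create(**params)
print(params["model"], params["reasoning_effort"])
```

Higher effort spends more reasoning tokens per request, trading latency and cost for answer quality.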

drvladb 3 days ago

o1 is a much larger model and more expensive for OpenAI to operate. Having a smaller, "newer" model (roughly equating newer with more capable) means you can match the performance of larger, older models while reducing inference and API costs.
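The cost argument above is just per-token arithmetic. A quick sketch; the per-million-token prices below are hypothetical placeholders chosen only to show the calculation, not OpenAI's actual pricing:

```python
def request_cost(input_tokens, output_tokens, price_in, price_out):
    """Cost in dollars, with prices given per million tokens."""
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000

# Hypothetical: a large model at $15/$60 per million input/output tokens
# versus a mini model at $1.10/$4.40, for the same 10k-in / 5k-out request.
large = request_cost(10_000, 5_000, 15.00, 60.00)
mini = request_cost(10_000, 5_000, 1.10, 4.40)
print(f"large: ${large:.3f}, mini: ${mini:.3f}")  # → large: $0.450, mini: $0.033
```

If the mini model matches the larger one on your task, the same workload runs at a fraction of the cost, which compounds quickly at high request volume.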