| ▲ | rsanek 4 days ago |
| Looks to be the ~same intelligence as gpt-oss-120B, but about 10x slower and 3x more expensive? https://artificialanalysis.ai/models/deepseek-v3-1-reasoning |
|
| ▲ | easygenes 3 days ago | parent | next [-] |
| Other benchmark aggregates are less favorable to GPT-OSS-120B: https://arxiv.org/abs/2508.12461 |
| |
| ▲ | petesergeant 3 days ago | parent [-] | | With all these things, it depends on your own eval suite. gpt-oss-120b works as well as o4-mini over my evals, which means I can run it via OpenRouter on Cerebras where it's SO DAMN FAST and like 1/5th the price of o4-mini. | | |
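A minimal sketch of that setup, for anyone curious. OpenRouter exposes an OpenAI-compatible endpoint; the provider-pinning field shown here is an assumption based on OpenRouter's routing options, not something from the comment:

    # Sketch: calling gpt-oss-120b through OpenRouter's OpenAI-compatible API.
    # The provider pin ("Cerebras") is an assumed routing option.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key="YOUR_OPENROUTER_KEY",
    )

    resp = client.chat.completions.create(
        model="openai/gpt-oss-120b",
        messages=[{"role": "user", "content": "Summarize the CAP theorem in two sentences."}],
        extra_body={"provider": {"order": ["Cerebras"]}},  # prefer the fast host
    )
    print(resp.choices[0].message.content)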
| ▲ | indigodaddy 3 days ago | parent [-] | | How would you compare gpt-oss-120b to the following (for coding)?
- Qwen3-Coder-480B-A35B-Instruct
- GLM 4.5 Air
- Kimi K2
- DeepSeek V3 0324 / R1 0528
- GPT-5 Mini
Thanks for any feedback! | | |
| ▲ | petesergeant 3 days ago | parent [-] | | I’m afraid I don’t use any of those for coding | | |
| ▲ | bigyabai 3 days ago | parent [-] | | You're missing out. GLM 4.5 Air and Qwen3 A3B both blow OSS 120B out of the water in my experience. | | |
| ▲ | indigodaddy 3 days ago | parent [-] | | Ah, good to hear! How about Qwen3-Coder-480B-A35B-Instruct? I believe that is the free Qwen3-coder model on OpenRouter. |
|
| ▲ | okasaki 3 days ago | parent | prev | next [-] |
My experience is that gpt-oss doesn't know much about obscure topics, so if you're using it for anything except puzzles or coding in popular languages, it won't do as well as the bigger models. Its knowledge seems to be lacking even compared to GPT-3. No idea how you'd benchmark this, though.
| |
| ▲ | xadhominemx 3 days ago | parent | next [-] | | > My experience is that gpt-oss doesn't know much about obscure topics
That is the point of these small models: remove the bloat of obscure information (address that with RAG), leaving behind a core “reasoning” skeleton. | | |
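For illustration, a minimal RAG sketch of that idea; the facts, the retriever, and the model slug are all toy assumptions:

    # Toy RAG sketch: obscure facts live in an external store; only the
    # retrieved snippets are injected into the prompt, so the model needs
    # general reasoning rather than memorized trivia.
    from openai import OpenAI

    client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="YOUR_KEY")

    FACTS = {
        "gruyeres": "Gruyères, Switzerland: hilltop town known for its castle and cheese.",
        "sintra": "Sintra, Portugal: hillside palaces, including the Pena Palace.",
    }

    def retrieve(query: str, k: int = 1) -> list[str]:
        # Rank snippets by crude word overlap with the query.
        words = set(query.lower().split())
        return sorted(
            FACTS.values(),
            key=lambda s: len(words & set(s.lower().split())),
            reverse=True,
        )[:k]

    def answer(query: str) -> str:
        context = "\n".join(retrieve(query))
        resp = client.chat.completions.create(
            model="openai/gpt-oss-120b",
            messages=[
                {"role": "system", "content": f"Answer using this context:\n{context}"},
                {"role": "user", "content": query},
            ],
        )
        return resp.choices[0].message.content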
| ▲ | okasaki 3 days ago | parent [-] | | Yeah I guess. Just wanted to say the size difference might be accounted for by the model knowing more. Seems more user-friendly to bake it in. |
| |
| ▲ | easygenes 3 days ago | parent | prev [-] | | Something I've been doing informally that seems very effective is asking for details about smaller cities and towns, and lesser-known points of interest, around the world. Bigger models tend to have a much better understanding and knowledge base for the more obscure places. | | |
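A rough sketch of that kind of informal probe, assuming OpenRouter-style access; the slugs match the model pages linked elsewhere in the thread, and the probe questions are arbitrary examples:

    # Informal knowledge probe: ask each model about lesser-known places
    # and compare the answers by eye.
    from openai import OpenAI

    client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="YOUR_KEY")

    PROBES = [
        "What is the town of Hallstatt, Austria known for?",
        "Name two points of interest in Matera, Italy.",
    ]

    for model in ("openai/gpt-oss-120b", "deepseek/deepseek-chat-v3.1"):
        for q in PROBES:
            r = client.chat.completions.create(
                model=model, messages=[{"role": "user", "content": q}]
            )
            print(f"[{model}] {q}\n{r.choices[0].message.content}\n")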
| ▲ | scotty79 3 days ago | parent [-] | | I would really love it if they figured out how to train a model that doesn't have any such knowledge baked in, but knows where to look for it. Maybe it could even have a clever database for that. Knowing trivia like this consistently off the top of your head is a sign of a deranged mind, artificial or not. | | |
| ▲ | bigmadshoe 3 days ago | parent | next [-] | | The problem is that these models can't reason about what they do and do not know, so right now you basically need to tune them to one of two policies (sketched after the list):
1) always look up all trivia, or
2) occasionally look up trivia when it "seems complex" enough. | |
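A toy sketch of those two policies, with all names and the obscurity heuristic invented for illustration:

    # Policy 1: route every query through a lookup tool.
    # Policy 2: look up only when a cheap heuristic flags the query as obscure.
    def lookup(query: str) -> str:
        # Stand-in for a real search API or trivia database.
        return f"(stub lookup result for: {query})"

    COMMON = {"paris", "shakespeare", "python"}  # toy list of well-known entities

    def seems_obscure(query: str) -> bool:
        # Heuristic for policy 2: no word of the query is a well-known entity.
        return not (set(query.lower().split()) & COMMON)

    def answer(query: str, model, always_lookup: bool = True) -> str:
        if always_lookup or seems_obscure(query):
            return model(f"Context: {lookup(query)}\n\nQuestion: {query}")
        return model(query)  # trust the model's baked-in knowledge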
| ▲ | okasaki 3 days ago | parent | prev [-] | | Would that work as well? If I ask a big model to write like Shakespeare it just knows intuitively how to do that. If it didn't and had to look up how to do that, I'm not sure it would do a good job. |
|
| ▲ | petesergeant 3 days ago | parent | prev | next [-] |
I don't think you're necessarily wrong, but your source is currently only showing a single provider. Comparing https://openrouter.ai/openai/gpt-oss-120b and https://openrouter.ai/deepseek/deepseek-chat-v3.1 across the same providers is probably better, although gpt-oss-120b has been around long enough to attract more providers, and presumably long enough for hosts to get comfortable with it and optimize serving it.
|
| ▲ | mdp2021 3 days ago | parent | prev | next [-] |
> same intelligence as gpt-oss-120B
Let's hope not, because gpt-oss-120B can be dramatically moronic. I am guessing the MoE contains some very dumb subnets. Benchmarks can be a starting point, but you really have to see how the results work for you.
|
| ▲ | lenerdenator 3 days ago | parent | prev [-] |
| Clearly, this is a dark harbinger for Chinese AI supremacy /s |