| ▲ | hodgehog11 4 days ago |
| For reference, here is the terminal-bench leaderboard: https://www.tbench.ai/leaderboard Looks like it doesn't get close to GPT-5, Claude 4, or GLM-4.5, but still does reasonably well compared to other open weight models. Benchmarks are rarely the full story though, so time will tell how good it is in practice. |
|
| ▲ | segmondy 4 days ago | parent | next [-] |
| Garbage benchmark: an inconsistent mix of "agent tools" and models. If you wanted to present a meaningful benchmark, the agent tools would stay the same and then we could really compare the models. There are plenty of other benchmarks that disagree with these. With that said, from my experience most of these benchmarks are trash. Use the model yourself, apply your own set of problems, and see how well it fares. |
| |
|
| ▲ | guluarte 4 days ago | parent | prev | next [-] |
| tbh companies like Anthropic and OpenAI create custom agents for specific benchmarks |
| |
| ▲ | bazmattaz 4 days ago | parent | next [-] | | Do you have a source for this? I’m intrigued | | | |
| ▲ | amelius 4 days ago | parent | prev [-] | | Aren't good benchmarks supposed to be secret? | | |
| ▲ | wkat4242 4 days ago | parent | next [-] | | This industry is currently burning billions a month. With that much money around I don't think any secrets can exist. | |
| ▲ | noodletheworld 3 days ago | parent | prev [-] | | How can a benchmark be secret if you post it to an API to test a model on it? "We totally promise that when we run your benchmark against our API we won't take the data from it and use it to be better at your benchmark next time" :P If you want to do it properly you have to avoid any 3rd-party hosted model when you test your benchmark, which means you can't have GPT-5, Claude, etc. on it; and none of the benchmarks want to be 'that guy' who doesn't have all the best models on it. So no. They're not secret. | | |
| ▲ | dmos62 3 days ago | parent [-] | | How do you propose that would work? A pipeline that goes through query-response pairs to deduce response quality and then uses the low-quality responses for further training? Wouldn't you need a model that's already smart enough to tell that previous model's responses weren't smart enough? Sounds like a chicken and egg problem. | | |
| ▲ | irthomasthomas 3 days ago | parent [-] | | It just means that once you send your test questions to a model API, that company now has your test. So 'private' benchmarks take it on faith that the companies won't look at those requests and tune their models or prompts to beat them. | | |
| ▲ | dmos62 3 days ago | parent | next [-] | | Sounds a bit presumptuous to me. Sure, they have your needle, but they also need a cost-efficient way to find it in their haystack. | | |
| ▲ | lucianbr 3 days ago | parent | next [-] | | They have quite large amounts of money. I don't think they need to be very cost-efficient. And they also have very smart people, so likely they can figure out a somewhat cost-efficient way. The stakes are high, for them. | |
| ▲ | noodletheworld 3 days ago | parent | prev [-] | | Security through obscurity is not security. Your API key is linked to your credit card, which is linked to your identity. …but hey, you're right. Let's just trust them not to be cheating. Cool. |
| |
| ▲ | merelysounds 3 days ago | parent | prev [-] | | Would the model owners be able to identify the benchmarking session among many other similar requests? | | |
| ▲ | irthomasthomas 3 days ago | parent [-] | | Depends. Something like arc-agi might be easy as it follows a defined format. I would also guess that the usage pattern for someone running a benchmark will be quite distinct from that of a normal user, unless they take specific measures to try to blend in. |
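To make the contamination worry above concrete: a provider that has seen a private benchmark's questions once could flag later sessions that replay them. This is a minimal illustrative sketch, not any provider's actual system; the question set, traffic, and threshold are all invented.

```python
# Hypothetical sketch: flagging benchmark-like sessions in API logs by
# matching normalized prompts against a previously captured question set.
import hashlib

def fingerprint(prompt: str) -> str:
    """Hash a whitespace/case-normalized prompt for exact-match lookup."""
    normalized = " ".join(prompt.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()

# Fingerprints of benchmark items the provider has already seen (invented).
known_benchmark = {fingerprint(q) for q in [
    "Solve the grid transformation puzzle below.",
    "Write a function that reverses a linked list.",
]}

def looks_like_benchmark(requests: list[str], threshold: float = 0.5) -> bool:
    """Flag a session where most prompts match known benchmark items."""
    hits = sum(fingerprint(r) in known_benchmark for r in requests)
    return hits / len(requests) >= threshold

session = [
    "Solve the grid transformation puzzle below.",
    "Write a function that reverses a linked list.",
    "What's the weather like?",
]
print(looks_like_benchmark(session))  # 2 of 3 prompts match -> True
```

Exact matching is the cheapest version; paraphrased or templated benchmarks would need embedding similarity instead, which is where the "cost-efficient needle finding" debate above comes in.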
|
| ▲ | YetAnotherNick 4 days ago | parent | prev | next [-] |
| Depends on the agent. Ranks 5 and 15 are both Claude 4 Sonnet, and this model stands close to 15th. |
|
| ▲ | coliveira 4 days ago | parent | prev | next [-] |
| My personal experience is that it produces high quality results. |
| |
| ▲ | amrrs 4 days ago | parent | next [-] | | Any example or prompt you used to make this statement? | | |
| ▲ | imachine1980_ 4 days ago | parent | next [-] | | I remember asking for quotes about the Spanish conquest of South America because I couldn't remember who said a specific thing. The GPT model started hallucinating quotes on the topic, while DeepSeek responded with something like, "I don't know a quote about that specific topic, but you might mean this other thing," and then cited a real quote on the same topic, after acknowledging that it couldn't find the one I had read in an old book. I don't use it for coding, but for things that are more niche I feel it's more precise. | |
| ▲ | mycall 4 days ago | parent | next [-] | | I wonder if Conway's law is at all responsible for that, given what the similarity is based on: regionally trained data with concept biases that the model sends back in its responses. | |
| ▲ | valtism 4 days ago | parent | prev [-] | | Was that true for GPT-5? They claim it is much better at not hallucinating |
| |
| ▲ | sync 4 days ago | parent | prev [-] | | I'm doing coreference resolution and this model (w/o thinking) performs at the Gemini 2.5-Pro level (w/ thinking_budget set to -1) at a fraction of the cost. | | |
| ▲ | antman 3 days ago | parent | next [-] | | Nice point. How did you test for coreference resolution? Specific prompt or dataset? | |
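For readers unfamiliar with the task being discussed: a coreference-resolution check typically gives the model a sentence with an ambiguous pronoun and compares its answer against a gold label. A minimal sketch of one way such an eval could look; the example sentence, the `call_model` stub, and the prompt format are invented for illustration, not sync's actual setup.

```python
# Hypothetical coreference eval: ask which entity a pronoun refers to
# and score exact matches against gold labels.
def call_model(prompt: str) -> str:
    # Stub standing in for a real LLM API call.
    return "the trophy"

examples = [
    {
        "text": "The trophy didn't fit in the suitcase because it was too big.",
        "question": "What does 'it' refer to?",
        "gold": "the trophy",
    },
]

def score(examples) -> float:
    """Fraction of examples where the model names the correct referent."""
    correct = 0
    for ex in examples:
        prompt = f"{ex['text']}\n{ex['question']} Answer with the noun phrase only."
        answer = call_model(prompt).strip().lower()
        correct += answer == ex["gold"].lower()
    return correct / len(examples)

print(score(examples))  # 1.0 with the stubbed model above
```

Real evals of this kind usually use a labeled dataset (Winograd-style pairs are a common choice) rather than hand-written items, and fuzzier answer matching than string equality.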
| ▲ | dr_dshiv 4 days ago | parent | prev [-] | | Strong claim there! |
|
| |
| ▲ | SV_BubbleTime 4 days ago | parent | prev [-] | | Vine is about the only benchmark I think is real. We made objective systems turn out subjective answers… why the shit would anyone think objective tests would be able to grade them? |
|
|
| ▲ | seunosewa 4 days ago | parent | prev | next [-] |
| The DeepSeek R1 in that list is the old model that's been replaced.
Update: Understood. |
| |
| ▲ | yorwba 4 days ago | parent [-] | | Yes, and 31.3% is given in the announcement as the performance of the new v3.1, which would put it in sixteenth place. | | |
|
|
| ▲ | tonyhart7 4 days ago | parent | prev [-] |
| Yeah but the pricing is insane; I don't care about SOTA if it doesn't break my bank