skhameneh 7 hours ago

It’s hard to overstate how wild this model might be if it performs as claimed. The claim is that it gets close to Sonnet 4.5 on assisted coding (SWE-bench) while using only 3B active parameters, which is obscenely small for that level of performance.

Aurornis 4 hours ago | parent | next [-]

I experimented with the Q2 and Q4 quants. First impression is that it's amazing we can run this locally, but it's definitely not at Sonnet 4.5 level at all.

Even on my usual toy coding problems it got simple things wrong and needed some poking to get there.

A few times it got stuck in thinking loops and I had to cancel prompts.

This was using the recommended settings from the unsloth repository. It's always possible that there are some bugs in early implementations that need to be fixed later, but so far I don't see any reason to believe this is actually a Sonnet 4.5 level model.
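For anyone who wants to reproduce this kind of local test, here's a minimal sketch using llama-cpp-python with a Q4 GGUF. The file name and sampling values are placeholders, not the actual recommended settings from the unsloth repository; substitute whatever the model card specifies.

    # Minimal local-inference sketch with llama-cpp-python (pip install llama-cpp-python).
    # The GGUF path and sampling values below are placeholders; use the settings
    # recommended in the model card / unsloth repo for the quant you downloaded.
    from llama_cpp import Llama

    llm = Llama(
        model_path="model-Q4_K_M.gguf",  # hypothetical Q4 quant file
        n_ctx=8192,                      # context window; raise it if you have the RAM
        n_gpu_layers=-1,                 # offload all layers to the GPU if possible
    )

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Write a function that reverses a linked list."}],
        temperature=0.7,  # placeholder sampling settings
        top_p=0.8,
        max_tokens=1024,
    )
    print(out["choices"][0]["message"]["content"])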

margalabargala 3 hours ago | parent | next [-]

Wonder where it falls on the Sonnet 3.7/4.0/4.5 continuum.

3.7 was not all that great. 4 was decent for specific things, especially self-contained stuff like tests, but couldn't do a good job with more complex work. 4.5 is now excellent at many things.

If it's around the perf of 3.7, that's interesting but not amazing. If it's around 4, that's useful.

Kostic 3 hours ago | parent | prev | next [-]

I would not go below Q8 if comparing to Sonnet.

cubefox 3 hours ago | parent | prev [-]

> I experimented with the Q2 and Q4 quants.

Of course you get degraded performance with this.

Aurornis an hour ago | parent [-]

Obviously. That's why I led with that statement.

Those are the quant thresholds where people with mid- to high-end hardware can run this locally at reasonable speed, though.

In my experience Q2 is flaky, but Q4 isn't dramatically worse.
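For a rough sense of why Q2/Q4 are the practical cutoffs, here's a back-of-envelope calculation. The total parameter count is a made-up example (the thread only mentions 3B active parameters), and the bits-per-weight figures are approximate values for common llama.cpp quant formats, ignoring KV cache and runtime overhead.

    # Back-of-envelope GGUF weight size: params * bits_per_weight / 8.
    # total_params is a hypothetical example; real MoE totals vary widely.
    total_params = 30e9  # assumed total parameters (only ~3B active per token)

    # Approximate effective bits per weight for common llama.cpp quants.
    quants = {"Q2_K": 2.6, "Q4_K_M": 4.8, "Q8_0": 8.5, "F16": 16.0}

    for name, bpw in quants.items():
        gb = total_params * bpw / 8 / 1e9
        print(f"{name:7s} ~{gb:5.1f} GB of weights (plus KV cache and overhead)")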

cmrdporcupine 2 hours ago | parent | prev | next [-]

It feels more like Haiku level than Sonnet 4.5 from my playing with it.

cirrusfan 7 hours ago | parent | prev [-]

If it sounds too good to be true…

FuckButtons 4 hours ago | parent | next [-]

There have been significant advances recently (in the last year) in scaling deep RL, and their announcement is in line with a timeline of running enough experiments to figure out how to leverage that in post-training.

Importantly, this isn’t just throwing more data at the problem in an unstructured way. AFAIK companies are collecting as many git histories as they can and doing something along the lines of: have an LLM checkpoint pull requests, features, etc. and convert those into plausible input prompts, then run deep RL with passing the acceptance criteria / tests as the reward signal.
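A very rough sketch of that loop, with every name a hypothetical stand-in (no lab has published its exact recipe): turn a merged PR into a prompt, let the model attempt it, and use the project's own tests as a binary reward.

    # Sketch of RL post-training on mined git history with tests as the reward.
    # All names (MinedPR, pr_to_prompt, the policy object, apply_patch) are
    # hypothetical stand-ins, not any lab's real pipeline.
    import subprocess
    from dataclasses import dataclass

    @dataclass
    class MinedPR:
        repo_dir: str      # checkout of the repo at the parent commit
        description: str   # PR title/description mined from git history

    def pr_to_prompt(pr: MinedPR) -> str:
        # In the real pipeline an LLM would rewrite the PR into a plausible task prompt.
        return f"Implement the following change in {pr.repo_dir}: {pr.description}"

    def run_tests(repo_dir: str) -> float:
        # Binary reward: 1.0 if the project's test suite passes, else 0.0.
        result = subprocess.run(["pytest", "-q"], cwd=repo_dir)
        return 1.0 if result.returncode == 0 else 0.0

    def collect_episodes(policy, mined_prs, apply_patch):
        episodes = []
        for pr in mined_prs:
            prompt = pr_to_prompt(pr)
            completion = policy.generate(prompt)   # hypothetical policy API
            apply_patch(pr.repo_dir, completion)   # apply the model's proposed diff
            episodes.append((prompt, completion, run_tests(pr.repo_dir)))
        return episodes   # fed to a PPO/GRPO-style update as (prompt, action, reward)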

theshrike79 7 hours ago | parent | prev | next [-]

Should be possible with optimised models: just drop all the "generic" stuff and focus on coding performance.

There's no reason for a coding model to contain all of AO3 and Wikipedia =)

jstummbillig 5 hours ago | parent | next [-]

There is: It works (even if we can't explain why right now).

If we knew how to create a SOTA coding model by just putting coding stuff in there, that is how we would build SOTA coding models.

noveltyaccount 6 hours ago | parent | prev | next [-]

I think I like coding models that know a lot about the world. They can disambiguate my requirements and build better products.

regularfry 6 hours ago | parent [-]

I generally prefer a coding model that can google for the docs, but separate models for /plan and /build are also a thing.

noveltyaccount 5 hours ago | parent [-]

> separate models for /plan and /build

I had not considered that; it seems like a great solution for local models that may be more resource-constrained.

regularfry 5 hours ago | parent [-]

You can configure aider that way. You get three, in fact: an architect model, a code editor model, and a quick model for things like commit messages. Although I'm not sure if it's got doc searching capabilities.
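For reference, a minimal sketch of launching aider with that split, using its --architect / --editor-model / --weak-model options; the model names here are placeholders for whatever you actually run.

    # Launch aider in architect mode with separate architect, editor and weak models.
    # Model names are placeholders; pick whatever you run locally or remotely.
    import subprocess

    subprocess.run([
        "aider",
        "--architect",                       # enable the architect/editor split
        "--model", "big-planner-model",      # architect: plans the change
        "--editor-model", "local-coder",     # editor: writes the actual edits
        "--weak-model", "small-fast-model",  # cheap model for commit messages etc.
    ])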

moffkalast 5 hours ago | parent | prev | next [-]

That's what Meta thought initially too, training Code Llama and chat Llama separately, and then they realized they'd been idiots and that adding the other half of the data vastly improves both models. As long as it's quality data, more of it doesn't do harm.

Besides, programming is far from just knowing how to autocomplete syntax; you need a model that's proficient in the fields the automation is placed in, otherwise it'll be no help in actually automating anything.

theshrike79 3 hours ago | parent [-]

But as far as I know, that was way before tool calling was a thing.

I'm more bullish about small and medium-sized models + efficient tool calling than I am about LLMs too large to be run at home without $20k of hardware.

The model doesn't need to have the full knowledge of everything built into it when it has the toolset to fetch, cache and read any information available.
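A minimal sketch of what that looks like in practice: a small model behind any OpenAI-compatible local server answering from fetched docs instead of baked-in knowledge. The endpoint, model name and fetch_docs tool are assumptions for illustration, not part of any particular release.

    # Sketch: let a small local model fetch documentation via tool calling instead of
    # relying on memorised knowledge. The endpoint, model name and fetch_docs tool
    # are placeholders for whatever local server (llama.cpp, vLLM, ...) you run.
    import json
    import urllib.request
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")

    def fetch_docs(url: str) -> str:
        with urllib.request.urlopen(url) as resp:      # naive fetch; cache in real use
            return resp.read().decode("utf-8")[:8000]  # truncate to keep context small

    tools = [{
        "type": "function",
        "function": {
            "name": "fetch_docs",
            "description": "Fetch a documentation page as text",
            "parameters": {
                "type": "object",
                "properties": {"url": {"type": "string"}},
                "required": ["url"],
            },
        },
    }]

    messages = [{"role": "user", "content": "How do I stream responses with this library?"}]
    while True:
        reply = client.chat.completions.create(model="local-coder", messages=messages, tools=tools)
        msg = reply.choices[0].message
        if not msg.tool_calls:
            print(msg.content)
            break
        messages.append(msg)
        for call in msg.tool_calls:
            args = json.loads(call.function.arguments)
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": fetch_docs(**args),
            })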

MarsIronPI 6 hours ago | parent | prev | next [-]

But... but... I need my coding model to be able to write fanfiction in the comments...

wongarsu an hour ago | parent | prev [-]

Now I wonder how strong the correlation between coding performance and ao3 knowledge is in human programmers. Maybe we are on to something here /s

Der_Einzige 4 hours ago | parent | prev [-]

It literally always is. HN thought DeepSeek and every version of Kimi would finally dethrone the bigger models from Anthropic, OpenAI, and Google. They're literally always wrong, and the average knowledge of LLMs here is shockingly low.

cmrdporcupine 2 hours ago | parent [-]

Nobody has been saying they'd be dethroned. We're saying they're often "good enough" for many use cases, and that they're doing a good job of stopping the Big Guys from creating a giant expensive moat around their businesses.

Chinese labs are acting as a disruption against Altman et al.'s attempts to create big-tech monopolies, and that's why some of us cheer for them.