throwaw12 2 hours ago

> The math and coding part is impressive but the agentic one is not.

I think this is very important if local models are eventually to become a viable replacement for coding models, because most of the time coding harnesses leverage tool calls to gather context and then write a solution.
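The gather-then-write loop described above can be sketched in a few lines. Everything here is a hypothetical stand-in (the stub "model", the `read_file` tool, the dict-based action format), not any real harness's API:

```python
# Minimal sketch of the gather-context-then-write loop a coding harness runs.
# The "model" here is a stub; a real harness would call an LLM each turn.
def run_harness(model, tools, task, max_steps=10):
    context = [task]
    for _ in range(max_steps):
        action = model(context)                 # model decides: call a tool or answer
        if action["type"] == "tool_call":
            result = tools[action["name"]](**action["args"])
            context.append(result)              # gathered context feeds the next turn
        else:
            return action["text"]               # final written solution
    return None

# Stub model: read the file once, then answer using what it saw.
def stub_model(context):
    if len(context) == 1:
        return {"type": "tool_call", "name": "read_file", "args": {"path": "util.py"}}
    return {"type": "answer", "text": f"patch based on: {context[-1]}"}

tools = {"read_file": lambda path: f"<contents of {path}>"}
print(run_harness(stub_model, tools, "fix the bug in util.py"))
```

The point of the sketch is that each tool result is appended to the context, which is why agentic quality depends so heavily on how much context the model can actually hold.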

I am hopeful that one day we can replace Claude and OpenAI models with local SOTA LLMs.

2ndorderthought 2 hours ago | parent | next [-]

It's pretty close already. Check out qwen3.6 27b if you haven't already. People are vibe and agentic coding with it on a single GPU.

It is more finicky than Claude, but if you hand-hold it a bit it's crazy.

iugtmkbdfil834 a minute ago | parent | next [-]

Eh. It is good in terms of results (accuracy, good recommendations and so on), but slow when it comes to actual inference. On a local 128 GB machine, it took over 5 minutes to brainstorm a garage door opening mechanism with some additional restrictions for spice.

gchamonlive an hour ago | parent | prev [-]

I see that going around, and either the test cases are too simplistic or I'm doing something wrong. I have a server with a 3090 in it, enough to run qwen3.6, but I haven't had much luck using it with either codex or oh-my-pi. They work, but the model gets really slow at ~64k context and the attention degrades quickly. Sometimes you'll execute a prompt, the model will load a test file, and then it says something like "I was presented with a test file but no command. What should I do with it?"

So yeah, while it's true that qwen3.6 is good at agentic coding, it's not very good at exploring the codebase and coming up with plans. Today you need to pair it with a model capable of ingesting the whole context and providing a detailed plan, and even then the implementation might take 10x the time it'd take sonnet or Gemini 3 to crunch through the plan.

nixon_why69 6 minutes ago | parent | next [-]

Qwen3.6 supports 266k context out of the box. Try a q8 KV cache to enable more of it.
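The arithmetic behind that suggestion is easy to sketch: KV cache memory scales linearly with bytes per element, so halving the element size roughly doubles the context that fits in fixed VRAM. The model dimensions below are illustrative assumptions, not the real qwen3.6 config:

```python
# Rough KV-cache sizing: 2 (K and V) * layers * kv_heads * head_dim * ctx * bytes/elem.
# Layer/head/dim numbers below are illustrative assumptions, not qwen3.6's real config.
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, ctx_len, bytes_per_elem):
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem

layers, kv_heads, head_dim = 48, 8, 128
ctx = 64_000

f16 = kv_cache_bytes(layers, kv_heads, head_dim, ctx, 2)  # fp16: 2 bytes/elem
q8 = kv_cache_bytes(layers, kv_heads, head_dim, ctx, 1)   # q8: ~1 byte/elem

print(f"fp16 KV cache at 64k ctx: {f16 / 2**30:.1f} GiB")
print(f"q8   KV cache at 64k ctx: {q8 / 2**30:.1f} GiB")
```

Under these assumed dimensions the q8 cache needs half the VRAM of fp16 at the same context length, which is why it frees up room for longer contexts.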

dminik 22 minutes ago | parent | prev | next [-]

Yeah. Context size matters a lot. With OpenCode dumping something like 10k tokens into the system prompt, it takes like 4 rounds before it has to compact at, say, 64k. It's not really worth running at anything below 100k, and even then the models aren't all that useful.

They're also pretty terrible at summarization. Almost always, some file read or write in the middle of the task would cross the context margin, and the summary would mark the task as completed anyway. I think keeping the first prompt as well as the last few turns intact would improve this issue quite a lot, but at low context sizes that's pretty much the whole context ...
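That keep-first-and-last idea can be sketched as a simple compaction policy. The `summarize` callback here is a hypothetical stand-in for an LLM summarization call; none of this is a real harness's API:

```python
# Sketch of a compaction policy: keep the first (task) prompt and the last
# few turns verbatim, and summarize only the middle. Names are hypothetical.
def compact(turns, keep_tail=3,
            summarize=lambda ts: [f"[summary of {len(ts)} turns]"]):
    if len(turns) <= 1 + keep_tail:
        return turns  # too short, nothing worth compacting
    head, middle, tail = turns[:1], turns[1:-keep_tail], turns[-keep_tail:]
    return head + summarize(middle) + tail

history = ["task prompt"] + [f"turn {i}" for i in range(1, 10)]
print(compact(history))
```

The original task and the most recent turns survive verbatim, so the model can't "lose" the task statement or the step it was in the middle of; but, as the comment above notes, at small context sizes head plus tail is already most of the window.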

pferdone 32 minutes ago | parent | prev | next [-]

I can see that, and I don't know your setup, but there are people pushing >70 t/s with MTP on a single 3090, and still >50 t/s with big contexts. 64k is not a lot for agentic coding, and IIRC 128k with turboquant and the like should be possible for you. r/LocalLLM/ and r/LocalLLaMA/ are worth a visit IMO.

EDIT: just found this recipe repo, may wanna give it a go: https://github.com/noonghunna/club-3090

EDIT-2: this can also shave off a lot of context need for tool calling -> https://github.com/rtk-ai/rtk

embedding-shape 40 minutes ago | parent | prev | next [-]

You're not sharing what quantization you're using. In my experience, anything below Q8 on models smaller than ~30B tends to be basically useless locally, at least for what you typically use codex et al. for; I'm sure it works for very simple prompts.

But as soon as you go below Q8, the models get stuck in repeating loops, get the tool-calling syntax wrong, or just start outputting gibberish after a short while.
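A toy illustration of why lower bit widths bite: with symmetric round-to-nearest quantization, the worst-case round-trip error is about half the quantization step, and the step grows fast as bits shrink. This is deliberately simplified arithmetic, not how any real quantizer (group-wise, k-quant, etc.) works in detail:

```python
# Toy symmetric round-to-nearest quantization over a handful of weights.
# Real quantizers are group-wise and cleverer; this just shows the error scaling.
def quantize_dequantize(xs, bits):
    qmax = 2 ** (bits - 1) - 1              # e.g. 127 for 8-bit, 7 for 4-bit
    scale = max(abs(x) for x in xs) / qmax  # one scale for the whole group
    return [round(x / scale) * scale for x in xs]

weights = [0.731, -0.042, 0.255, -0.914, 0.003]
for bits in (8, 4, 3):
    deq = quantize_dequantize(weights, bits)
    err = max(abs(a - b) for a, b in zip(weights, deq))
    print(f"{bits}-bit max round-trip error: {err:.4f}")
```

Each dropped bit roughly doubles the step size, so per-weight noise compounds quickly; syntax-sensitive outputs like tool-call JSON are plausibly among the first things to break.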

2ndorderthought an hour ago | parent | prev | next [-]

I agree for planning it's not there yet. But I wouldn't be surprised if something came out that was in a similar weight class.

regexorcist 44 minutes ago | parent | prev [-]

Try oh-my-openagent plan mode.

steveharing1 2 hours ago | parent | prev [-]

That's absolutely possible. As the field advances, we'll soon see small models smart enough to be judged not by parameter count but by their reasoning and intelligence. Qwen 3.6 27B is one example.

regexorcist an hour ago | parent [-]

Yeah, this is key. A lot of people are still just looking at the number of params and thinking these models are toys. What Qwen 3.6 has shown is that reasoning and tool calling are just as important, if not more so.