A few words on DS4 (antirez.com)
103 points by caust1c 3 hours ago | 33 comments
FuckButtons 9 minutes ago | parent | next [-]

It's shocking how close this feels to Claude. Obviously it's much slower, but I don't know that it's significantly dumber. Interestingly, the imatrix quantization seems to be better than whatever quant the ZDR inference backends on OpenRouter are using. It was self-aware enough yesterday to realize that its own server process was itself, without me telling it, which is not something I've ever observed a local model doing before.

minimaxir an hour ago | parent | prev | next [-]

A relevant recent tweet from antirez: https://x.com/antirez/status/2054854124848415211

> Gentle reminder on how, in the recent DS4 fiesta, not just me but every other contributor found GPT 5.5 able to help immensely and Opus completely useless.

I've noticed the same for lower-level, squeeze-as-much-performance-as-possible code work.

throwaway041207 23 minutes ago | parent [-]

Assuming we are talking about Code/Codex, are you on API billing or a subscription? I have essentially unlimited API billing at my disposal and I haven't noticed any degradation in quality across Opus versions.

sbinnee an hour ago | parent | prev | next [-]

It is a big deal for sure to have a competitive local agentic model. I've replaced Gemini 3 Flash Preview with DeepSeek V4 Flash for all of my personal use cases: chat, language learning, and even hobby coding. For coding, I couldn't get decent results before, no matter which of the latest SOTA models I used. It's not close to Opus or Codex models; it's a flash model and makes mistakes here and there (I just saw `from opentele while import trace`, new Python syntax!).

But I found its tool calling more reliable than the other OSS models I tried. I assume that's attributable to interleaved thinking. Its reasoning effort is adjusted automatically based on the query. I enjoy reading the reasoning traces from open models, since you can't see them from proprietary models.
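
The traces are easy to pull apart if you look at the raw completion. A minimal sketch, assuming the model emits DeepSeek-style <think>...</think> tags (the actual chat template may differ):

  import re

  def split_reasoning(raw: str) -> tuple[str, str]:
      """Split model output into (reasoning trace, final answer)."""
      m = re.search(r"<think>(.*?)</think>", raw, flags=re.DOTALL)
      if not m:
          return "", raw
      return m.group(1).strip(), raw[m.end():].strip()

  raw = "<think>User wants a greeting; keep it short.</think>Hello!"
  thinking, answer = split_reasoning(raw)  # ("User wants...", "Hello!")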

I would love to try DS4, but I don't have a machine for it, so I'll just stick to OpenRouter. I wish I could run a competitive OSS model on a 32GB machine in 3 years.

kristianp 19 minutes ago | parent | next [-]

> I wish I could run a competitive OSS model on a 32GB machine in 3 years.

It's so hard to predict what size the open-weight models will be, even in 6 months' time. Will a 96GB machine turn out to be a complete waste of money? Who knows.

zozbot234 an hour ago | parent | prev | next [-]

> I wish I could run a competitive OSS model on a 32GB machine in 3 years.

You could try DS4 on that machine anyway and see how gracefully it degrades (assuming that it runs at all and doesn't just OOM immediately). Experimenting with 36GB/48GB/64GB would also be nice; it might be possible to win some compute throughput back by batching multiple sessions together (though obviously at the expense of speed for any single session).
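
To get a feel for that trade-off from the client side, here's a toy sketch (hypothetical local endpoint and model name; it assumes the server actually batches concurrent requests):

  import time
  from concurrent.futures import ThreadPoolExecutor

  import requests

  URL = "http://localhost:8080/v1/completions"  # placeholder endpoint

  def one_session(prompt: str) -> int:
      # Each thread is an independent "session"; a batching server
      # interleaves their decode steps over the same weights.
      r = requests.post(URL, json={"model": "local", "prompt": prompt,
                                   "max_tokens": 256})
      return r.json()["usage"]["completion_tokens"]

  prompts = [f"Write a haiku about topic {i}." for i in range(8)]
  start = time.time()
  with ThreadPoolExecutor(max_workers=8) as pool:
      total = sum(pool.map(one_session, prompts))
  # Aggregate throughput goes up with batching, even though each
  # individual session decodes slower than it would alone.
  print(f"aggregate: {total / (time.time() - start):.1f} tok/s")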

thegeomaster 40 minutes ago | parent | prev [-]

> `from opentele while import trace`

FYI, to me this points to an inference bug, bad sampling, or a non-native quant. OpenRouter is known to route requests to absolutely terrible, borked implementations. A model like DeepSeek V4 Flash shouldn't be making syntax errors like this.
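
If you're stuck on OpenRouter, you can at least pin the routing instead of taking pot luck. A sketch using their provider preferences field (the model slug and provider name are placeholders; check the real ones on openrouter.ai):

  import os

  import requests

  resp = requests.post(
      "https://openrouter.ai/api/v1/chat/completions",
      headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
      json={
          "model": "deepseek/deepseek-v4-flash",  # placeholder slug
          "messages": [{"role": "user", "content": "Hello!"}],
          # Routing preferences: only the listed provider, and no silent
          # fallback to whatever endpoint happens to be cheapest today.
          "provider": {"order": ["SomeTrustedProvider"],
                       "allow_fallbacks": False},
      },
  )
  print(resp.json()["choices"][0]["message"]["content"])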

0xbadcafebee an hour ago | parent | prev | next [-]

I don't see an explanation of why they would build a model-specific inference engine instead of just using llama.cpp. There are already lots of people working on the llama.cpp integration. This is a lot of effort spent on a single model, which is likely to become obsolete when a different model comes out that does better. In some discussions, people are now making PRs against both the llama.cpp branches and ds4, so it's taking a rare commodity (people investing development time in this model) and fragmenting it.

fgfarben 3 minutes ago | parent | next [-]

At a certain point the level of abstraction / genericization necessary for a big flexible project (like llama.cpp or Linux) blows things up into a huge number of files. Something newer and smaller can move faster.

zozbot234 an hour ago | parent | prev | next [-]

The author has mentioned many times that the llama.cpp maintainers don't want code that's predominantly written by AI with no human review. If anyone wants to try to get the support upstreamed into that project, they're quite free to do so: the code is MIT licensed.

kristianp 21 minutes ago | parent [-]

Also, Antirez has been able to use GPT to iterate on the code and performance. He (and the others who contributed to DS4) has a set of reference result files to ensure that correctness is maintained, plus benchmarks to verify performance, and the LLM is able to iterate within that framework. Having a small, focused codebase helps here.
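
I'd guess the loop looks something like this (not the actual ds4 harness; the binary flags and file names below are made up):

  import subprocess
  import sys

  def check(prompt_file: str, reference_file: str) -> bool:
      """Run the engine deterministically and diff against a stored reference."""
      out = subprocess.run(
          ["./ds4", "--seed", "42", "--temp", "0", "--prompt-file", prompt_file],
          capture_output=True, text=True, check=True,
      ).stdout
      return out == open(reference_file).read()

  # Gate every LLM-generated optimization on the regression suite.
  if not check("prompts/basic.txt", "refs/basic.txt"):
      sys.exit("regression: output diverged from reference")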

Antirez explained the dev process when he posted a pure C implementation of the Flux 2 Klein image gen model, at https://news.ycombinator.com/item?id=46670279

flakiness an hour ago | parent | prev [-]

I believe the assumption is: the code is cheap; the collaboration (e.g. upstreaming) is expensive.

Is it true? We'll see, in a few years.

simonw 2 hours ago | parent | prev | next [-]

I got this running on a 128GB M5 the other day. Pretty painless: the model runs in about 80GB of RAM, and it seemed to be very capable at writing code and executing tools.

perfmode 2 hours ago | parent [-]

How’s the token throughput / response time?

simonw 2 hours ago | parent [-]

Healthy!

  prefill: 30.91 t/s, generation: 29.58 t/s
From https://gist.github.com/simonw/31127f9025845c4c9b10c3e0d8612...
embedding-shape an hour ago | parent | next [-]

Comparison with a RTX Pro 6000, with DeepSeek-V4-Flash-IQ2XXS-w2Q2K-AProjQ8-SExpQ8-OutQ8-chat-v2-imatrix.gguf:

  prefill: 121.76 t/s, generation: 47.85 t/s

The main target seems to be Apple's Metal, so that makes sense. Might be fun to see how fast one could make it go, though :) The model seems really good too, even though it's at IQ2.

xienze an hour ago | parent | prev [-]

I don't want to be a jerk, but 31 t/s prefill is basically unusable in an agentic situation. A mere 10k of context and you're sitting there for 5+ minutes before the first token is generated.
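
Back-of-envelope from the numbers upthread:

  10,000 tokens / 30.91 t/s ≈ 323 s ≈ 5.4 minutes to first generated token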

fgfarben a few seconds ago | parent | next [-]

That prefill number isn't right. An M4 Max hits 200-300 t/s: https://github.com/antirez/ds4/blob/main/speed-bench/m4_max_...

aiscoming an hour ago | parent | prev [-]

If it's just the coding agent system prompt and tools, you can cache that.
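
A sketch of what that looks like against a llama.cpp-style server; the `cache_prompt` flag is llama.cpp's, and whether ds4 exposes an equivalent is an open question:

  import requests

  SYSTEM_PROMPT = open("agent_system_prompt.txt").read()  # big, static prefix

  def complete(user_turn: str) -> str:
      resp = requests.post(
          "http://localhost:8080/completion",
          json={
              "prompt": SYSTEM_PROMPT + user_turn,
              "n_predict": 512,
              # Keep the evaluated prefix in server memory across requests,
              # so only the new suffix has to go through prefill.
              "cache_prompt": True,
          },
      )
      return resp.json()["content"]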

xienze an hour ago | parent [-]

Yeah, the problem is that's just the start of the context. There are also all the tool call results, file reads, and so on.

bjconlan 2 hours ago | parent | prev | next [-]

This is great! I feel the same way about the DeepSeek V4 architecture for commodity hardware.

Also have enjoyed playing with https://huggingface.co/HuggingFaceTB/nanowhale-100m-base (but early days for me understanding this space)

kamranjon an hour ago | parent [-]

Very cool! I had no idea that HF was doing this - I really love their small model experiments.

kamranjon 2 hours ago | parent | prev | next [-]

Just want to mention that I've been pulling down and using DwarfStar locally and it's incredible. I have it running on my personal MacBook (M4 Max, 128GB of RAM), where I run the server and share it through Tailscale with my work laptop, which just runs pi.

The long-context reasoning is something I haven't even seen in frontier models: I was running at 124k tokens earlier and it was still just buzzing along with no issues or fatigue.

I am amazed at how well it works. I'm using it right now for some pretty complex frontend work, and it is much, much faster for me than running a dense 27B or 32B model (like Gemma or Qwen), thanks to MoE. But the long-context capabilities are what have absolutely floored me.

Super excited about this project, and I hope Antirez can keep himself from burning out. I've been following the repo pretty closely; there are a ton of PRs flooding in, and it seems like he's had to do a lot of filtering out of slop code.

le-mark an hour ago | parent [-]

Is DS4 Dwarf Star 4 or DeepSeek 4?

kamranjon an hour ago | parent | next [-]

Just updated! Sorry, I meant DwarfStar; it's the only way I've actually managed to run DeepSeek Flash on my local hardware.

wolttam an hour ago | parent | prev [-]

DwarfStar 4 is DeepSeek 4 (check the repo)

codedokode an hour ago | parent | prev | next [-]

I thought DeepSeek was closed-weights and proprietary? I wonder how it compares against Western open-weight models. The Hugging Face page contains a comparison only with proprietary models, for some reason.

itishappy an hour ago | parent | next [-]

DeepSeek has always been open-weight, and the DeepSeek HuggingFace page does not contain any comparisons. Where did you form these opinions?

codedokode an hour ago | parent [-]

It contains comparisons: https://huggingface.co/deepseek-ai/DeepSeek-V4-Flash

zozbot234 an hour ago | parent | prev [-]

Nemotron would be a comparable Western open model AIUI.

brcmthrowaway 36 minutes ago | parent | prev [-]

This guy is falling deep into Yegge-tier psychosis.

linkregister 8 minutes ago | parent [-]

Empirically, DS4 runs the DeepSeek V4 Flash model with good performance on home hardware. I'm curious how you came to this conclusion.

dakolli 4 minutes ago | parent [-]

"Empirically", have you tested this yourself?