nabakin 2 days ago

> Also, note that there's zero CUDA dependency. It runs entirely on Huawei chips.

That is a huge claim to make with no evidence.

I researched what you said and found no statement to that effect in their paper[0], on Hugging Face[1], Twitter[2], WeChat[3], or in their news release[4].

Only the Chinese version of their news release mentions, in a footnote, that they plan to reduce inference costs with the Ascend 950 supernode when it releases[5]. The only mention of Huawei in their paper is that they validated a technique to lower interconnect bandwidth on Ascend NPUs and Nvidia GPUs[6].

[0] https://huggingface.co/deepseek-ai/DeepSeek-V4-Pro/blob/main...

[1] https://huggingface.co/deepseek-ai/DeepSeek-V4-Pro

[2] https://xcancel.com/deepseek_ai/status/2047516922263285776

[3] https://mp.weixin.qq.com/s/8bxXqS2R8Fx5-1TLDBiEDg

[4] https://api-docs.deepseek.com/news/news260424

[5] https://api-docs.deepseek.com/zh-cn/img/v4-price.png

[6] Page 16

glenstein 2 days ago | parent | next [-]

Comments like this are why I go to the comments! I never would have thought to check.

And while I'm here I want to note that I feel there's a big misunderstanding of what is and isn't demonstrated by DeepSeek. So far as I can tell, the major (and important!) innovation is reproducing near-frontier capabilities at a fraction of the cost. But it may be that iterating forward at the frontier is the expensive part, a cost borne by Western companies, and that nuance seems to get lost in discussions of DeepSeek. Which is not to say that non-Western companies aren't sometimes capable of jumping into the lead (Kimi has been super impressive), but if GPT/Claude/etc. "only" lead at the frontier with more expensive models, that's still a moat.

kybernetikos a day ago | parent [-]

If you can get something almost as capable for a fiftieth of the price, in most cases you'll do that. You might still send a few tokens to the more expensive option for the exceptional, difficult cases, but that's maybe 10% of the tokens at most. I don't see how it'll be possible to keep spending what Anthropic, OpenAI, Google, etc. are spending if they're only going to see the trickiest 10% of tokens.
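To make the arithmetic concrete (illustrative numbers only, not actual pricing or market share):

```python
# Back-of-envelope sketch: if an incumbent keeps only the hardest 10% of
# tokens at its existing price, its token revenue falls 90%, even before
# any price pressure. All numbers below are made up for illustration.
total_tokens = 1_000_000
frontier_price = 50.0   # arbitrary unit per token (a 50x premium)
cheap_price = 1.0       # the "fiftieth of the price" open model

# Before: the frontier model serves all the traffic.
revenue_before = total_tokens * frontier_price

# After: the frontier model keeps only the trickiest 10% of tokens.
revenue_after = 0.10 * total_tokens * frontier_price

print(revenue_after / revenue_before)  # 0.1, i.e. a 90% revenue drop
```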

MiiMe19 a day ago | parent [-]

Missed the point award

kybernetikos a day ago | parent [-]

Maybe I need to spell out the connecting step: how will those companies afford to keep "iterating forward at the frontier" when their income is likely to crash from competition with good-enough open models at a fiftieth of the price?

Iterating forward at the frontier doesn't seem like a sustainable approach if everyone else can catch up with you in 6 months.

eckr 4 hours ago | parent | prev | next [-]

I don't think this is private knowledge, guessing from when and how I was told, so I feel comfortable sharing it. When I talked to some Huawei representatives, I was told DeepSeek V4 was trained entirely on Huawei chips. It's up to you whether you believe it, and while I see the incentives in faking this news, the blowback if it were untrue would be so massive that I don't think their representatives at large venues would make these claims without believing they're correct.

Scipio_Afri 2 days ago | parent | prev | next [-]

Thank you for this due diligence. I was just reading through the technical report, couldn't find any references to the software stack or hardware mentioning Huawei either, and came back here wondering about this comment I had read earlier.

jari_mustonen 2 days ago | parent | prev | next [-]

Here's a note about running entirely on Huawei chips:

https://finance.yahoo.com/sectors/technology/articles/deepse...

tadfisher 2 days ago | parent | next [-]

> DeepSeek indicated that current service capacity for the V4 Pro series is constrained by a computing crunch, though pricing could fall after new clusters powered by Huawei's Ascend 950 chips come online in the second half of the year.

Only mention of Huawei in that article (as of now).

selectodude 2 days ago | parent | prev [-]

Did you read any part of the link you posted? Huawei is mentioned once and not in the context of the model being trained or currently running on Huawei chips.

vedaba 2 days ago | parent [-]

Dammit, you found my technique of “citing” sources for papers in high school...

selectodude 2 days ago | parent [-]

At least when I pulled random citations off Wikipedia I could reasonably trust whoever put it there figured it was tangentially related to what was being cited. I’m not sure I could get away with putting a literal press release that I didn’t read anywhere.

Big L for media literacy there.

chvid 2 days ago | parent | prev | next [-]

Not long ago the story was this:

DeepSeek’s next AI model delayed by attempt to use Chinese chips

https://www.ft.com/content/eb984646-6320-4bfe-a78d-a1da2274b...

czk 2 days ago | parent | prev | next [-]

They mention it uses MXFP4 quant, which is a Blackwell capability, but it looks like this is also supported by the Ascend 950 series, according to marketing material.
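For context, MXFP4 is the OCP Microscaling format: values are grouped in blocks of 32, each element stored as 4-bit FP4 (E2M1) with one shared power-of-two scale per block. A rough sketch of the quantization idea (illustrative only, not DeepSeek's or any vendor's actual kernel):

```python
import math

# Representable magnitudes of the FP4 E2M1 element format.
FP4_MAGNITUDES = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_block(block):
    """Return (shared scale, quantized values) for one block of floats."""
    amax = max(abs(v) for v in block)
    if amax == 0.0:
        return 1.0, [0.0] * len(block)
    # Shared power-of-two scale chosen so amax fits under FP4's max (6.0).
    scale = 2.0 ** math.ceil(math.log2(amax / 6.0))
    q = []
    for v in block:
        # Round each scaled value to the nearest representable FP4 magnitude.
        mag = min(FP4_MAGNITUDES, key=lambda g: abs(abs(v) / scale - g))
        q.append(math.copysign(mag, v))
    return scale, q

def dequantize_block(scale, q):
    return [scale * v for v in q]

scale, q = quantize_block([0.7, -1.9, 3.1, 6.0])
print(scale, q)  # 1.0 [0.5, -2.0, 3.0, 6.0]
```

The point of the shared scale is that each element needs only 4 bits, with the scale amortized over the whole block, which is why hardware support (Blackwell, and apparently Ascend 950) matters for running it efficiently.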

kappi 2 days ago | parent | prev | next [-]

DeepSeek is planning to use Huawei extensively for inference

“Due to constraints in high-end compute capacity, the current service capacity for Pro is very limited. After the 950 supernodes are launched at scale in the second half of this year, the price of Pro is expected to be reduced significantly.”

https://x.com/jukan05/status/2047516566149816627

nabakin 2 days ago | parent [-]

Yes, that's the footnote from citation [5].

nsoonhui a day ago | parent | prev [-]

I said the same thing as you and I got summarily downvoted (https://news.ycombinator.com/item?id=47888227).

That HN is quick to upvote an unsubstantiated comment (the grandparent one, because it aligns with anti-US bias?) and downvote a fact-finding one doesn't bode well for the community as a whole. I have seen enough of how political ideology colors everything in my home country (Malaysia), and the decline of the country is palpable; I don't expect to find such a thing here. We are supposed to be dispassionate and rational, right?

Render to Jesus what's due to him; ditto for Caesar.

nabakin a day ago | parent [-]

Probably because you said you used DeepSeek. People don't want to see AI in the comments and don't trust AI responses.