jml7c5 17 hours ago
No, their calculation is based on a rental price of $2/hour.
yorwba 16 hours ago
Right, but they didn't use rented GPUs, so it's a purely notional figure. It's an appropriate value for comparing individual training runs (e.g. it tells you that turning DeepSeek-V3 into DeepSeek-R1 cost much less than training DeepSeek-V3 from scratch), but not for the entire budget of a company training LLMs. DeepSeek spent a large amount upfront to build a cluster that they can run lots of small experiments on over the course of several years. If you only count the successful runs, their costs look much lower than they were end-to-end.
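The distinction can be sketched with a toy calculation. The $2/GPU-hour rental rate is from the thread; every other figure below is a made-up assumption purely for illustration:

```python
# Notional per-run cost (priced at a rental rate) vs. the end-to-end
# cost of owning a cluster. Only the $2/hour rate comes from the
# thread; the GPU-hour and cluster figures are hypothetical.

RENTAL_RATE = 2.0  # $/GPU-hour, as cited in the thread


def notional_run_cost(gpu_hours: float) -> float:
    """What one training run would cost if the GPUs were rented."""
    return gpu_hours * RENTAL_RATE


def end_to_end_cost(cluster_capex: float) -> float:
    """Owning the cluster: the upfront capital cost is sunk whether a
    given experiment succeeds or fails. (Simplification: power and
    staff operating costs are ignored.)"""
    return cluster_capex


# Hypothetical: one headline run of 1M GPU-hours...
run = notional_run_cost(1_000_000)        # $2,000,000 notional
# ...on a cluster that hypothetically cost $500M to build.
total = end_to_end_cost(500_000_000)

print(f"notional single-run cost: ${run:,.0f}")
print(f"end-to-end cluster cost:  ${total:,.0f}")
```

The point of the comparison: the notional figure is fair for ranking runs against each other, but quoting it as "what the company spent" hides the sunk cluster cost shared across all experiments, failed ones included.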