wirybeige 2 hours ago

Pricing for DeepSeek V4 flash is $0.14 in/$0.28 out across basically every provider, or close to it. Most providers seem to just follow the model creator and set their prices to match. V4 pro was set at $1.74 in/$3.48 out when DeepSeek first announced it, and all providers set their prices at about that level; now DeepSeek has cut their pricing to $0.435 in/$0.87 out. I don't know if this is special pricing, or the promised price drop for when they get more Huawei cards online. It seems that providers like Parasail, Together, and Novita just set the price when the model comes out and don't compete.
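For what it's worth, the cut works out to exactly a quarter of the announced rate on both directions, which you can check with quick arithmetic:

```python
# Check the DeepSeek V4 pro price cut described above ($/M tokens).
announced = {"in": 1.74, "out": 3.48}   # price at announcement
current = {"in": 0.435, "out": 0.87}    # price now

for direction in ("in", "out"):
    ratio = current[direction] / announced[direction]
    print(f"{direction}: {ratio:.0%} of announced price")
# Both come out to 25%, i.e. a 75% cut.
```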

philistine 2 hours ago | parent [-]

No one has yet turned a profit from LLMs. I don't understand why we need to look so intently at everybody's pricing, when the most important number is instead their losses. That is the number that tells us what they're really doing.

wirybeige an hour ago | parent | next [-]

Why would these 3rd-party providers be taking losses? Together, Novita, etc. are not losing money on inference services; they are profiting. You can easily do napkin math with current- and last-gen Nvidia cards to estimate the cost to host and serve these models. I would also doubt that 1st-party providers like OpenAI and Anthropic lose money on per-token billing. There is almost certainly a healthy margin being made on that.
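The napkin math looks something like this. Every figure here is an illustrative assumption, not a measured number for any provider; real throughput depends heavily on model size, batching, and hardware.

```python
# Napkin math for per-token inference margin.
# ALL inputs are assumed/illustrative values, not provider data.
gpu_hour_cost = 2.00        # assumed $/hour to rent one GPU
tokens_per_second = 2500    # assumed aggregate throughput per GPU (batched)
price_per_m_tokens = 0.87   # $/M output tokens (the V4 pro example above)

tokens_per_hour = tokens_per_second * 3600
revenue_per_hour = tokens_per_hour / 1e6 * price_per_m_tokens
margin_per_hour = revenue_per_hour - gpu_hour_cost

print(f"revenue/hr: ${revenue_per_hour:.2f}")
print(f"cost/hr:    ${gpu_hour_cost:.2f}")
print(f"margin/hr:  ${margin_per_hour:.2f}")
```

Under these assumed numbers a single GPU serving 9M tokens/hour brings in $7.83 against $2.00 of hardware cost; the point is only that plausible throughput figures leave room for margin at current per-token prices.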

nickthegreek an hour ago | parent | prev [-]

OpenRouter isn't turning a profit?