layer8 4 hours ago

Sure, but we could have Hetzners and OVHs who just provide the compute for whatever model we want to run.

medi8r 3 hours ago | parent

Checked the DDR5 price lately?

layer8 3 hours ago | parent

I didn’t claim it would be cheap. But I’d rather see the real cost of SOTA LLM use exposed. Then again, SOTA LLM inference is reportedly profitable these days, so it can’t be that expensive.