throwup238 3 days ago

I knew it was bad, but I didn’t realize just how bad the pricing spread can be until I started dealing with GPU instances (8x A100 or H100 pods). Last I checked, on-demand pricing was $40/hr and 1-year reserved instances were $25/hr. That’s over $200k/yr for a reserved instance, so within two years I’d spend enough to buy my own 8x H100 pod (based on Lambda Labs pricing), plus enough to pay an engineer to babysit five such pods. It’s insane.

With on-demand pricing the pod would pay for itself (and the cost to manage it) within a year.
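A rough back-of-the-envelope sketch of the arithmetic above; the hourly rates come from the comment, while the pod purchase price is a hypothetical placeholder (the comment doesn't give an exact figure):

```python
HOURS_PER_YEAR = 24 * 365  # 8760

on_demand_rate = 40.0   # $/hr for the 8x GPU pod, from the comment
reserved_rate = 25.0    # $/hr, 1-year reserved, from the comment
pod_price = 300_000.0   # assumed purchase price of an 8x H100 pod (hypothetical)

on_demand_yearly = on_demand_rate * HOURS_PER_YEAR
reserved_yearly = reserved_rate * HOURS_PER_YEAR

# Months of on-demand spend it takes to cover the pod's purchase price
breakeven_months = pod_price / (on_demand_yearly / 12)

print(f"On-demand yearly:  ${on_demand_yearly:,.0f}")   # $350,400
print(f"Reserved yearly:   ${reserved_yearly:,.0f}")    # $219,000
print(f"Break-even vs on-demand: {breakeven_months:.1f} months")
```

With those assumptions the on-demand spend exceeds $350k/yr, which is how the pod "pays for itself within a year"; the reserved rate stretches the break-even closer to the two-year mark described above.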

dilyevsky 4 hours ago | parent | next [-]

It's actually not that bad for GPUs, considering their useful life is much shorter than that of regular compute. DC-grade CPU servers cost the equivalent of 12-24 months of typical public cloud prices, but you can run them for 6-8 years.
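The useful-life point can be made concrete with a simple amortization sketch. All the dollar figures below are hypothetical illustrations, not quotes from the thread:

```python
def effective_hourly(purchase_price: float, useful_life_years: float,
                     utilization: float = 1.0) -> float:
    """Amortized $/hr of owned hardware over its useful life."""
    hours = useful_life_years * 24 * 365 * utilization
    return purchase_price / hours

# Hypothetical: a CPU server bought for ~18 months' worth of cloud spend
# can amortize over 6-8 years, while a GPU pod may be obsolete in ~3.
cpu_server = effective_hourly(20_000, useful_life_years=7)
gpu_pod = effective_hourly(300_000, useful_life_years=3)

print(f"CPU server: ${cpu_server:.2f}/hr")
print(f"GPU pod:    ${gpu_pod:.2f}/hr")
```

The shorter the useful life, the less of the cloud premium you recover by owning, which is the parent's point: the own-vs-rent math that clearly favors buying for CPU servers is tighter for GPUs.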

wordpad 3 days ago | parent | prev [-]

That's just hardware. If you need to build and maintain your own devops tooling, it can balloon in complexity and cost very quickly.

It would still likely be much cheaper to do everything in-house, but you would be assuming a lot of risk and locking yourself in, losing flexibility.

There is a reason people go with AWS over many competing cheaper cloud providers.

echelon 3 days ago | parent [-]

> There is a reason people go with AWS over many competing cheaper cloud providers.

Opportunity cost.

The evolutionary fitness landscape hasn't yet provided an escape hatch for paying this premium, but in time it will.