jillesvangurp 3 days ago

I actually think that having more cloud providers might deflate a lot of the pricing. If you think about it, companies like Amazon buy server hardware and then rent it out by the vCPU (with throttling if they can get away with it) per month. Add memory and IO and you are looking at bills that pay for the server in mere weeks or months, with several tenants carving up the same hardware and each paying tens or hundreds per month.

There are of course benefits to using cloud-based VMs, and I use them as well. But you are paying a very steep premium for what is a pitiful amount of compute and memory. There's a lot of wiggle room for price decreases, and the only thing preventing that is a lack of competition. There's a reason Amazon is so rich: nobody seems to challenge them on AWS pricing. There's value in having them do all the faffing about with hardware, of course. That's why companies use them. I'm on GCP, but the same principle applies. I don't want to have to worry about replacing hard disks in the middle of the night, dealing with misbehaving network routers, cooling issues, etc. That's why I pay them the big bucks. But I'm well aware that it's not that great of a deal.

I used Hetzner a decade ago and paid something like 50 euros per month for a quad-core Xeon with RAID 1 disks, 32 GB of RAM, etc. Bare metal, of course. But also: 50 euros. We had five of those. Forget about getting anything close to that from modern cloud providers for anything resembling a reasonable price. Your first monthly bill might actually add up to enough to buy your own hardware. Very tempting. They have beefed up their specs since then; you now get more for less. And they also do VMs now.
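
To make that comparison concrete, here is a rough sketch. The 50 euro/month figure is the Hetzner box above; the cloud VM price is an assumed round number for a similarly sized (roughly 4 vCPU / 32 GB) on-demand instance, not a quoted rate, so treat the output as illustrative only.

    # Rough sketch: dedicated server vs. cloud VM for a similar spec.
    # DEDICATED_PER_MONTH is the Hetzner figure quoted above;
    # CLOUD_VM_PER_MONTH is an assumed round number, not a real quote.
    DEDICATED_PER_MONTH = 50    # EUR, quad-core Xeon, 32 GB, RAID 1
    CLOUD_VM_PER_MONTH = 200    # EUR, assumed ~4 vCPU / 32 GB on-demand VM
    SERVERS = 5                 # the fleet size mentioned above

    monthly_gap = (CLOUD_VM_PER_MONTH - DEDICATED_PER_MONTH) * SERVERS
    print(f"Extra cloud spend per month: {monthly_gap} EUR")
    print(f"Extra cloud spend per year:  {monthly_gap * 12} EUR")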

throwup238 3 days ago | parent | next [-]

I knew it was bad but I didn’t realize just how bad the pricing spread can be until I started dealing with the GPU instances (8x A100 or H100 pods). Last I checked the on-demand pricing was $40/hr and the 1-year reserved instances were $25/hr. That’s over $200k/yr for the reserved instances so within two years I’d spend enough to buy my own 8x H100 pod (based on LambdaLabs pricing) plus enough to pay an engineer to babysit five pods at a time. It’s insane.

With on-demand pricing the pod would pay for itself (and the cost to manage it) within a year.
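
Working through those numbers as a sanity check: at the quoted $40/hr on-demand and $25/hr reserved rates, the yearly spend and break-even point come out as below. The pod purchase price is an assumed figure (the comment points at LambdaLabs pricing but does not give a number), so the break-even years are only rough.

    # Break-even sketch for an 8x H100 pod using the hourly rates quoted above.
    # POD_PURCHASE_PRICE is an assumed figure, not an actual quote.
    HOURS_PER_YEAR = 24 * 365          # 8,760
    ON_DEMAND_PER_HOUR = 40.0          # USD, from the comment above
    RESERVED_PER_HOUR = 25.0           # USD, 1-year reservation, from the comment above
    POD_PURCHASE_PRICE = 300_000.0     # USD, assumed price for an 8x H100 pod

    for label, rate in [("on-demand", ON_DEMAND_PER_HOUR), ("reserved", RESERVED_PER_HOUR)]:
        yearly = rate * HOURS_PER_YEAR
        print(f"{label:>9}: ${yearly:,.0f}/yr, pays for the pod in "
              f"{POD_PURCHASE_PRICE / yearly:.1f} years")

With those assumptions the reserved rate works out to about $219k/yr and the on-demand rate to about $350k/yr, which lines up with the "within two years" and "within a year" claims above.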

dilyevsky 4 hours ago | parent | next [-]

It's actually not that bad for GPUs, considering their useful life is much shorter than regular compute. DC-grade CPU servers cost the equivalent of 12-24 months of typical public cloud prices, but you can run them for 6-8 years.
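
A rough way to see why: divide the payback period by the useful life to get the fraction of the equivalent cloud bill you pay by owning, ignoring power, colo, and ops. The CPU figures are the ranges stated above; the ~3-year GPU useful life is an assumption on my part, not something from the thread.

    # Amortization sketch: fraction of the cloud bill you pay by owning,
    # ignoring power, colo space, and ops overhead.
    def owned_cost_fraction(payback_months: float, useful_life_years: float) -> float:
        return payback_months / (useful_life_years * 12)

    # CPU server: costs ~12-24 months of cloud rent, runs for 6-8 years.
    print(f"CPU server, best case:  {owned_cost_fraction(12, 8):.0%} of cloud price")
    print(f"CPU server, worst case: {owned_cost_fraction(24, 6):.0%} of cloud price")

    # GPU pod: pays for itself in ~12 months of on-demand rent (see above),
    # but assume only ~3 useful years before it is obsolete.
    print(f"GPU pod, assumed 3y life: {owned_cost_fraction(12, 3):.0%} of cloud price")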

wordpad 3 days ago | parent | prev [-]

That's just hardware. If you need to build and maintain your own devops tooling, it can balloon in complexity and cost real quick.

It would still likely be much cheaper to do everything in house, but you would be taking on a lot of risk and losing flexibility by locking yourself in.

There is a reason people go with AWS over many competing cheaper cloud providers.

echelon 3 days ago | parent [-]

> There is a reason people go with AWS over many competing cheaper cloud providers.

Opportunity cost.

The evolutionary fitness landscape hasn't yet provided an escape hatch for paying this premium, but in time it will.

time0ut 3 days ago | parent | prev [-]

In my experience, companies seem to want to pay the cloud provider tax in order to avoid capacity planning. Sometimes it makes sense because it is hard to predict when something is going to take off. I have also worked at companies with very predictable growth paying insane amounts. I didn't understand the logic, but they were still profitable and paid well, so whatever.