ndriscoll 9 hours ago

Why does this always get asserted? It's trivial to do (reserve the cost when you allocate a resource [0]), and it takes two minutes of thinking about the problem to see an answer if you're actually trying to find one instead of hunting for reasons why you can't.
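To make the idea concrete, here's a minimal sketch of the "reserve the cost when you allocate" model. Everything here (`BillingCap`, `try_allocate`, the rates) is illustrative and hypothetical, not any real cloud API:

```python
# Hypothetical sketch: reserve a resource's worst-case cost against a
# billing cap at allocation time, so the cap can never be exceeded.

class BillingCap:
    def __init__(self, cap_dollars: float):
        self.cap = cap_dollars
        self.reserved = 0.0

    def try_allocate(self, hourly_rate: float, hours_remaining_in_cycle: float) -> bool:
        """Reserve the worst-case cost of running this resource until the
        end of the billing cycle. Reject at allocation time instead of
        silently billing past the cap."""
        worst_case = hourly_rate * hours_remaining_in_cycle
        if self.reserved + worst_case > self.cap:
            return False
        self.reserved += worst_case
        return True

    def release(self, hourly_rate: float, hours_unused: float) -> None:
        # Freeing a resource early returns the unused part of the reservation.
        self.reserved -= hourly_rate * hours_unused


cap = BillingCap(cap_dollars=100.0)
# Reserves 0.10 * 720 = $72 of the $100 cap.
assert cap.try_allocate(hourly_rate=0.10, hours_remaining_in_cycle=720)
# A second identical allocation would exceed the cap, so it's rejected.
assert not cap.try_allocate(hourly_rate=0.10, hours_remaining_in_cycle=720)
```

Note the reservation is pessimistic by design: you pay (in quota, not money) for the worst case, and get the quota back when you release the resource early.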

Data transfer can be pulled into the same model by offering an alternate internet gateway where you pay for a fixed amount of unmetered bandwidth instead of per-byte transfer, as other providers already do.

[0] https://news.ycombinator.com/item?id=45880863

kccqzy 9 hours ago

Reserving the cost until the end of the billing cycle is super unfriendly for spiky traffic and spiky resource usage. And yet one of the main selling points of the cloud is elasticity of resources. If your load is fixed, you wouldn’t even use the cloud after a five minute cost comparison. So your solution doesn’t work for the intended customers of the cloud.

ndriscoll 9 hours ago

It works just fine. There's no reason you couldn't adjust your billing cap on the fly. I work in a medium-sized org that's part of a large one, and we have to funnel any significant resource request (e.g. for more EKS nodes) through our SRE teams for approval anyway.

Actual spiky traffic that you can't plan for or react to is something I've never heard of, and I believe it's a marketing myth. If you find yourself actually trying to add a lot of capacity suddenly, you also learn that the elasticity itself is a myth: the provisioning attempt will fail. Or, e.g., Lambda will hit its scaling rate limit well before a single minimally-sized Fargate container would cap out.

If you don't mind the risk, you could also just not set a billing limit.

The actual reason to use clouds is for things like security/compliance controls.

kccqzy 8 hours ago

I think I am having some misunderstanding about exactly how this cost control works. Suppose that a company in the transportation industry needs 100 CPUs worth of resources most of the day and 10,000 CPUs worth of resources during morning/evening rush hours. How would your reserved cost proposal work? Would it require having a cost cap sufficient for 10,000 CPUs for the entire day? If not, how?

ndriscoll 8 hours ago

10,000 cores is an insane amount of compute (even 100 cores should already be able to handle millions of events/requests per second), and I have a hard time believing a 100x diurnal swing in needs exists at that level. But yes, I was suggesting that they should keep their cap high enough to cover 10,000 cores for the remainder of the billing cycle. If they need those 10,000 cores for 4 hours a day, that's still only a factor of 6 of extra quota, and the quota itself 1. doesn't cost them anything and 2. is currently infinity.
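The factor of 6 follows from the figures in the question (10,000 cores at peak, 100 cores off-peak); the 4-hours-per-day peak is the assumption stated above:

```python
# Quota must cover the peak (10,000 cores) running continuously,
# while actual usage is 10,000 cores for 4 h/day and 100 cores the
# remaining 20 h/day.
cap_core_hours_per_day = 10_000 * 24              # 240,000
actual_core_hours_per_day = 10_000 * 4 + 100 * 20  # 42,000
overhead = cap_core_hours_per_day / actual_core_hours_per_day
print(round(overhead, 1))  # ~5.7, i.e. roughly a factor of 6 of extra quota
```

And since the quota is free (it's a cap, not a reservation you pay for), that 6x overhead costs nothing.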

I also expect that in reality, if you regularly try to provision 10,000 cores of capacity at once, you'll run into provisioning failures. Cost-optimizing your business at the risk of not being able to handle your daily needs is reckless, and if you needed to take that kind of risk to cut your compute costs by 6x, you should instead go on-prem with full provisioning.

Having your servers idle 85% of the day doesn't matter if it's cheaper and less risky than burst provisioning. The only one who benefits from you playing utilization-optimization tricks is Amazon, which will happily charge you more than those idle servers would have cost and sell the unused time to someone else.