comrade1234 9 hours ago

15 years ago or so, a spreadsheet was floating around where you could enter server costs, compute power, etc., and it would tell you when you would break even by buying instead of going with AWS. I think it was leaked from Amazon, because it was always three years to break even, even as hardware changed over time.
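For anyone curious about the shape of that calculation, here is a minimal sketch of the break-even arithmetic such a spreadsheet would encode. All the figures (hardware price, monthly running cost, AWS bill) are made-up illustrations, not anything from the original spreadsheet:

    # Illustrative "buy vs. AWS" break-even arithmetic (all numbers hypothetical).
    # Buying: large one-time capex, smaller monthly running cost.
    # AWS: no capex, larger monthly bill.
    hardware_capex = 144_000   # one-time server purchase
    owned_monthly = 4_000      # colo space, power, admin time per month
    aws_monthly = 8_000        # equivalent AWS bill per month

    # Break even when cumulative costs match:
    #   hardware_capex + owned_monthly * m = aws_monthly * m
    months = hardware_capex / (aws_monthly - owned_monthly)
    print(f"Break-even after ~{months:.0f} months ({months / 12:.1f} years)")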

TonyStr 8 hours ago | parent | next [-]

Azure provides their own "Total Cost of Ownership" calculator for this purpose [0]. Notably, it makes you estimate peripheral costs such as the cost of a server administrator, electricity, etc.

[0] - https://azure-int.microsoft.com/en-us/pricing/tco/calculator...

Symbiote 8 hours ago | parent [-]

I plugged in our own numbers (60 servers we own in a data centre we rent) and Microsoft thinks this costs us an order of magnitude more than it does.

Their "assumption" for hardware purchase prices seems way off compared to what we buy from Dell or HP.

It's interesting that the "IT labour" cost they estimate is $140k for DIY, and $120k for Azure.

The saving they claim is 5 times what we actually spend in total...

TonyStr 7 hours ago | parent [-]

Thank you; I've wanted to see someone use this in the real world. When doing Azure certifications (AZ-900, AZ-204, etc.), they force you to learn about this tool.

Symbiote 3 hours ago | parent [-]

I may be out of date with RAM prices. Dell's configuration tool wants £1000 each for 32GB RDIMMs, but its prices are always significantly higher than what we get by writing to their sales person.

Even so, a rough configuration for a 2-processor, 16-cores-per-processor server with 256GiB RAM comes to $20k, vs the $22k + 100% = $44k quoted by MS. (The 100% is MS's 20%-per-year "maintenance cost" added on to the estimate, i.e. a five-year horizon. In reality this is 0%, as everything is under Dell's warranty.)
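As a rough sketch of that gap, here is the arithmetic with the figures from this comment (the five-year horizon is inferred from the 100% total; everything else is as quoted above):

    # Per-server comparison using the figures above (five-year horizon inferred).
    real_server_price = 20_000     # negotiated Dell config: 2 CPUs, 16 cores each, 256 GiB
    ms_assumed_price = 22_000      # hardware price the TCO tool assumes
    maintenance_rate = 0.20        # 20% of hardware cost per year, per the tool
    years = 5                      # 20% * 5 = 100% extra

    ms_total = ms_assumed_price * (1 + maintenance_rate * years)   # 22k + 100% = 44k
    real_total = real_server_price                                 # maintenance covered by warranty

    print(f"TCO tool estimate per server: ${ms_total:,.0f}")
    print(f"Actual cost per server:       ${real_total:,.0f}")
    print(f"Overstatement factor:         {ms_total / real_total:.1f}x")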

And most importantly, the tool only compares the cost of Azure to constructing and maintaining a data centre! Unless there are other requirements (which would probably rule out Azure anyway) that's daft; a realistic comparison should be to colocation or hired dedicated servers, depending on the scale.

vidarh 7 hours ago | parent | prev | next [-]

If you buy, maybe. Leasing or renting tends to be cheaper from day one. Tack on migration costs and ca. 6 months is a more realistic target. If the spreadsheet always said 3 years, it sounds like an intentional "leak".

g-b-r 8 hours ago | parent | prev | next [-]

Did the AWS part include the egress costs to extract your data from AWS, if you ever want to leave them?

coreylane 7 hours ago | parent [-]

AWS says they will waive all egress costs when exiting: https://aws.amazon.com/blogs/aws/free-data-transfer-out-to-i...

direwolf20 5 hours ago | parent [-]

Because the EU forced them to

Onavo 8 hours ago | parent | prev [-]

Well, somebody should recreate it. I smell a potential startup idea somewhere. There's a ton of "cloud cost optimizer" software out there, but most of it involves tweaking AWS knobs and taking a cut of the savings. A startup that could offload non-critical services from AWS to colo and traditional bare-metal hosting like Hetzner has a strong future.

One thing to keep in mind is that the GPU depreciation curve (over the last 5 years at least) is a little steeper than a 3-year schedule. Current estimates are that the capital value drops dramatically around the third year. For a top-tier H100, depreciation kicks in around the third year, but they mention that for less capable cards like the A100 the depreciation is even worse.

https://www.silicondata.com/use-cases/h100-gpu-depreciation/
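To make "steeper than 3 years" concrete, here is an illustrative residual-value curve. The retention fractions and purchase price are assumptions for the sake of the example, not figures from the link above:

    # Illustrative only: residual value of a GPU whose value drops faster than a
    # flat 3-year write-down (retention rates and price are assumed, not from the link).
    purchase_price = 30_000                       # hypothetical H100 price
    yearly_retention = [0.70, 0.55, 0.40, 0.40]   # assumed fraction of value kept each year

    value = purchase_price
    for year, kept in enumerate(yearly_retention, start=1):
        value *= kept
        print(f"Year {year}: ~${value:,.0f} residual ({value / purchase_price:.0%} of purchase price)")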

Now, this is not factoring in the cost of labour. Labour at SF wages is dreadfully expensive; if your data center is right across the border in Tijuana, on the other hand...