| ▲ | binary132 a day ago |
| I'm struggling to understand what workloads Meta might be running that are _this_ latency-critical. |
|
| ▲ | commandersaki a day ago | parent | next [-] |
| According to a video linked elsewhere in this thread, WhatsApp has Erlang workers that want sub-ms latency. |
|
| ▲ | Pr0Ger a day ago | parent | prev | next [-] |
| It's definitely for ads auctions |
|
| ▲ | dabockster a day ago | parent | prev | next [-] |
| It's Meta. They always push to be that fast on paper, even when it's costly and not really needed. |
|
| ▲ | stuxnet79 a day ago | parent | prev | next [-] |
| Meta is a humongous company. At that scale, any latency reduction has a business impact. |
|
| ▲ | tayo42 a day ago | parent | prev [-] |
| If you have 50,000 servers for your service and you can reduce that by 1 percent, you save 500 servers. Multiply that by maybe $8k per server and you've saved $4M — more than enough to pay for yourself for a year. With Meta the numbers are probably a bit bigger. |
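The arithmetic in the comment above can be sketched as a back-of-envelope calculation; the fleet size, savings fraction, and per-server cost are the hypothetical numbers from the comment, not real Meta figures:

```python
# Back-of-envelope capacity savings from a fleet-wide efficiency win.
# All inputs are the hypothetical numbers from the comment above.
fleet_size = 50_000       # servers running the service
savings_fraction = 0.01   # a 1% efficiency improvement
cost_per_server = 8_000   # rough dollars per server

servers_saved = fleet_size * savings_fraction
dollars_saved = servers_saved * cost_per_server
print(f"{servers_saved:.0f} servers, ${dollars_saved:,.0f} saved")
# prints "500 servers, $4,000,000 saved"
```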
| |
| ▲ | binary132 14 hours ago | parent | next [-] |
| Yes, but latency-optimized schedulers tend to have _worse_ throughput, not better. |
| ▲ | pixelbeat__ a day ago | parent | prev | next [-] |
| LOL (I used to work for Meta, so appreciate the facetious understatement) |
| ▲ | bongodongobob a day ago | parent | prev [-] |
| That's not how it works, though. Budgets are annual. A 1% savings of CPU cycles doesn't show up anywhere; it's a rounding error. They don't have a guy who pulls the servers and sells them ahead of the projection. You bought them for 5 years and they're staying. 5 years from now, that 1% got eaten up by other shit. |
| ▲ | Anon1096 a day ago | parent | next [-] |
| You're wrong about how services that cost 9+ figures to run annually are budgeted. 1% CPU is absolutely massive, and it is well measured and accounted for in these systems. |
| ▲ | bongodongobob a day ago | parent [-] |
| So you prematurely dump hardware you already own when you see CPU usage go down? I don't think so. |
| ▲ | Anon1096 a day ago | parent [-] |
| What you're missing is that for these massive systems there's never enough capacity. You can go look at datacenter buildouts YOY if you'd like. Any and all compute power that can be used is being used. For an individual service like Google Search, that means there are dozens of projects in the hopper that aren't being worked on because there's just not enough hardware to supply the feature (for example, something may have already been tested at small scale and found to be good ranking-wise but compute-expensive). So a team that is able to save 1% CPU can directly repurpose that saved capacity and fund another project. There are whole systems in place for formally claiming CPU savings and clawing them back to fund new efforts. |
|
| |
| ▲ | tayo42 a day ago | parent | prev [-] |
| You don't buy servers once every 5 years. I've done purchasing every quarter and forecasted a year out. You reduce your service's hardware budget by the amount saved for that year. |
| ▲ | bongodongobob a day ago | parent | next [-] |
| 5 years is the lifecycle. You're not going to get rid of a 4-year-old server because you're using fewer cycles than you thought you would. You already bought it. You find something else for it to do, or you have a little extra redundancy. If I increase the mpg of my semi fleet, that doesn't mean I can sell some of my semis off just because the cost per trip goes down. |
| ▲ | a day ago | parent | prev [-] |
| [deleted] |
|