alexgartrell 3 hours ago
For the peanut gallery: I worked with both of these guys at Meta on this. The "servers are only on for a few hours" thing was basically never true, so I have no idea where that claim is coming from. The web performance test alone took more than a few hours to run, and we had way more aggressive soaks for other workloads. My recollection is that "write zeroes" just became a cheaper operation between '12 and '14. A fun fact to distract from the awkwardness: a lot of the kernel work done in the early days was exceedingly scrappy. The port mapping stuff for memcached UDP before SO_REUSEPORT, for example. FB binaries often couldn't even run on vanilla Linux. Over the next several years we put a TON of effort into getting as close to mainline as possible, and now Meta is one of the biggest drivers of Linux development.
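For context on the SO_REUSEPORT aside: the option (mainlined in Linux 3.9) lets several sockets bind the same UDP port so the kernel load-balances datagrams across worker processes, which is what the pre-SO_REUSEPORT "port mapping" hacks approximated. This is a minimal sketch of the upstream behavior, not Facebook's patch; the helper name and use of an ephemeral port are illustrative assumptions.

```python
import socket

def make_reuseport_udp_socket(port: int) -> socket.socket:
    """Bind a UDP socket with SO_REUSEPORT set (Linux >= 3.9).

    With the option set on every socket, multiple sockets may bind the
    same address/port, and the kernel spreads incoming datagrams among
    them. The helper name is illustrative, not from the thread.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind(("127.0.0.1", port))
    return s

if __name__ == "__main__":
    # First "worker" grabs an ephemeral port; the second binds the same
    # port, which would fail with EADDRINUSE absent SO_REUSEPORT.
    a = make_reuseport_udp_socket(0)
    port = a.getsockname()[1]
    b = make_reuseport_udp_socket(port)
    print(a.getsockname()[1] == b.getsockname()[1])
    a.close()
    b.close()
```

Before this existed, each memcached UDP worker had to own a distinct port, with clients (or a shim) mapping requests onto the per-worker ports, hence the kernel-side hacks the comment mentions.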
adsharma 2 hours ago
[ Edit: "servers" in this context meant the HHVM server processes, not the physical servers, which of course had longer uptimes. ] People got promoted for continuous deployment: https://engineering.fb.com/2017/08/31/web/rapid-release-at-m... I think it's fair to say the hardware changed, the deployment strategy changed, and the patches were no longer relevant, so we stopped applying them. When I showed up, there were 100+ patches on top of a 2009 kernel tree. I reduced that to about 10 critical patches and rebased them at a six-month cadence over 2-3 years. Upstreamed a few. I didn't go around saying those old patches were bad ideas and that I got rid of them. How you say it matters.
eduction 33 minutes ago
I use Facebook and Instagram, and I think you all suck. Slagging each other in public. Grow tf up.