nickysielicki | 2 hours ago
Totally matches my experience, and it feels bizarre inside-looking-out that nobody else talks about it. Hardware from 2010-2020 was remarkably stable, and CPUs are still as stable as they were, but we've had this large influx of money spent on these chips that fall over if you look at them funny. I think it leads to a lot of people thinking, "we must be doing something wrong", because it's just outside of their mental model that hardware failures can occur at this rate. But that's just the world we live in.

It's a perfect storm: a lot of companies are doing HPC-style distributed computing for the first time, and lack experience in debugging the issues that are unique to it. On top of that, the hardware is moving very fast and they're ill-equipped to update their software and drivers at the rate required to have a good experience. On top of that, the stakes are higher because your cluster is only as strong as its weakest node, which means a single hardware failure can turn the entire multi-million dollar cluster into a paperweight (rough numbers at the end of this comment), which adds more pressure and stress to get it all fixed.

Updating your software means taking that same multi-million dollar cluster offline for several hours, which is seen as a cost rather than a good investment of time. And a lot of the experts in HPC-style distributed computing will sell you "supported" software, which is basically just paying for the privilege of using outdated software that lacks the bug fixes your cards might desperately need. That model made sense in the 2010s, when Linux (kernel and userspace) was less stable and you genuinely needed to lock your dependencies and let the bugs work themselves out. But that's the exact opposite of what you want to be doing in 2026.

You put all of this together, and it's difficult to be confident whether the hardware is bad, or going bad, or whether the failures are only manifesting because the cards are exposed to software bugs, or maybe both. Yikes, it's no fun.
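To put rough numbers on the "weakest node" point: if each node independently has some small chance of a hardware fault per day, the chance that at least one node in the cluster faults on a given day is 1 - (1 - p)^N, which climbs toward certainty as the cluster grows. A minimal sketch, with per-node fault rates I made up purely for illustration:

    # Probability that at least one of N nodes faults, given an
    # independent per-node fault probability p (illustrative numbers only).
    def p_any_failure(nodes: int, p_node: float) -> float:
        return 1 - (1 - p_node) ** nodes

    # e.g. 1024 nodes at a 0.1% per-node daily fault rate
    # -> roughly a 64% chance that some node faults on any given day.
    print(p_any_failure(1024, 0.001))

So even if each individual card looks pretty reliable, a synchronous training job spanning the whole cluster gets interrupted constantly, which is exactly the pressure described above.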