hedora | 2 hours ago
Single event upsets are already commonplace at sea level, well below data-center scale. The section of the article that discusses them isn't great. At least for FPGAs, the state of the art is to run 2-3 copies of the logic and detect output discrepancies before they can cause side effects. I guess you could build a GPU that way, but it would have 1/3 the parallelism of a normal one for the same die size and power budget, whereas the article claims the loss would be 2-3 orders of magnitude. It's still a terrible idea, of course.
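The redundancy scheme described above is triple modular redundancy (TMR): run N copies and outvote the corrupted one before its result escapes. A minimal software sketch of the idea (hypothetical `tmr` helper, not actual FPGA tooling, which replicates logic in hardware):

```python
# Sketch of triple modular redundancy: run three copies of a computation
# and majority-vote the outputs, so a single event upset that corrupts
# one copy is outvoted by the other two before it causes side effects.
from collections import Counter

def tmr(compute, x):
    """Run `compute(x)` three times; return the majority output.

    Raises if all three copies disagree (multiple faults, unrecoverable).
    """
    results = [compute(x) for _ in range(3)]
    winner, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: copies disagree")
    return winner

# Simulate one copy suffering a bit flip in its result:
outputs = iter([42, 42 ^ 0x10, 42])  # second copy returns a corrupted value
print(tmr(lambda _: next(outputs), None))  # -> 42
```

This is exactly where the overhead comes from: three copies of the logic plus a voter, i.e. roughly 1/3 the useful parallelism for the same die area and power, not orders of magnitude.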