more_corn 13 hours ago:
Except you don’t build a data center; you add a GPU to an individual Starlink node. If you can do that a couple hundred or a couple thousand times, you’ve got a lot of compute in space. The next question is how you would redesign compute around your distributed power and cooling profiles. The article doesn’t talk about the actual engineering challenges (such as scaling down the radiative cooling design, matching the compute node to the maximum feasible power profile, etc.). I’m not arguing it’ll be easy or will ultimately work, but articles like this are unhelpful because they don’t address the fundamental insight being proposed.
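To make the per-node cooling constraint concrete, here is a rough Stefan-Boltzmann sketch; the emissivity, radiator temperature, and 700 W TDP below are illustrative assumptions on my part, not figures from the article:

```python
# Back-of-envelope: radiator area needed to reject one GPU's heat in vacuum.
# Emissivity, radiator temperature, and TDP are illustrative assumptions.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiator_area_m2(power_w, emissivity=0.9, t_radiator_k=300.0):
    """Area needed to radiate `power_w` to deep space (~4 K sink, so the
    sink term is negligible). Ignores solar and Earth infrared loading,
    which would make the real requirement larger."""
    flux_w_per_m2 = emissivity * SIGMA * t_radiator_k**4  # W/m^2 emitted
    return power_w / flux_w_per_m2

# Assuming a ~700 W datacenter-class GPU:
print(f"{radiator_area_m2(700):.1f} m^2 of radiator per GPU")  # ~1.7 m^2
```

Even under these generous assumptions (no solar loading, an unobstructed view of deep space), each GPU needs on the order of a square meter or two of dedicated radiator, which is exactly the kind of per-node power-and-cooling tradeoff the article would need to address.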
ianburrell 5 hours ago (reply):
OpenAI has over a million GPUs. Starlink satellites would be pointless for doing computation because they are spread across the Earth, resulting in horrible latency; AI companies spend a lot of money on very fast interconnects within a datacenter. Starlink satellites with GPUs might have some advantage for running edge workloads, but most Starlink customers are close to a ground station, and it makes a lot more sense to put the GPUs there. They are a lot easier to manage than new satellites, which could take years to launch.
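For rough numbers behind the latency point, here is a light-speed lower bound on hop delay; the link distances and the in-rack figure are my illustrative assumptions, not measured values:

```python
# Back-of-envelope: one-way propagation delay between compute nodes.
# All distances are illustrative assumptions, not measured link lengths.

C_KM_PER_S = 299_792  # speed of light in vacuum, km/s

def one_way_delay_ms(distance_km):
    """Lower bound on one-way delay: pure propagation, no switching or queuing."""
    return distance_km / C_KM_PER_S * 1e3

links = {
    "GPUs within one rack (~2 m)": 0.002,
    "neighboring satellites in a plane (~1,000 km, assumed)": 1_000,
    "across the constellation (~10,000 km, assumed)": 10_000,
}
for label, km in links.items():
    print(f"{label}: {one_way_delay_ms(km):.5f} ms minimum")
# Rack scale is effectively instantaneous; satellite hops start around
# ~3 ms one-way, thousands of times slower than in-datacenter links.
```

Collective operations during training assume microsecond-scale hops, so millisecond-plus inter-satellite links make the constellation behave like a WAN rather than a cluster.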