nunez | 2 hours ago
Precisely why every bigco is spending $$$$$ buying/reusing GPUs to build its own inference serving stack on open-source models (usually gpt-oss or one of the Llama variants; many bigcos in the US cannot run PRC models). That, and having more control over data locality. Those same companies are getting sweetheart deals from the frontier AI labs in the hope that infrastructure costs eventually fall enough to flip the economics to profitability, but it's still a risky position to be in. (Having their own infrastructure gives the bigcos huge leverage, even if it's only 80% as good as frontier.)