maxloh 12 hours ago

Is the training cost really that high, though?

The Allen Institute (a non-profit) just released the Molmo 2 and Olmo 3 models. They trained these from scratch using public datasets, and they are performance-competitive with Gemini in several benchmarks [0] [1].

AMD was also able to successfully train an older version of OLMo on their hardware using the published code, data, and recipe [2].

If a non-profit and a chip vendor (training for marketing purposes) can do this, it clearly doesn't require "burning 10 years of cash flow" or a Google-scale TPU farm.

[0]: https://allenai.org/blog/molmo2

[1]: https://allenai.org/blog/olmo3

[2]: https://huggingface.co/amd/AMD-OLMo

turtlesdown11 11 hours ago | parent | next [-]

No, of course the training costs aren't that high. Apple's ten years of future free cash flow is greater than a trillion dollars (they are above $100b per year). Obviously, the training costs are a trivial amount compared to that figure.

ufmace 7 hours ago | parent | next [-]

What I'm wondering: their future cash flow may be massive compared to the cost of any conceivable training effort, but the market for servers and datacenters seems pretty saturated right now. Maybe, for all their available capital, they just can't get sufficient compute and storage on a reasonable schedule.

bombcar 10 hours ago | parent | prev | next [-]

I have no idea what AI involves, but "training" sounds like a one-and-done process. How is the result "stored"? If you have trained up a Gemini, can you "clone" it, and if so, what is needed?

I was under the impression that all these GPUs and such were needed to run the AI, not just to ingest the data.

DougBTX 8 hours ago | parent | next [-]

> but how is the result "stored"

Like this: https://huggingface.co/docs/safetensors/index
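
To make that concrete: a trained model is essentially a large set of named weight tensors written to disk. A minimal sketch using the safetensors Python API (the layer names and tiny shapes here are illustrative toys, not from any real model):

    import torch
    from safetensors.torch import save_file, load_file

    # Stand-in weights; a real LLM has billions of parameters spread across
    # many such tensors.
    weights = {
        "embedding.weight": torch.randn(1000, 64),
        "layer0.attention.weight": torch.randn(64, 64),
    }

    # "Storing" the result of training is just serializing these tensors...
    save_file(weights, "model.safetensors")

    # ...and "cloning" the model amounts to copying the file and loading it
    # on another machine.
    restored = load_file("model.safetensors")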

esafak 10 hours ago | parent | prev | next [-]

Yes, serving requires infra too, but you can use infra optimized for serving; Nvidia GPUs are not the only game in town.
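
For instance, a rough sketch of inference-oriented loading, assuming the Hugging Face transformers library plus accelerate (the model name is just an illustration): weights are loaded in reduced precision and spread across whatever accelerators are present, which is a much lighter setup than training.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "allenai/OLMo-2-1124-7B"  # illustrative open model; any causal LM works
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.float16,  # half precision: roughly half the memory of fp32
        device_map="auto",          # place layers on available GPUs/CPU (needs accelerate)
    )

    inputs = tokenizer("Serving a trained model", return_tensors="pt").to(model.device)
    print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))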

tefkah 9 hours ago | parent | prev [-]

Theoretically it would be much less expensive to just keep running the existing models, but of course none of the current leaders are going to stop training new ones any time soon.

bombcar 7 hours ago | parent [-]

So are we on a hockey stick right now, where each new model is so much better than the previous one that you have to keep training?

Because almost every previous case of something like this eventually leveled out.

amelius 9 hours ago | parent | prev [-]

Hiring the right people should also be trivial with that amount of cash.

lostmsu 9 hours ago | parent | prev | next [-]

No, it doesn't beat Gemini in any benchmarks. It beats Gemma, which isn't SoTA even among open models of that size; that would be Nemotron 3 or GPT-OSS 20B.

PunchyHamster 6 hours ago | parent | prev [-]

My prediction is that they might switch once the AI craze simmers down to a more reasonable level.