audunw 5 hours ago

Models don’t get old as fast as they used to. A lot of the improvements seem to go into making the models more efficient, or into the infrastructure around the models. If newer models mainly compete on efficiency, it means you can run older models for longer on more efficient hardware while staying competitive.

If power costs are significantly lower, they can pay for themselves by the time they are outdated. It also means you can run more instances of a model in one datacenter, and that seems to be a big challenge these days: simply building enough data centres and getting power to them. (See the ridiculous plans for building data centres in space.)
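To make that concrete, here's a back-of-envelope way to frame the payback question. Every figure below is a made-up placeholder, not real pricing for any chip or datacenter:

    # Back-of-envelope TCO: custom/etched accelerator vs. a general-purpose GPU.
    # Every figure is an illustrative assumption, not real pricing for any product.
    HOURS_PER_YEAR = 24 * 365
    ELECTRICITY = 0.10          # $/kWh, assumed all-in (power + cooling overhead)

    def tco(purchase_price, avg_kw, years):
        # total cost of ownership = purchase price + lifetime electricity
        return purchase_price + avg_kw * HOURS_PER_YEAR * years * ELECTRICITY

    gpu    = tco(purchase_price=30_000, avg_kw=1.0, years=3)   # assumed
    custom = tco(purchase_price=15_000, avg_kw=0.2, years=3)   # assumed

    print(f"GPU 3-year TCO:    ${gpu:,.0f}")
    print(f"custom 3-year TCO: ${custom:,.0f}")

Whether the specialised part actually wins depends entirely on the numbers you plug in; the point is just that purchase price, power draw and useful lifetime trade off against each other, and the useful lifetime is the term that gets longer if models stop aging so quickly.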

A huge part of the cost of making chips is the masks. The transistor masks are expensive; the metal masks less so.

I figure they will eventually freeze the transistor layer and use metal masks to reconfigure the chips when the new models come out. That should further lower costs.
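A toy amortization sketch of why that helps (all dollar figures are invented placeholders, not quotes for any real process node):

    # Toy mask-cost amortization -- all dollar figures are invented placeholders.
    FULL_MASK_SET  = 20_000_000   # assumed: full reticle set at an advanced node
    METAL_ONLY_SET = 2_000_000    # assumed: respin that only changes metal layers
    UNITS_PER_GEN  = 100_000      # assumed: chips shipped per model generation

    full_per_chip  = FULL_MASK_SET / UNITS_PER_GEN
    metal_per_chip = METAL_ONLY_SET / UNITS_PER_GEN

    print(f"full respin:       ${full_per_chip:,.0f} of mask NRE per chip")
    print(f"metal-only respin: ${metal_per_chip:,.0f} of mask NRE per chip")

Spreading only the metal-layer NRE over each model generation's volume is basically the structured-ASIC playbook, if I understand the idea right.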

I don’t really know if this makes sense. It depends on whether we get new breakthroughs in LLM architecture or not. It’s a gamble, essentially. But honestly, so is buying Nvidia Blackwell chips for inference. I could see them becoming uneconomical very quickly if any of the alternative inference-optimised hardware pans out.

FieryTransition an hour ago | parent | next [-]

From my own experience, models are at the tipping point of being useful for prototyping software, and those are very large frontier models that aren't feasible to get down on wafers unless someone does something smart.

I really don't like the hallucination rate of most models, though it is improving, so that point still seems far in the future.

What I could see, though, is the whole unit they make being power-efficient enough to run on a robotics platform for human-computer interaction.

It makes sense that they would try to make their tech as repurposable as they can, since making changes is fraught with long lead times and risk.

But if we look long term and pretend they get it to work, they just need to stay afloat until better, smaller models can be made with their technology, so it becomes a waiting game for investors and a risk assessment.

johnsimer 4 hours ago | parent | prev [-]

“Models don’t get old as fast as they used to”

^^^ I think the opposite is true

Anthropic and OpenAI are releasing new versions every 60-90 days it seems now, and you could argue they’re going to start releasing even faster

robotpepi 3 hours ago | parent [-]

Are they becoming better at the same rate as before though?

FieryTransition an hour ago | parent [-]

In my unscientific experience, yes, but being better at a certain rate is hard to really quantify, unless you just pull some random benchmark numbers.