agentcoops 12 hours ago

I hear your argument, but short of major algorithmic breakthroughs I am not convinced the global demand for GPUs will drop any time soon. Of course I could easily be wrong, but regardless I think the most predictable cause for a drop in the NVIDIA price would be that the CHIPS act/recent decisions by the CCP lead a Chinese firm to bring to market a CUDA-compatible and reliable GPU at a fraction of the cost. It should be remembered that NVIDIA's /current/ value is based on their being locked out of their second-largest market (China), with no investor expectation of that changing in the future. Given the current geopolitical landscape, in the hypothetical case where a Chinese firm markets such a chip, we should expect that US firms would be prohibited from purchasing it, while it's less clear that Europeans or Saudis would be. Even then, if NVIDIA were not to lower their prices at all, US firms would be at a tremendous cost disadvantage with respect to compute, while their competitors would no longer have one.

All hypothetical, of course, but to me that's the most convincing bear case I've heard for NVIDIA.

laughing_man 16 minutes ago | parent | next [-]

I suspect major algorithmic breakthroughs would accelerate the demand for GPUs instead of making it fall off, since the cost to apply LLMs would go down.

coryrc 11 hours ago | parent | prev | next [-]

Not that locked out: https://www.cnbc.com/2025/12/31/160-million-export-controlle...

reppap 7 hours ago | parent | prev | next [-]

People will want more GPUs, but will they be able to fund them? At what point do the venture capital and loans run out? People will not keep pouring hundreds of billions into this if the returns don't start coming.

gadflyinyoureye 4 hours ago | parent [-]

Money will be interesting the next few years.

There is a real chance that the Japanese carry trade will close soon, with the BoJ seeing rates move up to 4%. This means liquidity will drain from the US markets back into Japan. On the US side there is going to be a lot of inflation between money printing, refund checks, amortization changes, and a possible war footing. Who knows?

tracker1 7 hours ago | parent | prev | next [-]

Doesn't even necessarily need to be CUDA compatible... there's OpenCL and Vulkan as well, and China will likely throw enough resources at the problem to bring the various libraries into closer alignment and ease use/development.

I do think China is still 3-5 years from being really competitive, but even if they hit 40-50% of NVidia's performance, then depending on pricing and energy costs they could still make significant inroads, even with legal pressure/bans, etc.

bigyabai 4 hours ago | parent [-]

> there's OpenCL and Vulkan as well

OpenCL is chronically undermaintained & undersupported, and Vulkan only covers a small subset of what CUDA does so far. Neither has the full support of the tech industry (though both are supported by Nvidia, ironically).

It feels like nobody in the industry wants to beat Nvidia badly enough, yet. Apple and AMD are trying to supplement raster hardware with inference silicon; both of them are afraid to implement a holistic compute architecture à la CUDA. Intel is reinventing the wheel with OneAPI, Microsoft is doing the same with ONNX, Google ships generic software and withholds their bespoke hardware, and Meta is asleep at the wheel. All of them hate each other, none of them trust Khronos anymore, and the value of a CUDA replacement has ballooned to the point that greed might be their only motivator.

I've wanted a proper, industry-spanning CUDA competitor since high school. I'm beginning to realize it probably won't happen within my lifetime.

zozbot234 3 hours ago | parent [-]

The modern successor to OpenCL is SYCL, and there's been some limited convergence with Vulkan Compute (they're still based on distinct programming models and even different SPIR-V varieties under the hood, but the distance is narrowing somewhat).
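
For anyone who hasn't looked at it, this is roughly what the SYCL single-source model looks like. A minimal, illustrative vector-add sketch (assuming a SYCL 2020 implementation such as DPC++ or AdaptiveCpp is installed; the toy kernel and names are mine, not anyone's production code), structurally close to a CUDA __global__ kernel plus its host launch code:

    // Illustrative SYCL 2020 vector add (toy example, not production code).
    #include <sycl/sycl.hpp>
    #include <iostream>
    #include <vector>

    int main() {
      constexpr size_t n = 1024;
      std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

      sycl::queue q;  // picks a default device (a GPU if one is visible)
      {
        // Buffers handle host<->device data movement implicitly.
        sycl::buffer<float> ba(a.data(), sycl::range<1>(n));
        sycl::buffer<float> bb(b.data(), sycl::range<1>(n));
        sycl::buffer<float> bc(c.data(), sycl::range<1>(n));

        q.submit([&](sycl::handler& h) {
          sycl::accessor A(ba, h, sycl::read_only);
          sycl::accessor B(bb, h, sycl::read_only);
          sycl::accessor C(bc, h, sycl::write_only, sycl::no_init);
          // The lambda below is the device kernel, compiled from the same
          // C++ source as the host code (analogous to a CUDA __global__).
          h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
            C[i] = A[i] + B[i];
          });
        });
      }  // buffer destructors copy results back into the host vectors

      std::cout << "c[0] = " << c[0] << "\n";  // expect 3
    }

Built with a SYCL compiler (e.g. icpx -fsycl, or AdaptiveCpp), the same source can in principle target NVIDIA, AMD, or Intel backends, which is the whole selling point versus CUDA lock-in.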

iLoveOncall 11 hours ago | parent | prev [-]

> short of major algorithmic breakthroughs I am not convinced the global demand for GPUs will drop any time soon

Or, you know, when LLMs don't pay off.

unsupp0rted 8 hours ago | parent | next [-]

Even if LLMs didn't advance at all from this point onward, there's still loads of productive work that could be optimized / fully automated by them, at no worse output quality than the low-skilled humans we're currently throwing at that work.

pvab3 7 hours ago | parent | next [-]

Inference requires a fraction of the power that training does. According to the Villalobos paper, the median date for running out of fresh training data is 2028. At some point we won't be training bigger and bigger models every month: we will run out of additional material to train on, things will keep commoditizing, and then the amount of training happening will significantly decrease unless new avenues open up for new types of models. But our current LLMs are much more compute-intensive than any other type of generative or task-specific model.
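
To make the "running out" claim concrete, here's a back-of-envelope sketch (the symbols are mine, not the paper's exact model): if $S$ is the effective stock of usable human-generated text and the largest training runs consume $D_0$ tokens today, growing by a factor of $(1+g)$ per year, the stock is exhausted around

    t^{*} \approx t_0 + \frac{\log(S / D_0)}{\log(1 + g)}

Villalobos et al. estimate the stock and the growth rate from web-scale data and scaling trends; 2028 is the median of the resulting distribution over $t^{*}$, with the uncertainty in $S$ and $g$ spreading the projection over a range of years around it.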

SequoiaHope 40 minutes ago | parent | next [-]

Run out of training data? They're going to put these things in humanoids (which are weirdly cheap now), record high-resolution video and other sensor data of real-world tasks, and train huge multimodal Vision Language Action models, etc.

The world is more than just text. We can never run out of pixels if we point cameras at the real world and move them around.

I work in robotics and I don’t think people talking about this stuff appreciate that text and internet pictures is just the beginning. Robotics is poised to generate and consume TONS of data from the real world, not just the internet.

zozbot234 7 hours ago | parent | prev | next [-]

> We will run out of additional material to train on

This sounds a bit silly. More training will generally result in better modeling, even for a fixed amount of genuine original data. At current model sizes it's essentially impossible to overfit to the training data, so there's no reason why we should just "stop".

_0ffh 5 hours ago | parent | next [-]

You'd be surprised how quickly improvement of autoregressive language models levels off with epoch count (though, admittedly, one epoch is a LOT). Diffusion language models, otoh, do keep benefiting for much longer, fwiw.

pvab3 6 hours ago | parent | prev [-]

I'm just talking about text generated by human beings. You can keep retraining with more parameters on the same corpus.

https://proceedings.mlr.press/v235/villalobos24a.html

x-complexity 4 hours ago | parent [-]

> I'm just talking about text generated by human beings.

That in itself is a goalpost shift from

> > We will run out of additional material to train on

Where it is implied "additional material" === "all data, human + synthetic"

------

There's still some headroom left in the synthetic data playground, as cited in the paper linked:

https://proceedings.mlr.press/v235/villalobos24a.html ( https://openreview.net/pdf?id=ViZcgDQjyG )

"On the other hand, training on synthetic data has shown much promise in domains where model outputs are relatively easy to verify, such as mathematics, programming, and games (Yang et al., 2023; Liu et al., 2023; Haluptzok et al., 2023)."

With the caveat that translating this success outside of these domains is hit-or-miss:

"What is less clear is whether the usefulness of synthetic data will generalize to domains where output verification is more challenging, such as natural language."

The main bottleneck in this neck of the woods will be (X := how many additional domains can be made easily verifiable). So long as (the rate of X) >> (training absorption rate), the road can be extended for a while longer.

yourapostasy 7 hours ago | parent | prev [-]

Inference leans heavily on GPU RAM capacity and bandwidth during the decode phase, where an increasing share of time is spent as people find better ways to leverage inference. So NVIDIA's customers will arguably demand a different product mix as the market shifts away from the current training-friendly parts. I suspect there will be more than enough demand for inference that whatever power is freed by a relative slackening of training demand will be more than made up for by the power needed to drive a large inference market.
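
As a rough illustration of why decode is bandwidth-bound (the numbers are my own ballpark figures, purely for intuition): at batch size 1, generating each token has to stream essentially all of the model weights out of GPU RAM, so

    \text{tokens/s per stream} \;\lesssim\; \frac{\text{HBM bandwidth}}{\text{bytes of weights}} \;\approx\; \frac{3.35\,\text{TB/s (H100 SXM)}}{70\,\text{GB (70B params @ 8-bit)}} \;\approx\; 48

Batching, KV-cache handling, and sheer memory capacity are what claw that back, which is why inference-oriented parts tend to trade raw FLOPS for more and faster memory.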

It isn’t the panacea some make it out to be, but there is obvious utility here to sell. The real argument is shifting towards the pricing.

SchemaLoad 8 hours ago | parent | prev [-]

How much of the current usage is productive work that's worth paying for vs personal usage / spam that would just drop off once usage charges come in? I imagine the flood of slop videos on YouTube and Instagram would shrink if users had to pay fair prices to use the models.

The companies might also downgrade the quality of the models to make it more viable to provide as an ad-supported service, which would again reduce utilisation.

unsupp0rted 8 hours ago | parent | next [-]

For any "click here and type into a box" job for which you'd hire a low-skilled worker and give them an SOP to follow, you can have an LLM-ish tool do it.

And probably for the slightly more skilled email jobs that have infiltrated nearly all companies too.

Is that productive work? Well if people are getting paid, often a multiple of minimum wage, then it's productive-seeming enough.

greree 2 hours ago | parent [-]

Another bozo making fun of other job classes.

Why are there still customer service reps? Shouldn’t they all be gone by now due to this amazing technology?

Ah, tumbleweed.

bethekidyouwant 3 hours ago | parent | prev [-]

Who is generating videos for free?

stingraycharles 8 hours ago | parent | prev | next [-]

Exactly: the current spend on LLMs is based on extremely high expectations, with the vendors operating at a loss. It's very reasonable to assume that those expectations will not be met and that spending will slow down as well.

Nvidia’s valuation is based on the current trend continuing and even increasing, which I consider unlikely in the long term.

bigyabai 7 hours ago | parent [-]

> Nvidia’s valuation is based on the current trend continuing

People said this back when Folding@Home was dominated by Team Green years ago. Then again when GPUs sold out during the cryptocurrency boom, and now again as Nvidia addresses LLM demand.

Nvidia's valuation is backstopped by the fact that Russia, Ukraine, China, and the United States are all tripping over themselves for the chance to deploy its hardware operationally. If the world goes to war (which is an unfortunate likelihood), then Nvidia will be the only trillion-dollar defense empire since the DoD's Last Supper.

matthewdgreen 7 hours ago | parent [-]

China is restricting purchases of H200s. The strong likelihood is that they're doing this to promote their own domestic competitors. It may take a few years for those chips to catch up and enter full production, but it's hard to envision any "trillion dollar" Nvidia defense empire once that happens.

bigyabai 7 hours ago | parent [-]

It's very easy to envision. America needs chips, and Intel can't do most of this stuff.

zozbot234 7 hours ago | parent [-]

Intel makes GPUs.

bigyabai 6 hours ago | parent [-]

Intel's GPU designs make AMD look world-class by comparison. Outside of transcode applications, those Arc cards aren't putting up a fight.

selfhoster11 10 hours ago | parent | prev [-]

They already are paying off. The nature of LLMs means that they will require expensive, fast hardware, which is a large capex.

kortilla 10 hours ago | parent | next [-]

They aren’t yet because the big providers that paid for all of this GPU capacity aren’t profitable yet.

They continually leapfrog each other and shift customers around, which indicates that current capacity is already higher than what is required for the usage people actually pay for.

MrDarcy 9 hours ago | parent [-]

Google, Amazon, and Microsoft aren’t profitable?

notyourwork 9 hours ago | parent | next [-]

I assume the reference was that AI use cases are not profitable. Those companies are subsidizing the cost, and OpenAI/Grok are burning money.

lossyalgo 6 hours ago | parent | next [-]

Yeah, but OpenAI is adding ads this year to the free versions, which I'm guessing cover most of their users. They are probably betting on taking a big slice of Google's advertising monopoly pie (which is why Google is now also all-in on forcing opt-out Gemini onto every product they own; they can see the writing on the wall).

onion2k 6 hours ago | parent | prev [-]

Google, Amazon, and Microsoft do a lot of things that aren't profitable in themselves. There is no reason to believe a company will kill a product line just because it makes a loss. There are plenty of other reasons to keep it running.

josefx 9 hours ago | parent | prev | next [-]

Aren't all of Microsoft's AI products OpenAI-based? OpenAI has always been burning money.

wolfram74 8 hours ago | parent | prev | next [-]

Do you think it's odd that you only listed companies with already existing revenue streams, and not companies that started with, and only have, generative algos as their product?

dangus 8 hours ago | parent | prev | next [-]

How many business units have Google and Microsoft shut down or stopped investing in because they were unprofitable?

I hear Meta is having massive VR division layoffs…who could have predicted?

Raw popularity does not guarantee sustainability. See: Vine, WeWork, MoviePass.

Forgeties79 9 hours ago | parent | prev [-]

Where? Who’s in the black?