_fat_santa 13 hours ago

This article focuses more on technical analysis of the stock than on the underlying business fundamentals that would lead to a stock dump.

My 30k ft view is that the stock will inevitably slide as AI datacenter spending goes down. Right now Nvidia is flying high because datacenters are breaking ground everywhere but eventually that will come to an end as the supply of compute goes up.

The counterargument to this is that the "economic lifespan" of an Nvidia GPU is 1-3 years depending on where it's used, so there's a case to be made that Nvidia will always have customers coming back for the latest and greatest chips. The problem I have with this argument is that it's simply unsustainable to spend that much every 2-3 years, and we're already seeing this as Google and others extend their depreciation of GPUs to something like 5-7 years.

agentcoops 12 hours ago | parent | next [-]

I hear your argument, but short of major algorithmic breakthroughs I am not convinced the global demand for GPUs will drop any time soon. Of course I could easily be wrong, but regardless I think the most predictable cause for a drop in the NVIDIA price would be the CHIPS act/recent decisions by the CCP leading a Chinese firm to bring to market a CUDA-compatible and reliable GPU at a fraction of the cost. It should be remembered that NVIDIA's /current/ value is based on their being locked out of their second largest market (China) with no investor expectation of that changing in the future. Given the current geopolitical landscape, in the hypothetical case where a Chinese firm markets such a chip we should expect that US firms would be prohibited from purchasing them, while it's less clear that Europeans or Saudis would be. Even so, if NVIDIA were not to lower their prices at all, US firms would be at a tremendous cost disadvantage while their competitors would no longer have one with respect to compute.

All hypothetical, of course, but to me that's the most convincing bear case I've heard for NVIDIA.

laughing_man 18 minutes ago | parent | next [-]

I suspect major algorithmic breakthroughs would accelerate the demand for GPUs instead of making it fall off, since the cost to apply LLMs would go down.

coryrc 11 hours ago | parent | prev | next [-]

Not that locked out: https://www.cnbc.com/2025/12/31/160-million-export-controlle...

reppap 7 hours ago | parent | prev | next [-]

People will want more GPUs, but will they be able to fund them? At what point do the venture capital and loans run out? People will not keep pouring hundreds of billions into this if the returns don't start coming.

gadflyinyoureye 4 hours ago | parent [-]

Money will be interesting the next few years.

There is a real chance that the Japanese carry trade will close soon, with the BoJ seeing rates move up to 4%. This means liquidity will drain from US markets back into Japan. On the US side there is going to be a lot of inflation between money printing, refund checks, amortization changes and a possible war footing. Who knows?

tracker1 7 hours ago | parent | prev | next [-]

Doesn't even necessarily need to be CUDA compatible... there's OpenCL and Vulkan as well, and China will likely throw enough resources at the problem to bring the various libraries into closer alignment and ease use/development.

I do think China is still 3-5 years from being really competitive, but even if they only hit 40-50% of Nvidia's performance, depending on pricing and energy costs, they could still make significant inroads, even with legal pressure/bans, etc.

bigyabai 4 hours ago | parent [-]

> there's OpenCL and Vulkan as well

OpenCL is chronically undermaintained & undersupported, and Vulkan only covers a small subset of what CUDA does so far. Neither has the full support of the tech industry (though both are supported by Nvidia, ironically).

It feels like nobody in the industry wants to beat Nvidia badly enough, yet. Apple and AMD are trying to supplement raster hardware with inference silicon; both of them are afraid to implement a holistic compute architecture a-la CUDA. Intel is reinventing the wheel with OneAPI, Microsoft is doing the same with ONNX, Google ships generic software and withholds their bespoke hardware, and Meta is asleep at the wheel. All of them hate each other, none of them trust Khronos anymore, and the value of a CUDA replacement has ballooned to the point that greed might be their only motivator.

I've wanted a proper, industry-spanning CUDA competitor since high school. I'm beginning to realize it probably won't happen within my lifetime.

zozbot234 3 hours ago | parent [-]

The modern successor to OpenCL is SYCL and there's been some limited convergence with Vulkan Compute (they're still based on distinct programming models and even SPIR-V varieties under the hood, but the distance is narrowing somewhat).

iLoveOncall 11 hours ago | parent | prev [-]

> short of major algorithmic breakthroughs I am not convinced the global demand for GPUs will drop any time soon

Or, you know, when LLMs don't pay off.

unsupp0rted 8 hours ago | parent | next [-]

Even if LLMs didn't advance at all from this point onward, there's still loads of productive work that could be optimized / fully automated by them, at no worse output quality than the low-skilled humans we're currently throwing at that work.

pvab3 7 hours ago | parent | next [-]

Inference requires a fraction of the power that training does. According to the Villalobos paper, the median date for running out of human-generated training data is 2028. At some point we won't be training bigger and bigger models every month. We will run out of additional material to train on, things will continue commodifying, and then the amount of training happening will significantly decrease unless new avenues open for new types of models. But our current LLMs are much more compute-intensive than any other type of generative or task-specific model.

SequoiaHope 41 minutes ago | parent | next [-]

Run out of training data? They’re going to put these things in humanoids (they are weirdly cheap now) and record high resolution video and other sensor data of real world tasks and train huge multimodal Vision Language Action models etc.

The world is more than just text. We can never run out of pixels if we point cameras at the real world and move them around.

I work in robotics and I don’t think people talking about this stuff appreciate that text and internet pictures is just the beginning. Robotics is poised to generate and consume TONS of data from the real world, not just the internet.

zozbot234 7 hours ago | parent | prev | next [-]

> We will run out of additional material to train on

This sounds a bit silly. More training will generally result in better modeling, even for a fixed amount of genuine original data. At current model sizes, it's essentially impossible to overfit to the training data so there's no reason why we should just "stop".

_0ffh 5 hours ago | parent | next [-]

You'd be surprised how quickly improvement of autoregressive language models levels off with epoch count (though, admittedly, one epoch is a LOT). Diffusion language models, otoh, do keep improving for much longer, fwiw.

pvab3 6 hours ago | parent | prev [-]

I'm just talking about text generated by human beings. You can keep retraining with more parameters on the same corpus

https://proceedings.mlr.press/v235/villalobos24a.html

x-complexity 4 hours ago | parent [-]

> I'm just talking about text generated by human beings.

That in itself is a goalpost shift from

> > We will run out of additional material to train on

Where it is implied "additional material" === "all data, human + synthetic"

------

There's still some headroom left in the synthetic data playground, as cited in the paper linked:

https://proceedings.mlr.press/v235/villalobos24a.html ( https://openreview.net/pdf?id=ViZcgDQjyG )

"On the other hand, training on synthetic data has shown much promise in domains where model outputs are relatively easy to verify, such as mathematics, programming, and games (Yang et al., 2023; Liu et al., 2023; Haluptzok et al., 2023)."

With the caveat that translating this success outside of these domains is hit-or-miss:

"What is less clear is whether the usefulness of synthetic data will generalize to domains where output verification is more challenging, such as natural language."

The main bottleneck for this neck of the woods will be (X := how many additional domains can be made easily verifiable). So long as (the rate of X) >> (training absorption rate), the road can be extended for a while longer.

yourapostasy 7 hours ago | parent | prev [-]

Inference leans heavily on GPU RAM and RAM bandwidth for the decode phase, where an increasingly greater share of time is being spent as people find better ways to leverage inference. So NVIDIA customers are arguably going to demand a different product mix as the market shifts away from the current training-friendly products. I suspect there will be more than enough demand for inference that whatever power is freed up by a relative slackening of training demand will be more than made up, and then some, by the power demand of a large inference market.
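
As a rough sketch of why decode skews toward memory bandwidth (the numbers below are assumptions for illustration, not measurements: batch size 1, every weight streamed from HBM once per generated token, roughly H100/H200-class bandwidth, a 70B-parameter model at one byte per weight):

    # Back-of-envelope decode throughput, memory-bound case (all figures assumed).
    hbm_bandwidth_gb_s = 3350   # ~H100/H200-class HBM bandwidth in GB/s (assumption)
    model_size_gb = 70          # 70B parameters at 1 byte each, e.g. FP8 (assumption)
    tokens_per_s = hbm_bandwidth_gb_s / model_size_gb
    print(f"~{tokens_per_s:.0f} tokens/s per GPU at batch size 1")
    # Compute (FLOPs) is nowhere near the limit in this regime, which is why
    # inference favors a product mix weighted toward memory capacity and bandwidth.

At larger batch sizes the weight reads are amortized across requests and the balance shifts back toward compute, so treat this strictly as the batch-1 decode corner.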

It isn’t the panacea some make it out to be, but there is obvious utility here to sell. The real argument is shifting towards the pricing.

SchemaLoad 8 hours ago | parent | prev [-]

How much of the current usage is productive work that's worth paying for vs personal usage / spam that would just drop off after usage charges come in? I imagine the flooding of YouTube and Instagram with slop videos would drop off if users had to pay fair prices to use the models.

The companies might also downgrade the quality of the models to make it more viable to provide as an ad supported service which would again reduce utilisation.

unsupp0rted 8 hours ago | parent | next [-]

For any "click here and type into a box" job for which you'd hire a low-skilled worker and give them an SOP to follow, you can have an LLM-ish tool do it.

And probably for the slightly more skilled email jobs that have infiltrated nearly all companies too.

Is that productive work? Well if people are getting paid, often a multiple of minimum wage, then it's productive-seeming enough.

greree 2 hours ago | parent [-]

Another bozo making fun of other job classes.

Why are there still customer service reps? Shouldn’t they all be gone by now due to this amazing technology?

Ah, tumbleweed.

bethekidyouwant 3 hours ago | parent | prev [-]

Who is generating videos for free?

stingraycharles 8 hours ago | parent | prev | next [-]

Exactly, the current spend on LLMs is based on extremely high expectations and the vendors operating at a loss. It’s very reasonable to assume that those expectations will not be met, and spending will slow down as well.

Nvidia’s valuation is based on the current trend continuing and even increasing, which I consider unlikely in the long term.

bigyabai 7 hours ago | parent [-]

> Nvidia’s valuation is based on the current trend continuing

People said this back when Folding@Home was dominated by Team Green years ago. Then again when GPUs sold out for the cryptocurrency boom, and now again that Nvidia is addressing the LLM demand.

Nvidia's valuation is backstopped by the fact that Russia, Ukraine, China and the United States are all tripping over themselves for the chance to deploy it operationally. If the world goes to war (which is an unfortunate likelihood) then Nvidia will be the only trillion-dollar defense empire since the DoD's Last Supper.

matthewdgreen 7 hours ago | parent [-]

China is restricting purchases of H200s. The strong likelihood is that they're doing this to promote their own domestic competitors. It may take a few years for those chips to catch up and enter full production, but it's hard to envision any "trillion dollar" Nvidia defense empire once that happens.

bigyabai 7 hours ago | parent [-]

It's very easy to envision. America needs chips, and Intel can't do most of this stuff.

zozbot234 7 hours ago | parent [-]

Intel makes GPUs.

bigyabai 6 hours ago | parent [-]

Intel's GPU designs make AMD look world-class by comparison. Outside of transcode applications, those Arc cards aren't putting up a fight.

selfhoster11 10 hours ago | parent | prev [-]

They already are paying off. The nature of LLMs means that they will require expensive, fast hardware that's a large capex.

kortilla 10 hours ago | parent | next [-]

They aren’t yet because the big providers that paid for all of this GPU capacity aren’t profitable yet.

They continually leapfrog each other and shift customers around, which indicates that the current capacity is already higher than what's required for what people actually pay for.

MrDarcy 9 hours ago | parent [-]

Google, Amazon, and Microsoft aren’t profitable?

notyourwork 9 hours ago | parent | next [-]

I assume the reference was that AI use cases are not profitable. Those companies are subsidizing it, and OpenAI/Grok are burning money.

lossyalgo 6 hours ago | parent | next [-]

Yeah, but OpenAI is adding ads this year for the free versions, which I'm guessing is most of their users. They are probably betting on taking a big slice of Google's advertising monopoly-pie (which is why Google is also now all-in on forcing Gemini opt-out on every product they own; they can see the writing on the wall).

onion2k 6 hours ago | parent | prev [-]

Google, Amazon, and Microsoft do a lot of things that aren't profitable in themselves. There is no reason to believe a company will kill a product line just because it makes a loss. There are plenty of other reasons to keep it running.

josefx 9 hours ago | parent | prev | next [-]

Aren't all Microsoft AI products OpenAI-based? OpenAI has always been burning money.

wolfram74 8 hours ago | parent | prev | next [-]

Do you think it's odd that you only listed companies with already-existing revenue streams, and not companies that started with, and only have, generative models as their product?

dangus 8 hours ago | parent | prev | next [-]

How many business units have Google and Microsoft shut down or ceased investment for being unprofitable?

I hear Meta is having massive VR division layoffs…who could have predicted?

Raw popularity does not guarantee sustainability. See: Vine, WeWork, MoviePass.

Forgeties79 9 hours ago | parent | prev [-]

Where? Who’s in the black?

lairv 12 hours ago | parent | prev | next [-]

NVIDIA stock tanked in 2025 when people learned that Google used TPUs to train Gemini, which everyone in the community has known since at least 2021. So I think it's very likely that NVIDIA stock could crash for non-rational reasons

edit: 2025* not 2024

readthenotes1 10 hours ago | parent | next [-]

It also tanked to ~$90 when Trump announced tariffs on all goods from Taiwan except semiconductors.

I don't know if that's non-rational, or if people can't be expected to read the second sentence of an announcement before panicking.

Loudergood 10 hours ago | parent | next [-]

The market is full of people trying to anticipate how other people are going to react and exploit that by getting there first. There's a layer aimed at forecasting what that layer is going to do as well.

It's guesswork all the way down.

Terr_ 5 hours ago | parent | next [-]

A bunch of "Greater Fool" motivation too.

https://en.wikipedia.org/wiki/Greater_fool_theory

recursive 10 hours ago | parent | prev | next [-]

Personally, I try to predict how others are going to predict that yet others will react.

svnt 8 hours ago | parent | next [-]

You jerk

nealabq 6 hours ago | parent [-]

Third-derivative pun.

Riposte: I knew you'd say that! Snap!

MrOrelliOReilly 8 hours ago | parent | prev [-]

And I just predict how you’ll predict

Nevermark 2 hours ago | parent [-]

So we have a closed instability/volatility amplification loop. Great: time for the straddle-with-fingers-crossed trade.

gpderetta 8 hours ago | parent | prev [-]

Keynesian beauty contest.

gertlex 9 hours ago | parent | prev | next [-]

This was also on top of claims (Jan 2025) that Deepseek showed that "we don't actually need as much GPU, thus NVidia is less needed"; at least it was my impression this was one of the (now silly-seeming) reasons NVDA dropped then.

mschuster91 9 hours ago | parent | prev [-]

> I don't know if that's non-rational, or if people can't be expected to read the second sentence of an announcement before panicking.

These days you have AI bots doing sentiment-based trading.

If you ask me... all these excesses are a clear sign of one thing: we need to drastically rein in the stonk markets. The markets should serve us, not the other way around.

Der_Einzige 9 hours ago | parent | prev [-]

Google did not use TPUs for literally every bit of compute that led to Gemini. GCP has millions of high end Nvidia GPUs and programming for them is an order of magnitude easier, even for googlers.

Any claim from Google that all of Gemini (including previous experiments) was trained entirely on TPUs is a lie. What they are truthfully saying is that the final training run was done entirely on TPUs. The market shouldn't react heavily to this, but instead should react positively to the fact that Google is now finally selling TPUs externally and their fab yields are better than expected.

djsjajah 9 hours ago | parent | next [-]

> including all previous experiments

How far back do you go? What about experiments into architecture features that didn’t make the cut? What about pre-transformer attention?

But more generally, why are you so sure that the team that built Gemini didn't exclusively use TPUs while they were developing it?

I think that one of the reasons Gemini caught up so quickly is that they have so much compute at a fraction of the price everyone else pays.

notyourwork 9 hours ago | parent | prev [-]

Why should it not react heavily? What's stopping this from being the start of a trend for Google and even Amazon?

mnky9800n 13 hours ago | parent | prev | next [-]

I really don't understand the argument that nvidia GPUs only work for 1-3 years. I am currently using A100s and H100s every day. Those aren't exactly new anymore.

mbrumlow 11 hours ago | parent | next [-]

It’s not that they don’t work. It’s how businesses handle hardware.

I worked at a few data centers on and off in my career. I got lots of hardware for free or on the cheap simply because it was considered "EOL" after about 3 years, often when the support contract with the vendor ends.

There are a few things to consider.

Aging hardware produces more errors, and those errors cost you, one way or another.

Rack space is limited. A perfectly fine machine that consumes 2x the power for half the output still carries a cost. It's often cheaper to replace a perfectly fine working system simply because the new one performs better per watt in the same space.

Lastly, there are tax implications in buying new hardware that often favor replacement.

fooker 11 hours ago | parent | next [-]

I'll be so happy to buy an EOL H100!

But no, there are none to be found. It's a 4-year-old, two-generations-old part at this point, and you can't buy one used for cheaper than new.

pixl97 10 hours ago | parent | next [-]

Well demand is so high currently that it's likely this cycle doesn't exist yet for fast cards.

For servers, I've seen the slightly used equipment sold in bulk to a bidder, who may have a single large client buy all of it.

Then, around the time the second cycle comes around, it's split up into lots and a bunch ends up at places like eBay.

lancekey 7 hours ago | parent [-]

Yea, looking at the 60-day moving average on computeprices.com, H100s have actually gone UP in cost recently, at least to rent.

A lot of demand out there for sure.

aswegs8 11 hours ago | parent | prev | next [-]

Not sure why this "GPUs obsolete after 3 years" gets thrown around all the time. Sounds completely nonsensical.

belval 10 hours ago | parent | next [-]

Especially since AWS still has p4 instances, which are 6-year-old A100s. Clearly even for hyperscalers these have a useful life longer than 3 years.

tuckerman 7 hours ago | parent | prev | next [-]

I agree that there is a lot of hyperbole thrown around here, and it's possible to keep using some hardware for a long time or to sell it and recover some cost, but my experience planning compute at large companies is that spending money on hardware and upgrading can often result in saving money long term.

Even assuming your compute demands stay fixed, it's possible that a future generation of accelerator will be sufficiently more power/cooling efficient for your workload that it's a positive return on investment to upgrade, more so when you take into account that you can start depreciating the hardware again.

If your compute demands aren't fixed you have to work around limited floor space/electricity/cooling capacity/network capacity/backup generators/etc and so moving to the next generation is required to meet demand without extremely expensive (and often slow) infrastructure projects.

zozbot234 7 hours ago | parent [-]

Sure, but I don't think most people here are objecting to the obvious "3 years is enough for enterprise GPUs to become totally obsolete for cutting-edge workloads" point. They're just objecting to the rather bizarre notion that the hardware itself might physically break in that timeframe. Now, it would be one thing if that notion was supported by actual reliability studies drawn from that same environment - like we see for the Backblaze HDD lifecycle analyses. But instead we're just getting these weird rumors.

bmurphy1976 10 hours ago | parent | prev [-]

It's because they run 24/7 in a challenging environment. They will start dying at some point and if you aren't replacing them you will have a big problem when they all die en masse at the same time.

These things are like cars, they don't last forever and break down with usage. Yes, they can last 7 years in your home computer when you run it 1% of the time. They won't last that long in a data center where they are running 90% of the time.

zozbot234 9 hours ago | parent | next [-]

A makeshift cryptomining rig is absolutely a "challenging environment" and most GPUs by far that went through that are just fine. The idea that the hardware might just die after 3 years' usage is bonkers.

Der_Einzige 9 hours ago | parent [-]

Crypto miners undervolt GPUs for efficiency, and in general crypto mining is extremely lightweight on GPUs compared to AI training or inference at scale.

Der_Einzige 9 hours ago | parent | prev [-]

With good enough cooling they can run indefinitely!!!!! The vast majority of failures happen either at the beginning due to defects or at the end due to cooling! The idea that hardware with no moving parts (except the HVAC) is somehow unreliable seems to come out of thin air!

SequoiaHope 8 hours ago | parent | prev [-]

There’s plenty on eBay? But at the end of your comment you say “a rate cheaper than new” so maybe you mean you’d love to buy a discounted one. But they do seem to be available used.

fooker 5 hours ago | parent [-]

> so maybe you mean you’d love to buy a discounted one

Yes. I'd expect 4 year old hardware used constantly in a datacenter to cost less than when it was new!

(And just in case you did not look carefully, most of the eBay listings are scams. The actual products pictured in those are A100 workstation GPUs.)

JMiao 10 hours ago | parent | prev [-]

Do you know how support contract lengths are determined? Seems like a path to force hardware refreshes with boilerplate failure data carried over from who knows when.

linkregister 13 hours ago | parent | prev | next [-]

The common claim raised in financial reports is that GPUs used in model training degrade thermally due to their high utilization, and ostensibly fail. I have heard anecdotal reports of GPUs used for cryptocurrency mining showing similar wear patterns.

I have not seen hard data, so this could be an oft-repeated but false claim.

Melatonic 12 hours ago | parent | next [-]

It's the opposite, actually - most GPUs used for mining are run at a consistent temperature and load, which is good for long-term wear. Peaky loads where the GPU goes from cold to hot and back lead to more degradation because of changes in thermal expansion. This has been known for some time now.

Yizahi 12 hours ago | parent | next [-]

That is a commonly repeated idea, but it doesn't take into account the countless token farms that are smaller than a datacenter - anything from a single motherboard with 8 cards to a small shed full of rigs - which tend to disregard common engineering practices and run hardware into the ground to maximize output until the next police raid or difficulty bump. There are plenty of photos on the internet of crappy rigs like that, and no one can guarantee where any given GPU came from.

Another commonly forgotten issue is that many electrical components are rated by hours of operation, and cheaper boards tend to have components with smaller margins. That rated time is actually a curve, where the rated hours decrease with higher temperature. There have been instances of batches of cards failing due to failing MOSFETs, for example.

Melatonic 9 hours ago | parent | next [-]

While I'm sure there are small amateur setups done poorly that push cards to their limits, this seems like a rarer and less efficient use. GPUs (even used) are expensive, and running them flat out would mean large costs and regular replacement, not to mention the increased cost of cooling and power.

Not sure I understand the police raid mentality - why are the police raiding amateur crypto mining setups?

I can totally see cards used by casual amateurs being very worn / used though - especially your example of single-mobo miners who were likely also using the card for gaming and other tasks.

I would imagine that anyone purposely running hardware into the ground would be running cheaper / more efficient ASICs rather than expensive Nvidia GPUs, since they are much easier and cheaper to replace. I would still be surprised, however, if most were not prioritising temps and cooling.

coryrc 11 hours ago | parent | prev | next [-]

Specifically, we expect a halving of lifetime per 10K increase in temperature.
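
As a sketch of that rule of thumb (a simple halving-per-10-K model; the baseline temperature and lifetime below are made-up illustrative numbers, not vendor figures):

    # Rule of thumb: component lifetime halves for every 10 K rise in temperature.
    def lifetime_years(temp_c, base_temp_c=60, base_lifetime_years=10):
        """Estimated lifetime at temp_c given an assumed baseline (illustrative only)."""
        return base_lifetime_years * 0.5 ** ((temp_c - base_temp_c) / 10)

    for t in (60, 70, 80, 90):
        print(f"{t} C -> {lifetime_years(t):.1f} years")
    # 60 C -> 10.0, 70 C -> 5.0, 80 C -> 2.5, 90 C -> 1.2 (illustrative)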

whaleofatw2022 11 hours ago | parent | prev | next [-]

Let's also not forget the set of miners who either overclock or don't really care about the long term in how they set up their thermals.

belval 10 hours ago | parent [-]

Miners usually don't overclock though. If anything underclocking is the best way to improve your ROI because it significantly reduces the power consumption while retaining most of the hashrate.

Melatonic 9 hours ago | parent | next [-]

Exactly - more specifically undervolting. You want the minimum volts going to the card with it still performing decently.

Even in amateur setups the amount of power used is a huge factor (because of the huge draw from the cards themselves and AC units to cool the room) so minimising heat is key.

From what I remember most cards (even CPUs as well) hit peak efficiency when undervolted and hitting somewhere around 70-80% max load (this also depends on cooling setup). First thing to wear out would probably be the fan / cooler itself (repasting occasionally would of course help with this as thermal paste dries out with both time and heat)

bluGill 7 hours ago | parent [-]

The only amateurs I know doing this are trying to heat their garage for free. So long as the heat gain is paid for, they can afford to heat an otherwise unheated building.

zozbot234 9 hours ago | parent | prev [-]

Wouldn't the exact same considerations apply to AI training/inference shops, seeing as gigawatts are usually the key constraint?

WalterBright 7 hours ago | parent | prev [-]

Why would police raid a shed housing a compute center?

mbesto 11 hours ago | parent | prev [-]

Source?

zozbot234 12 hours ago | parent | prev | next [-]

> I have heard anecdotal reports of GPUs used for cryptocurrency mining having similar wear patterns.

If this were anywhere close to a common failure mode, I'm pretty sure we'd know that already given how crypto mining GPUs were usually run to the max in makeshift settings with woefully inadequate cooling and environmental control. The overwhelming anecdotal evidence from people who have bought them is that even a "worn" crypto GPU is absolutely fine.

munk-a 12 hours ago | parent | prev [-]

I can't confirm that fact - but it's important to acknowledge that consumer usage is very different from the high continuous utilization in mining and training. It is credible that the wear on cards under such extreme usage is as high as reported: consumers might push their cards hard maybe 5% of waking hours, while these cards run near 100%, so a roughly 3x reduction in lifespan is a believable scale for the endurance loss.

denimnerd42 12 hours ago | parent | prev | next [-]

1-3 years is too short, but they aren't making new A100s. There are 8 in a server, and when one goes bad what do you do? You won't be able to renew a support contract. If you want to DIY, eventually you have to start consolidating pick-and-pulls. Maybe the vendors will buy them back from people who want to upgrade and resell them. This is the issue we are seeing with A100s, and we are trying to see what our vendor will offer for support.

iancmceachern 13 hours ago | parent | prev | next [-]

They're no longer energy competitive, i.e. the power they draw per unit of compute is worse than what's available now.

It's like if your taxi company bought taxis that were more fuel efficient every year.

bob1029 13 hours ago | parent | next [-]

Margins are typically not so razor thin that you cannot operate with technology from one generation ago. 15 vs 17 mpg is going to add up over time, but for a taxi company it's probably not a lethal situation to be in.

SchemaLoad 8 hours ago | parent | next [-]

At least with crypto mining this was the case. Hardware from 6 months ago is useless ewaste because the new generation is more power efficient. All depends on how expensive the hardware is vs the cost of power.

iancmceachern 10 hours ago | parent | prev [-]

Tell that to the airline industry

bob1029 10 hours ago | parent | next [-]

I don't think the airline industry is a great example from an IT perspective, but I agree with regard to the aircraft.

hibikir 8 hours ago | parent | prev [-]

And yet airlines aren't running planes and engines all from 2023 or later. See the MD-11 that crashed in Louisville: nobody has made a new MD-11 in over 20 years. Planes move to less competitive routes, change carriers, and eventually might even stop carrying people and switch to cargo, but the plane itself doesn't drop to zero value when the new one comes out. An airline will want to replace its planes, but a new plane isn't fully amortized in a year or three: it still has value for quite a while.

mikkupikku 13 hours ago | parent | prev | next [-]

If a taxi company did that every year, they'd be losing a lot of money. Of course new cars and cards are cheaper to operate than old ones, but is that difference enough to offset buying a new one every one to three years?

gruez 12 hours ago | parent | next [-]

>If a taxi company did that every year, they'd be losing a lot of money. Of course new cars and cards are cheaper to operate than old ones, but is that difference enough to offset buying a new one every one to three years?

That's where the analogy breaks. There are massive efficiency gains from new process nodes, which new GPUs use. Efficiency improvements for cars are glacial, aside from "breakthroughs" like hybrid/EV cars.

dylan604 12 hours ago | parent | prev | next [-]

>offset buying a new one every one to three years?

Isn't that precisely how leasing works? Also, don't companies prefer not to own hardware for tax purposes? I've worked for several places where they leased compute equipment with upgrades coming at the end of each lease.

mikkupikku 11 hours ago | parent | next [-]

Who wants to buy GPUs that were redlined for three years in a data center? Maybe there's a market for those, but most people already seem wary of lightly used GPUs from other consumers, let alone GPUs that were burning in a crypto farm or AI data center for years.

dylan604 9 hours ago | parent | next [-]

> Who wants to buy

Who cares? That's the beauty of the lease: once it's over, the old and busted gets replaced with new and shiny. What the leasing company does with it is up to them. It becomes one of those YP-not-an-MP situations with deprecated equipment.

bluGill 7 hours ago | parent [-]

The leasing company cares - the lease terms depend on the answer. That is why I can lease a car for 3 years for the same payment as a 6-year loan (more or less): the lease company expects someone will want it afterwards. If there is no market for it afterwards, they will still lease it, but the cost goes up.

coryrc 11 hours ago | parent | prev | next [-]

Depends on the price, of course. I'm wary of paying 50% of new for something that's been run hard for 3 years. It seems an NVIDIA H100 is going for $20k+ on eBay. I'm not taking that risk.

pixl97 9 hours ago | parent | prev [-]

Depending on the discount, a lot of people.

gowld 12 hours ago | parent | prev [-]

That works either because someone wants to buy the old hardware from the manufacturer/lessor, or because the hardware is EOL in 3 years but it's easier to let the lessor deal with recycling / valuable parts recovery.

wordpad 12 hours ago | parent | prev | next [-]

If your competitor refreshes their cards and you dont, they will win on margin.

You kind of have to.

lazide 12 hours ago | parent [-]

Not necessarily if you count capital costs vs operating costs/margins.

Replacing cars every 3 years vs a couple % in efficiency is not an obvious trade off. Especially if you can do it in 5 years instead of 3.

iancmceachern 7 hours ago | parent | next [-]

You highlight the exact dilemma.

Company A has taxis that are 5 percent less efficient and for the reasons you stated doesn't want to upgrade.

Company B just bought new taxis, and they are undercutting company A by 5 percent while paying their drivers the same.

Company A is no longer competitive.

Dylan16807 6 hours ago | parent [-]

The debt company B took on to buy those new taxis means they're no longer competitive either if they undercut by 5%.

The scenario doesn't add up.

iancmceachern 5 hours ago | parent [-]

But Company A also took on debt for theirs, so that's a wash. You assume only one of them has debt to service?

Dylan16807 5 hours ago | parent [-]

Both companies bought a set of taxis in the past. Presumably at the same time if we want this comparison to be easy to understand.

If company A still has debt from that, company B has that much debt plus more debt from buying a new set of taxis.

Refreshing your equipment more often means that you're spending more per year on equipment. If you do it too often, then even if the new equipment is better you lose money overall.

If company B wants to undercut company A, their advantage from better equipment has to overcome the cost of switching.
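
A toy version of that break-even, just to make the arithmetic concrete (every number below is invented for illustration):

    # Annualized fleet cost under two refresh cycles (toy numbers, all assumed).
    capex = 100_000           # cost of one fleet refresh
    power_cost_old = 40_000   # yearly power bill on the older gear
    efficiency_gain = 0.05    # new gear is 5% cheaper to run

    def yearly_cost(refresh_years, power_cost):
        return capex / refresh_years + power_cost

    five_year = yearly_cost(5, power_cost_old)
    three_year = yearly_cost(3, power_cost_old * (1 - efficiency_gain))
    print(f"5-year cycle: ${five_year:,.0f}/yr   3-year cycle: ${three_year:,.0f}/yr")
    # 5-year cycle: $60,000/yr   3-year cycle: $71,333/yr
    # With these made-up numbers the faster refresh loses: a 5% efficiency edge
    # doesn't cover the extra annualized capex, i.e. the advantage has to
    # overcome the cost of switching.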

iancmceachern 3 hours ago | parent [-]

You are assuming something again.

They both refresh their equipment at the same rate.

Dylan16807 2 hours ago | parent [-]

> They both refresh their equipment at the same rate.

I wish you'd said that upfront. Especially because the comment you replied to was talking about replacing at different rates.

So your version, if company A and B are refreshing at the same rate, then that means six months before B's refresh company A had the newer taxis. You implied they were charging similar amounts at that point, so company A was making bigger profits, and had been making bigger profits for a significant time. So when company B is able to cut prices 5%, company A can survive just fine. They don't need to rush into a premature upgrade that costs a ton of money, they can upgrade on their normal schedule.

TL;DR: six months ago company B was "no longer competitive" and they survived. The companies are taking turns having the best tech. It's fine.

zozbot234 12 hours ago | parent | prev [-]

You can sell the old, less efficient GPUs to folks who will be running them with markedly lower duty cycles (so, less emphasis on direct operational costs), e.g. for on-prem inference or even just typical workstation/consumer use. It ends up being a win-win trade.

lazide 10 hours ago | parent [-]

Then you’re dealing with a lot of labor to do the switches (and arrange sales of used equipment), plus capital float costs while you do it.

It can make sense at a certain scale, but it’s a non trivial amount of cost and effort for potentially marginal returns.

pixl97 9 hours ago | parent [-]

Building a new data center and getting power takes years if you want to double your capacity. Swapping out a rack for one that is twice as fast takes very little time in comparison.

lazide 9 hours ago | parent [-]

Huh? What do your statements have to do with what I'm saying?

I'm just pointing out that replacing at 5 years is likely cheaper than at 3 years.

pixl97 8 hours ago | parent [-]

Depends on the rate of improvement of the hardware. If your data center is full and fully booked, and hardware is doubling in speed every year, it's cheaper to switch it out every couple of years.

philwelch 12 hours ago | parent | prev [-]

If there was a new taxi every other year that could handle twice as many fares, they might. That’s not how taxis work but that is how chips work.

echelon 13 hours ago | parent | prev [-]

Nvidia has plenty of time and money to adjust. They're already buying out upstart competitors to their throne.

It's not like the CUDA advantage is going anywhere overnight, either.

Also, if Nvidia invests in its users and in the infrastructure layouts, it gets to see upside no matter what happens.

mbesto 12 hours ago | parent | prev | next [-]

Not saying you're wrong. A few things to consider:

(1) We simply don't know what the useful life is going to be, because AI-focused GPUs used for training and inference are so new.

(2) Warranties and service. Most enterprise hardware has service contracts tied to purchases. I haven't seen anything publicly disclosed about what these contracts look like, but the speculation is that they are much more aggressive (3 years or less) than typical enterprise hardware contracts (Dell, HP, etc.). If it gets past those contracts the extended support contracts can typically get really pricey.

(3) Power efficiency. If new GPUs are more power efficient, the energy savings alone could justify upgrades.

epolanski 11 hours ago | parent | next [-]

Nvidia is moving to a 1-year release cycle for data center parts, and in Jensen's words, once a new generation is released you lose money by staying on the older hardware. It no longer makes financial sense to run it.

pixl97 9 hours ago | parent [-]

That will come back to bite them in the ass if money leaves the AI race.

pvab3 7 hours ago | parent | prev [-]

Based on my napkin math, an H200 needs to run for about 4 years straight at maximum power (10.2 kW) to consume its own $35k price in energy (at 10 cents per kWh).
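
Spelling that napkin math out, taking the quoted figures as given (10.2 kW is closer to a full multi-GPU server than a single card, but the arithmetic works the same way):

    # Napkin math from the comment above, quoted figures taken at face value.
    power_kw = 10.2        # quoted maximum power draw
    price_usd = 35_000     # quoted H200 price
    usd_per_kwh = 0.10     # quoted electricity price

    yearly_energy_cost = power_kw * 24 * 365 * usd_per_kwh
    print(f"${yearly_energy_cost:,.0f} per year in energy")        # ~$8,935
    print(f"{price_usd / yearly_energy_cost:.1f} years to match")  # ~3.9 years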

swalsh 10 hours ago | parent | prev | next [-]

If power is the bottleneck, it may make business sense to rotate to a GPU that better utilizes the same power if the newer generation gives you a significant advantage.

legitster 12 hours ago | parent | prev | next [-]

From an accounting standpoint, it probably makes sense to have their depreciation be 3 years. But yeah, my understanding is that either they have long service lives, or the customers sell them back to the distributor so they can buy the latest and greatest. (The distributor would sell them as refurbished)

savorypiano 12 hours ago | parent | prev | next [-]

You aren't trying to support ad-based demand like OpenAI is.

linuxftw 12 hours ago | parent | prev [-]

I think the story is less about the GPUs themselves, and more about the interconnects for building massive GPU clusters. Nvidia just announced a massive switch for linking GPUs inside a rack. So the next couple of generations of GPU clusters will be capable of things that were previously impossible or impractical.

This doesn't mean much for inference, but for training, it is going to be huge.

nospice 13 hours ago | parent | prev | next [-]

> My 30k ft view is that the stock will inevitably slide as AI datacenter spending goes down.

Their stock trajectory started with one boom (cryptocurrencies) and then seamlessly progressed to another (AI). You're basically looking at a decade of "number goes up". So yeah, it will probably come down eventually (or the inflation will catch up), but it's a poor argument for betting against them right now.

Meanwhile, the investors who were "wrong" in anticipating a cryptocurrency revolution and who bought NVDA don't have much to complain about today.

mysteria 12 hours ago | parent | next [-]

Personally I wonder whether, even if the LLM hype dies down, we'll get a new boom in AI for robotics and the "digital twin" technology Nvidia has been hyping up to train robots. That's going to need GPUs for both the ML component and the 3D visualization. Robots haven't yet had their SD 1.1 or GPT-3 moment; in LLM terms we're still in the early days of Pythia, GPT-J, AI Dungeon, etc.

iwontberude 10 hours ago | parent [-]

Exactly, they will pivot back to AR/VR

mysteria 10 hours ago | parent [-]

That's going to tank the stock price, though, since it's a much smaller market than AI - it's not going to kill the company, but still. Hence why I'm talking about something like robotics, which has a lot of opportunity to grow and make use of all those chips and datacenters they're building.

Now there's one thing with AR/VR that might need this kind of infrastructure, and that's basically AI-driven games or Holodeck-like stuff - having the frames be generated rather than modeled and rendered traditionally.

bigyabai 10 hours ago | parent [-]

Nvidia's not your average bear, they can walk and chew bubblegum at the same time. CUDA was developed off money made from GeForce products, and now RTX products are being subsidized by the money made on CUDA compute. If an enormous demand for efficient raster compute arises, Nvidia doesn't have to pivot much further than increasing their GPU supply.

Robotics is a bit of a "flying car" application that gets people to think outside the box. Right now, both Russia and Ukraine are using Nvidia hardware in drones and cruise missiles and C2 as well. The United States will join them if a peer conflict breaks out, and if push comes to shove then Europe will too. This is the kind of volatility that crazy people love to go long on.

munk-a 13 hours ago | parent | prev | next [-]

That's the rub - it's clearly overvalued and will readjust... the question is when. If you can figure out when precisely then you've won the lottery, for everyone else it's a game of chicken where for "a while" money that you put into it will have a good return. Everyone would love if that lasted forever so there is a strong momentum preventing that market correction.

jama211 12 hours ago | parent [-]

It was overvalued when crypto was happening too, but another boom took its place. Of course, lightning rarely strikes twice and all that, but it seems to show that "overvalued" doesn't mean the price is guaranteed to go down. Predicting the future is hard.

pixl97 9 hours ago | parent | next [-]

As they say, the market can remain irrational far longer than you can remain solvent.

jama211 2 hours ago | parent [-]

Hah! Indeed

sidrag22 11 hours ago | parent | prev [-]

If there was anything I was going to bet against between 2019 and now, it was Nvidia... and wow, it feels wild how far in the opposite direction it went.

I do wonder what people back then would have thought the reasoning would be for it to increase in value this much; prolly would just have assumed it was still crypto related.

jama211 2 hours ago | parent [-]

It’s not impossible they could’ve seen AI investment coming but it would’ve been very hard

ericmcer 12 hours ago | parent | prev [-]

Crypto & AI can both be linked to part of a broader trend though, that we need processors capable of running compute on massive sets of data quickly. I don't think that will ever go down, whether some new tech emerges or we just continue shoveling LLMs into everything. Imagine the compute needed to allow every person on earth to run a couple million tokens through a model like Anthropic Opus every day.

pixl97 9 hours ago | parent [-]

Agreed, single thread performance increases are dead and things are moving to massively parallel processing.

JakeSc 8 hours ago | parent | prev | next [-]

Agree on looking at the company-behind-the-numbers. Though presumably you're aware of the Efficient Market Hypothesis. Shouldn't "slowed down datacenter growth" be baked into the stock price already?

If I'm understanding your prediction correctly, you're asserting that the market thinks datacenter spending will continue at this pace indefinitely, and you yourself uniquely believe that to be untrue. Right? I wonder why the market (including hedge fund analysis _much_ more sophisticated than ours) should be so misinformed.

Presumably the market knows that the whole earth can't be covered in datacenters, and thus has baked that into the price, no?

matthewdgreen 7 hours ago | parent | next [-]

The EMH does not mean that markets are free of over-investment and asset bubbles, followed by crashes.

testdelacc1 7 hours ago | parent | prev [-]

I saw a $100 bill on the ground. I nearly picked it up before I stopped myself. I realised that if it was a genuine currency note, the Efficient Market would have picked it up already.

AnotherGoodName 12 hours ago | parent | prev | next [-]

I'll also point out there were insane takes a few years ago before nVidia's run up based on similar technical analysis and very limited scope fundamental analysis.

Technical analysis fails completely when there's an underlying shift that moves the line. You can't look at the past and say "nvidia is clearly overvalued at $10 because it was $3 for years earlier" when they suddenly and repeatedly 10x earnings over many quarters.

I couldn't get through to the idiots on reddit.com/r/stocks about this when there was non-stop negativity on Nvidia based on technical analysis and very narrow-scoped fundamental analysis. They showed a 12x gain in quarterly earnings at the time, but the PE (which looks at past quarters only) was 260x due to this sudden change in earnings, and pretty much all of Reddit couldn't get past this.
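
A stripped-down illustration of that trailing-P/E effect (the price and EPS numbers are invented; only the 12x step is taken from the comment):

    # Trailing vs. forward P/E when quarterly earnings suddenly 12x (toy numbers).
    price = 40.0                 # share price, arbitrary
    old_q_eps = 0.03             # quarterly EPS before the jump, assumed
    new_q_eps = old_q_eps * 12   # the 12x jump in the latest quarter

    trailing_eps = 3 * old_q_eps + new_q_eps   # last four quarters: three old, one new
    forward_eps = 4 * new_q_eps                # four quarters at the new run rate

    print(f"trailing P/E: {price / trailing_eps:.0f}")   # ~89
    print(f"forward  P/E: {price / forward_eps:.0f}")    # ~28
    # A backward-looking multiple stays huge for a few quarters after a step change
    # in earnings, even when the forward multiple is ordinary.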

I did well on this yet there were endless posts of "Nvidia is the easiest short ever" when it was ~$40 pre-split.

KeplerBoy 13 hours ago | parent | prev | next [-]

Also there's no way Nvidia's market share isn't shrinking. Especially in inference.

gpapilion 13 hours ago | parent | next [-]

The large API/token providers and large consumers are all investing in their own hardware. So NVIDIA is in an interesting position where the market is growing and it is taking the lion's share of enterprise, but it is shrinking on the hyperscaler side (Google is a good example as it shifts more and more compute to TPUs). So they have a shrinking market share, but it's not super visible.

zozbot234 12 hours ago | parent [-]

> The large api/token providers, and large consumers are all investing in their own hardware.

Which is absolutely the right move when your latest datacenter's power bill is literally measured in gigawatts. Power-efficient training/inference hardware simply does not look like a GPU at a hardware design level (though admittedly, it looks even less like an ordinary CPU), it's more like something that should run dog slow wrt. max design frequency but then more than make up for that with extreme throughput per watt/low energy expense per elementary operation.

The whole sector of "neuromorphic" hardware design has long shown the broad feasibility of this (and TPUs are already a partial step in that direction), so it looks like this should be an obvious response to current trends in power and cooling demands for big AI workloads.

dogma1138 13 hours ago | parent | prev | next [-]

Market share can shrink but if the TAM is growing you can still grow.

blackoil 13 hours ago | parent | prev [-]

But will the whole pie grow or shrink?

richardw 10 hours ago | parent | prev | next [-]

I'm sad about Groq going to them, because the market needs the competition. But ASIC inference seems to require a simpler design than training does, so it's easier for multiple companies to enter. It seems inevitable that competition emerges. And, e.g., a Chinese company will not be sold to Nvidia.

What’s wrong with this logic? Any insiders willing to weigh in?

bigyabai 10 hours ago | parent [-]

I'm not an insider, but ASICs come with their own suite of issues and might be obsolete if a different architecture becomes popular. They'll have a much shorter lifespan than Nvidia hardware in all likelihood, and will probably struggle to find fab capacity that puts them on equal footing in performance. For example, look at the GPU shortage that hit crypto despite hundreds of ASIC designs existing.

The industry badly needs to cooperate on an actual competitor to CUDA, and unfortunately they're more hostile to each other today than they were 10 years ago.

zozbot234 6 hours ago | parent [-]

You can build ASICs to be a lot more energy efficient than current GPUs, especially if your power budget is heavily bound by raw compute as opposed to data movement bandwidth. The tradeoff is much higher latency for any given compute throughput, but for workloads such as training or even some kinds of "deep thinking inference" you don't care much about that.

cortesoft 11 hours ago | parent | prev | next [-]

> The problem I have with this argument is that it's simply unsustainable to be spending that much every 2-3 years

Isn’t this entirely dependent on the economic value of the AI workloads? It all depends on whether AI work is more valuable than that cost. I can easily see arguments why it won’t be that valuable, but if it is, then that cost will be sustainable.

alfalfasprout 11 hours ago | parent [-]

100% this. all of this spending is predicated on a stratospheric ROI on AI investments at the proposed investment levels. If that doesn't pan out, we'll see a lot of people left holding the cards including chip fabs, designers like Nvidia, and of course anyone that ponied up for that much compute.

jiggawatts 7 hours ago | parent [-]

Chip fabs will be fine. The demand for high end processors will remain because of the likes of Apple and AMD.

baxtr 13 hours ago | parent | prev | next [-]

I'm no AI fanboy at all. I think there won't be AGI anytime soon.

However, it’s beyond my comprehension how anyone would think that we will see a decline in demand growth for compute.

AI will conquer the world like software or the smartphone did. It’ll get implemented everywhere, more people will use it. We’re super early in the penetration so far.

Ekaros 13 hours ago | parent | next [-]

At this point computation is in essence a commodity, and commodities have demand cycles. If other economic factors slow down or companies go out of business, they stop using compute or start fewer new products that use compute. So it is entirely realistic to me that demand for compute might go down, or that we are simply over-provisioning compute in the short or medium term.

galaxyLogic 12 hours ago | parent | next [-]

I wonder, is the quality of AI answers going up over time or not? Last weekend I spent a lot of time with Perplexity trying to understand why my SeqTrack device didn't do what I wanted it to do, and it seems Perplexity had a wrong idea of how the buttons on the device are laid out, so it gave me wrong or confusing answers. I spent literally hours trying to feed it different prompts to get an answer that would solve my problem.

If it had given me the right, easy-to-understand answer right away I would have spent 2 minutes of both MY time and ITS time. My point is that if AI improves, we will need less of it to get our questions answered. Or perhaps AI usage goes up as its answers improve?

lorddumpy 8 hours ago | parent | next [-]

With vision models (SOTA models like Gemini and ChatGPT can do this), you can take a picture/screenshot of the button layout, upload it, and have it work from that. Feeding it current documentation (eg a pdf of a user manual) helps too.

Referencing outdated documentation or straight up hallucinating answers is still an issue. It is getting better with each model release though

zozbot234 12 hours ago | parent | prev | next [-]

If the AI hasn't specifically learned about SeqTracks as part of its training it's not going to give you useful answers. AI is not a crystal ball.

SchemaLoad 8 hours ago | parent [-]

The problem is its inability to say "I don't know". As soon as you reach the limits of the model's knowledge it will readily start fabricating answers.

galaxyLogic 7 hours ago | parent | next [-]

Both true. Perplexity knows a lot about SeqTrack; I assume it has read the UserGuide. But it gets some things wrong - especially, it seems, things it would have to understand by looking at the pictures.

I'm just wondering if there's a clear path for it to improve, and on what timetable. The fact that it does not tell you when it is "unsure" of course makes things worse for users. (It is never unsure.)

CamperBob2 6 hours ago | parent | prev [-]

That's nowhere near as true as it was as recently as a year ago.

jama211 12 hours ago | parent | prev [-]

Always worth trying a different model, especially if you're using a free one. I wouldn't take one data point too seriously either.

The data very strongly shows that the quality of AI answers is rapidly improving. If you want a good example, check out the Sixty Symbols video by Brady Haran where they revisited getting AI to answer a quantum physics exam after trying the same thing 3 years ago. The improvement is IMMENSE and undeniable.

wordpad 12 hours ago | parent | prev [-]

So...like Cisco during dot com bust?

Ekaros 12 hours ago | parent [-]

I was thinking more of oil, copper and now silver. Prices for all of them follow demand, and all have had varying prices at different times. Compute should not really be that different.

But yes, Cisco's value dropped when there was no longer the same amount being spent on networking gear. Nvidia's value will drop when there is no longer the same amount being spent on theirs.

Other impacted players in an actual economic downturn could be Amazon with AWS and MS with Azure, and even more so those now betting on AI computing. At least general purpose computing can run web servers.

Ronsenshi 12 hours ago | parent | prev | next [-]

What if its penetration ends up being on the same level as modern crypto? The average person doesn't seem to particularly care about meme coins or bitcoin - it is not being actively used in day-to-day settings, and there are no signs of that status improving.

That doesn't mean crypto isn't being used, of course. Plenty of people do use things like USDT, gamble on bitcoin or try to scam people with new meme coins, but this is far from what crypto enthusiasts and NFT moguls promised us in their feverish posts back in the mid-2010s.

So imagine that AI is here to stay, but the absolutely unhinged hype train will slow down and we will settle in some kind of equilibrium of practical use.

infecto 12 hours ago | parent [-]

I have still been unable to see how folks connect AI to crypto. Crypto never connected with real use cases; there are some edge cases and people do use it, but there is no core use.

AI is different, and businesses are already using it a lot. Of course there is hype - it's not doing all the things the talking heads said - but that doesn't mean immense value isn't being generated.

Ronsenshi 12 hours ago | parent [-]

It's an analogy, it doesn't have to map 1:1 to AI. The point is that current situation around AI looks kind of similar to the situation and level of hype around Crypto when it was still growing: all the "ledger" startups, promises of decentralization, NFTs in video games and so on. We are somewhere around that point when it comes to AI.

infecto 5 hours ago | parent | next [-]

No, it's an absolutely ridiculous comparison that people continue to make even though AI has far surpassed the usefulness of crypto, and at an alarming rate of speed. AI has unlocked so many projects my team would never have tackled before.

lorddumpy 8 hours ago | parent | prev [-]

I agree about all the startups, but AI is already much more useful in everyday tasks than crypto.

E.g., a chatbot assistant is much more tangible to the regular Joe than blockchain technology.

marricks 13 hours ago | parent | prev [-]

> I no AI fanboy at all.

While thinking computers will replace human brains soon is rabid fanaticism, this statement...

> AI will conquer the world like software or the smartphone did.

Also displays a healthy amount of fanaticism.

jwoods19 12 hours ago | parent | next [-]

Even suggesting that computers will replace human brains raises a moral and ethical question. If the computer is just as smart as a person, then we may need to consider that the computer has rights.

As far as AI conquering the world goes, it needs a "killer app". I don't think we'll really see that until AR glasses that happen to include AI. If it can have context about your day, take action on your behalf, and have the same battery life as a smartphone...

xenospn 11 hours ago | parent | prev [-]

I don’t see this as fanaticism at all. No one could have predicted, in 2007, a billion people mindlessly scrolling TikTok. This is going to happen again, only 10x: faster and more addictive, with content generated on the fly to be so addictive you won’t be able to look away.

Yossarrian22 9 hours ago | parent [-]

Vine was around then

m12k 8 hours ago | parent | prev | next [-]

I think the way to think about the AI bubble is that we're somewhere in 1997-99 right now, heading toward the dotcom crash. The dotcom crash didn't kill the web; it kept growing in the decades that followed, influencing society more and more. But the era in which tons of investment was uncritically thrown at anything to do with the web ended with a bang.

When the AI bubble bursts, it won't stop the development of AI as a technology, or its impact on society. But it will end the era of uncritically throwing investments at anyone who works "AI" into their pitch deck. And so too will it end the era of Nvidia selling pickaxes to the miners, reaching soaring heights of profitability borne on the wings of pretty much all the investment capital in the world at the moment.

enos_feedler 8 hours ago | parent [-]

Bubble or not, it’s simply strange to me that people confidently put a timeline on it. Naming the phases of the bubble and calling when they will collapse just seems counterintuitive to what a bubble is. Brad Gerstner was the first “influencer” I heard making these claims of a bubble timeline. It just seems downright absurd.

WalterBright 7 hours ago | parent | prev | next [-]

> technical analysis of the stock

AKA pictures in clouds

throwaway85825 8 hours ago | parent | prev | next [-]

It's not flat growth that's currently priced in, but continuing high growth, which is impossible.
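
As a rough back-of-the-envelope check (the starting revenue and growth rate below are placeholder assumptions, not actual figures):

```python
# Purely illustrative compounding check -- placeholder numbers, not actual NVIDIA figures.
revenue = 150e9   # assumed starting annual revenue, $150B
growth = 0.40     # assumed sustained yearly growth rate "priced in"
for year in range(1, 11):
    revenue *= 1 + growth
print(f"after 10 years: ${revenue / 1e12:.1f}T per year")
# ~$4.3T/yr -- several times the size of today's entire semiconductor market.
```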

kqr 11 hours ago | parent | prev | next [-]

Fundamental analysis is great! But I have trouble answering concrete questions of probability with it.

How do you use fundamental analysis to assign a probability to Nvidia closing under $100 this year, and what probability do you assign to that outcome?

I'd love to hear your reasoning around specifics to get better at it.

djeastm 9 hours ago | parent | next [-]

I think the idea of fundamental analysis is that you focus on return on equity and see whether that valuation is appreciably more than the current price (as opposed to assigning a probability).
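
Roughly, the exercise looks something like this sketch (here a Gordon-growth style earnings capitalization rather than a full ROE model; every number is a placeholder assumption, not a real figure for Nvidia or anyone else):

```python
# Toy fundamental-analysis check: capitalize earnings as a growing perpetuity
# and compare the result to the market price. All inputs are placeholder assumptions.
earnings = 80e9          # assumed annual net income
growth = 0.05            # assumed long-run earnings growth
required_return = 0.10   # assumed investor hurdle rate
market_cap = 4.5e12      # assumed current market capitalization

fair_value = earnings * (1 + growth) / (required_return - growth)
print(f"estimated fair value: ${fair_value / 1e12:.2f}T")
print(f"market cap:           ${market_cap / 1e12:.2f}T")
print("looks expensive" if market_cap > fair_value else "looks cheap")
```

The output is a cheap/expensive judgment, not a probability, which is why it doesn't directly answer the "close under $100" question.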

esafak 10 hours ago | parent | prev | next [-]

Don't you need a model for how people will react to the fundamentals? People set the price.

kqr 9 hours ago | parent [-]

Possibly? I don't know -- hence the question!

GP was presenting fundamental analysis as an alternative to the article's method for answering the question, but then never answered the question.

This is a confusion I have around fundamental analysis. Some people appear to do it very well (Buffett?), but most of its proponents only use it to ramble about possibilities without making any forecasts specific enough to be verifiable.

I'm curious about that gap.

8 hours ago | parent | prev [-]
[deleted]
TacticalCoder 6 hours ago | parent | prev | next [-]

> This article goes more into the technical analysis of the stock rather than the underlying business fundamentals that would lead to a stock dump. My 30k ft view is that the stock will inevitably slide as AI

Actually "technical analysis" (TA) has a very specific meaning in trading: TA is using past prices, volume of trading and price movements to, hopefully, give probabilities about future price moves.

https://en.wikipedia.org/wiki/Technical_analysis
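
To make the distinction concrete, this is roughly the kind of thing TA works with (a minimal moving-average crossover sketch; the closing prices are made up for illustration, not real quotes):

```python
# Hypothetical daily closing prices, purely illustrative -- not real NVDA data.
closes = [100, 102, 101, 105, 107, 106, 110, 108, 112, 115, 114, 118]

def sma(series, window):
    """Trailing simple moving average."""
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

fast = sma(closes, 3)   # short-window average reacts quickly
slow = sma(closes, 5)   # long-window average reacts slowly
offset = len(fast) - len(slow)   # align both series on the same trading days
for i, (f, s) in enumerate(zip(fast[offset:], slow)):
    day = i + 5                  # 1-based day of the last close in the slow window
    signal = "fast above slow (bullish)" if f > s else "fast below slow (bearish)"
    print(f"day {day}: fast={f:.1f} slow={s:.1f} -> {signal}")
```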

But TFA doesn't do that at all: it goes in detail into one formula/method for options pricing. In a typical options pricing model, all you're using is the current price (of the underlying, say NVDA), the strike price (of the option), the expiration date, the current interest rate, and IV (implied volatility: influenced by recent price movements, but independent of any technical analysis).

Be it Black-Scholes-Merton (European-style options), Bjerksund-Stensland (American-style options), the binomial method as in TFA, or any other open options pricing model: none of these use technical analysis.

Here's an example (for European-style options) where one can see the parameters:

https://www.mystockoptions.com/black-scholes.cfm

You can literally compute entire options chains with these parameters.
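
As a sketch of that, here's a minimal implementation of two of the approaches named above; the spot, strike, rate and IV below are made-up placeholder inputs, not actual NVDA quotes:

```python
import math
from statistics import NormalDist

def black_scholes_call(S, K, T, r, sigma):
    """Closed-form Black-Scholes-Merton price for a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = NormalDist().cdf
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def crr_binomial_call(S, K, T, r, sigma, steps=500):
    """Cox-Ross-Rubinstein binomial-tree price for a European call."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))     # up factor per step
    d = 1 / u                               # down factor
    p = (math.exp(r * dt) - d) / (u - d)    # risk-neutral up probability
    disc = math.exp(-r * dt)
    # terminal payoffs at expiry
    values = [max(S * u**i * d**(steps - i) - K, 0.0) for i in range(steps + 1)]
    # roll back through the tree, discounting expected values
    for step in range(steps - 1, -1, -1):
        values = [disc * (p * values[i + 1] + (1 - p) * values[i])
                  for i in range(step + 1)]
    return values[0]

# Hypothetical inputs: spot 180, strike 200, 6 months, 4% rate, 50% implied vol
print(black_scholes_call(180, 200, 0.5, 0.04, 0.50))
print(crr_binomial_call(180, 200, 0.5, 0.04, 0.50))
```

With enough steps the binomial tree converges to the Black-Scholes price for a European option; the same five inputs drive both, and nothing about past price patterns enters the calculation.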

Now, it's known for a fact that many professional trading firms have their own options pricing methods and will arb when they think they've found incorrectly priced options. I don't know whether some of them mix actual forms of TA into their options pricing models or not.

> My 30k ft view is that the stock will inevitably slide as AI datacenter spending goes down.

Whether you're right or not, I'd argue you're doing what's called fundamental analysis (but I may be wrong).

P.S.: I'm not debating the merits of TA and whether it's reading tea leaves or not. What I'm saying is that options pricing using the binomial method cannot be called "technical analysis", because TA is something else.

jwoods19 13 hours ago | parent | prev | next [-]

“In a gold rush, sell shovels”… Well, at some point in the gold rush everyone already has their shovels and pickaxes.

krupan 13 hours ago | parent | next [-]

Or people start to realize that the expected gold isn't really there and so stop buying shovels

12 hours ago | parent [-]
[deleted]
gopher_space 12 hours ago | parent | prev [-]

The version I heard growing up was "In a gold rush, sell eggs."

FergusArgyll 10 hours ago | parent [-]

Selling jeans is the one that actually worked

jpadkins 9 hours ago | parent | prev | next [-]

How much did you short the stock?

stego-tech 11 hours ago | parent | prev | next [-]

Add in the fact that companies seriously invested in AI (and in similar workloads typically reliant on GPUs) are also investing more in bespoke accelerators, and the math for nVidia looks particularly grim. Google’s TPUs set them apart from the competition, as does Apple’s NPU; it’s reasonable to assume firms like Anthropic or OpenAI are also investigating or investing in similar hardware accelerators. After all, it’s easier to lock in customers if your models cannot run on “standard” kit like GPUs and servers, even if that’s also incredibly wasteful.

The math looks bad regardless of which way the industry goes, too. A successful AI industry has a vested interest in bespoke hardware to build better models, faster. A stalled AI industry would want custom hardware to bring down costs and reduce external reliance on competitors. A failed AI industry needs no GPUs at all, and an inference-focused industry definitely wants custom hardware, not general-purpose GPUs.

So nVidia is capitalizing on a bubble, which you could argue is the right move under such market conditions. The problem is that they’re also alienating their core customer base (smaller datacenters, HPC, gaming market) in the present, which will impact future growth.

Their GPUs are scarce and overpriced relative to performance, which itself has remained a near-direct function of increased power input rather than efficiency or meaningful improvements. Their software solutions - DLSS frame-generation, ray reconstruction, etc - are locked to their cards, but competitors can and have made equivalent-performing solutions of their own with varying degrees of success. This means it’s no longer necessary to have an nVidia GPU to, say, crunch scientific workloads or render UHD game experiences, which in turn means we can utilize cheaper hardware for similar results.

Rubbing salt in the wound, they’re making cards even more expensive by unbundling memory and clamping down on AIB designs. Their competition - Intel and AMD primarily - are happily enjoying the scarcity of nVidia cards and reaping the fiscal rewards, however meager they are compared to AI at present. AMD in particular is sitting pretty, powering four of the five present-gen consoles, the Steam Deck (and copycats), and the Steam Machine, not to mention outfits like Framework; if you need a smol but capable boxen on the (relative) cheap, what used to be nVidia + ARM is now just AMD (and soon, Intel, if they can stick the landing with their new iGPUs).

The business fundamentals paint a picture of cannibalizing one’s evergreen customers in favor of repeated fads (crypto and AI), and years of doing so have left those customer markets devastated and bitter at nVidia’s antics. Short of a new series of GPUs with immense performance gains at lower price and power points, and with availability to meet demand, my personal read is that this is merely Jensen Huang’s explosive send-off before handing the bag over to some new sap (and the shareholders) once the party inevitably ends, one way or another.

bArray 11 hours ago | parent | prev | next [-]

> My 30k ft view is that the stock will inevitably slide as AI datacenter spending goes down. Right now Nvidia is flying high because datacenters are breaking ground everywhere but eventually that will come to an end as the supply of compute goes up.

Exactly, it is currently priced as though infinite GPUs will be required indefinitely. Eventually most of the data centres and the gamers will have their GPUs, and demand will certainly decrease.

Before that, though, the data centres will likely fail to be built in full. Investors will eventually figure out that LLMs are still not profitable, no matter how many data centres you build. People are only interested in the derived products at a price lower than it costs to run them. The math ain't mathin'.

The longer it takes to get them all built, the more exposed they all are. Even if it turns out to be profitable, taking three years to build a data centre rather than one year is significant, as profit for these high-tech components falls off over time. And how many AI data centres do we really need?

I would go further and say that these long and complex supply chains are quite brittle. In 2019, a 13-minute power cut caused the loss of 10 weeks of memory stock [1]. Normally, the shops and warehouses act as a capacitor and can absorb small supply chain ripples. But now that these components are being piped straight to data centres, they are far more sensitive to blips. What about a small issue in the silicon that means you damage large amounts of your stock trying to run it at full power, through something like electromigration [2]? Or a random war...?

> The counterargument to this is that the "economic lifespan" of an Nvidia GPU is 1-3 years depending on where it's used so there's a case to be made that Nvidia will always have customers coming back for the latest and greatest chips. The problem I have with this argument is that it's simply unsustainable to be spending that much every 2-3 years and we're already seeing this as Google and others are extending their depreciation of GPU's to something like 5-7 years.

Yep. Nothing about this adds up. Existing data centres with proper infrastructure are being forced to extend the use of previously uneconomical hardware, because the new data centres currently building out infrastructure have run prices up so high. If Google really thought this new hardware was going to be so profitable, they would have bought it all up.

[1] https://blocksandfiles.com/2019/06/28/power-cut-flash-chip-p...

[2] https://www.pcworld.com/article/2415697/intels-crashing-13th...

cheschire 12 hours ago | parent | prev | next [-]

Well, not to be too egregiously reductive… but when the M2 money supply spiked in the 2020 to 2022 timespan, a lot of new money entered the middle class. That money was then funneled back into the hands of the rich through “inflation”. That left the rich with a lot of spare capital to invest in finding the next boom. Then AI came along.

Once the money dries up, a new bubble will be invented to capture middle-class income, like NFTs and crypto before it, and commission-free stocks, etc.

It’s not all pump-and-dump. Again, this is a pretty reductive take on market forces. I’m just saying I don’t think it’s quite as unsustainable as you might think.

clownpenis_fart 13 hours ago | parent | prev [-]

[dead]