JCM9 3 days ago

OpenAI is generating $13B a year in revenue. Let's be generous and say $20B. They've signed commitments to spend something like $1.4 trillion on compute, an asset that has so far shown a hyper-depreciation cycle.

Someone has to come up with $1.4 trillion in actual cash, fast, or this whole thing comes crashing down. Why? At the end of all this circular financing and deals are folks that actually want real cash (eg electricity utilities that aren’t going to accept OpenAI shares for payment).

If the above doesn't freak you out a bit at how bonkers this whole thing has become, then you need a reality check. "Selling ads" on ChatGPT ain't gonna close that hole.

Aurornis 3 days ago | parent | next [-]

> Someone has to come up with $1.4 trillion in actual cash, fast, or this whole thing comes crashing down.

These deals aren't for 100% payment up front. The deals also include stock, not just cash. So, no, they do not need to come up with $1.4 trillion in cash quickly.

This AWS deal is spread over 7 years. That's $5.4 billion per year, though I assume it's ramping up over time.
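The annualization is easy to check. (A quick sketch; the $38B total is my assumption from the widely reported headline figure, which is consistent with the $5.4B/yr over 7 years stated here.)

```python
# Annualizing the reported OpenAI-AWS deal, straight-line.
# The $38B headline total is an assumed figure; the comment only gives $5.4B/yr.
headline_total = 38e9  # dollars (assumption)
years = 7
per_year = headline_total / years
print(f"${per_year / 1e9:.1f}B per year")  # -> $5.4B per year
```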

> At the end of all this circular financing and deals are folks that actually want real cash (eg electricity utilities that aren’t going to accept OpenAI shares for payment).

Amazon's cash on hand is on the order of $100 billion. They also have constant revenue coming in. They will not have any problem accepting OpenAI shares and then paying electricity bills with cash.

These deals are also being done in the open with publicly traded companies. Investors can see the balance sheets and react accordingly in the stock price.

mandevil 3 days ago | parent | next [-]

Interestingly, it looks like there has been a move over the past 18 months or so away from financing these data centers with tech-company cash on hand and toward Special Purpose Vehicles. So there is now a lot more debt than equity involved in funding DCs, a sudden change from what was largely a funded-by-equity process at the beginning of 2024.

The one I found best documented (1) is Meta's SPV to fund their Hyperion DC in Louisiana, a deal that is 80% financed by the private credit firm Blue Owl. There is a lot of financial trickery involved in getting the ratings agencies to count the SPV's debt as belonging to a different entity, so it doesn't count against Meta's books, while the market treats it as basically something that Meta will back. But xAI's Memphis DC is also an SPV, and Microsoft is doing the same. I'm not sure about AMZN, but the fact that we're seeing this from their competitors suggests they will go this way as well.

1: By the invaluable Matt Levine, here: https://www.bloomberg.com/opinion/newsletters/2025-10-29/put... but the other major companies have their own SPVs

brendoelfrendo 3 days ago | parent [-]

I saw this, and honestly, it's kind of silly. We all know what's going on, so why do the credit ratings agencies play dumb about this kind of financial engineering? Why don't they just say "actually no, we all know that's debt and it's owned by Meta so we will consider it when rating their credit"?

lesuorac 3 days ago | parent | next [-]

IIUC, they ignore it because they're supposed to.

If the market collapses, I think Meta can technically just walk away: they lose access to those data centers (which they no longer want anyway), and the SPV is stuck holding $X of assets against more than $X of liabilities, with the holders of the credit on the hook, not Meta.

And investors are fine being on the hook because they get a higher return from the SPV bonds than from Meta bonds (risk-adjusted, it's probably the same return).

JumpCrisscross 3 days ago | parent | prev | next [-]

> We all know what's going on

Do we?

The payments Meta et al are making to the SPV are payments for data-center services. The data centers are then buying the assets and issuing the debt. Now, Meta is obligated to make those payments to the SPV. Which looks like debt. But they are only obligated to do so if the services are being provided.

Blue Owl, meanwhile, owns 80% of the datacentre venture. If the price of chips crashes, that's Blue Owl's problem. Not Meta's. If Meta terminates their contract, same deal. (If Beijing nukes Taiwan and the chips quintuple in value, that's Blue Owl's gain. Mostly. Not Meta's.)

> Why don't they just say "actually no, we all know that's debt and it's owned by Meta so we will consider it when rating their credit."?

If Meta stopped paying the SPV, the SPV would have the recourse of a vendor. If Meta stopped making payments on its bonds, that would trigger cross defaults, et cetera. Simply put, Meta has more optionality with this structure than it would if it issued its own debt.

The red flag to keep an eye out for is cross guarantees, i.e. Meta, directly or indirectly, guaranteeing the SPV's debt.

cmiles8 3 days ago | parent | prev | next [-]

Because, to quote from The Big Short, “if we don’t give them the rating they want they’ll just walk down the street and go to [the other ratings agency].”

Does that make any sense? No.

nickff 3 days ago | parent [-]

In the case of “The Big Short” it did make sense, because the ratings were required by the government, not the purchasers (who often/usually disregarded the ratings for the purpose of valuation), and the sellers paid for the ratings.

rchaud 3 days ago | parent | prev | next [-]

Because in the credit ratings game, the customer is paying to get their bonds rated. Therefore the customer is always right.

JumpCrisscross 3 days ago | parent [-]

> the customer is paying to get their bonds rated. Therefore the customer is always right

Then Meta would do this in a wholly controlled off-balance-sheet vehicle à la Enron. The fact that they're involving sidecars signals some respect for their rating.

eiifndjj18484 3 days ago | parent | prev [-]

The point is to get pension money into the market, whilst ringfencing the risk in an SPV so that when/if it pops, none of the people who actually know what's happening will be affected. And they'll potentially be shorting it on the way down as well.

slg 3 days ago | parent | prev [-]

>These deals are also being done in the open with publicly traded companies. Investors can see the balance sheets and react accordingly in the stock price.

I'm no expert on the specifics of the circular financing we're seeing here, so the rest of what you wrote might be true. But I know enough about how Wall Street and the world in general work to know that closing with this as a defense shows an incredible naivete that makes me question everything else you have said.

epistasis 3 days ago | parent | next [-]

Indeed, a comment above linked to Matt Levine's newsletter on the off-books debt that is showing up instead as things like JVs, and here's another Bloomberg reporter, Carmen Arroyo, covering it from a more journalistic angle:

https://www.bloomberg.com/news/articles/2025-10-31/meta-xai-...

refulgentis 3 days ago | parent | prev | next [-]

No need for all that: the idea that OpenAI is committed to $1.4 trillion in payments is an Ed Zitron-sourced number, where he calculates $400B based on a figure he made up for how much a gigawatt costs, and gets to the trillion figure by claiming every deal is for 2026 and will be repeated over the next N years.

peaseagee 3 days ago | parent | prev [-]

Exactly. Enron was a publicly traded company doing weird circular financing stuff. It was all in the open for anyone who cared to look. Just no one did until the music stopped...

refulgentis 3 days ago | parent [-]

We're going a bit too far if we assert this. The weird circular Enron stuff wasn't all in the open: it was done by wholly owned subsidiaries, and the downfall was massive trading losses that could no longer be hidden by shuttling money to and from those subsidiaries at the right time. A hole in a balance sheet is quite different from a purchase done by financing, so applying "circular financing" to both reduces it to "things we worry about that involve payments between two entities."

JumpCrisscross 3 days ago | parent | prev | next [-]

“OpenAI CEO Sam Altman sounded exasperated when Altimeter Capital founder—and OpenAI shareholder—Brad Gerstner asked him the question that Gerstner said was ‘hanging over the market’: how a company generating $13 billion in revenue this year would pay for the $1.4 trillion in computing capacity that Altman has said the company is on the hook for.

‘Brad, if you want to sell shares, I’ll find you a buyer…I just—enough,’ Altman said on Gerstner’s podcast.”

https://www.theinformation.com/articles/ilya-saw-mira-murati...

dgfitz 3 days ago | parent [-]

> I’ll find you a buyer…I just—enough,’ Altman said on Gerstner’s podcast.”

Hopefully nobody reading this has experienced it: these are the words of a true sociopath/addict.

"I'm mad you questioned me" is fucking classic.

I told dang I was out and I am after this. Sorry dang.

tim333 3 days ago | parent | next [-]

I think it's a bit out of order of Altman. $1.4tn is ~16 times the US foreign aid budget. These are significant, solve-world-hunger-type numbers that should be analysed seriously, not handled on the basis of trust me bro.

JumpCrisscross 3 days ago | parent [-]

> not done on the basis of trust me bro

It's not. It's done on the basis of don't question me bro.

Imustaskforhelp 3 days ago | parent | prev [-]

> I told dang I was out and I am after this. Sorry dang.

Sorry, but is there some lore behind this? The last sentence has me wondering what it means. If you could share the context, I would really appreciate it.

But overall, I agree that this is a very weird thing for Sam Altman to say.

gpt800 3 days ago | parent [-]

dang is the Hacker News moderator

https://news.ycombinator.com/user?id=dang

anon7000 3 days ago | parent [-]

Yeah, but why would it matter to us if they tell dang they’re out? That’s the missing context.

Imustaskforhelp 2 days ago | parent | next [-]

Yes, and that's what I was asking about when I wrote the comment. I know who dang is, but I want to know the missing context.

kennyadam 2 days ago | parent | prev [-]

Agreed. Very confusing.

parsimo2010 3 days ago | parent | prev | next [-]

You're probably right about how disconnected the spending vs. revenue is, but I've also seen the USA's entire public debt go so high that it requires nearly $1 trillion per year just to service the interest payments [1]. That sounds ludicrous to me too, and yet somehow the economy is booming.

There are two important points by Keynes that are relevant:

1. The market can remain irrational longer than you can remain solvent. Even if you're betting on a crash, it will probably happen after you get margin called and lose all your money. You can be absolutely right about where this is headed, but keep your personal investments away from this.

2. The value of a company isn't determined by any sound fundamentals. It's determined by how much you can get a sucker to pay (aka Keynes' castles in the air theory). Until we run out of suckers OpenAI will be able to keep getting cash infusions to pay whoever actually demands cash instead of stock. And as long as there are suckers that are CEOs of big tech companies they are going to be getting really big cash infusions.

[1] https://www.pgpf.org/programs-and-projects/fiscal-policy/mon...

raincole 3 days ago | parent | next [-]

The logical conclusion is that we don't have an AI bubble; we have a USD flood, or more generally a fiat flood. You see stupid expected valuations of OpenAI et al. not because investors are stupid, but because there is a stupid amount of USD and it has to go somewhere. You either get a real estate bubble or an AI bubble or whatever bubble.

AbstractH24 2 days ago | parent | next [-]

I'm at a loss for what asset class can even protect against such an implosion.

Because as an American, they are all effectively denominated in USD. Even Bitcoin, which everyone claims to be the savior.

And while I don't know as much about other countries, something tells me most Western countries and their currencies are equally as exposed.

tarsinge 2 days ago | parent | prev [-]

And when everything is a bubble, then money simply has less value overall. Remember that asset inflation is not accounted for in CPI; the money surplus/devaluation can take a long time to trickle down into the consumer economy.

RA_Fisher 3 days ago | parent | prev [-]

Or, maybe you don’t understand why it’s rational?

jonas21 3 days ago | parent | prev | next [-]

The $1.4T commitment is spread over multiple years. Let's assume 4; then that's $350B/year. Coincidentally, Google had $350B in revenue in 2024 (and is projected to reach ~$400B in 2025).

It's certainly possible to imagine OpenAI eventually generating far more revenue than Google, even without anything close to AGI. For example, if they were to improve productivity of 10% of the economy by 10% and capture a third of that value for themselves, that would be more than enough. Alternatively, displacing Google as the go-to place for search and selling ads against that would likely generate at least Google levels of revenue. Or some combination of both.

Is this guaranteed to happen? Of course not. But it's not in "bonkers" territory either.
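The value-capture scenario above pencils out to roughly that $350B/year figure. (A sketch of the arithmetic; the world GDP number is my assumption for illustration, not something the comment states.)

```python
# Hypothetical value-capture arithmetic from the comment above.
# World GDP of ~$110T is an assumed figure for illustration.
world_gdp = 110e12        # dollars (assumption)
share_of_economy = 0.10   # 10% of the economy touched
productivity_gain = 0.10  # productivity improved by 10%
capture = 1 / 3           # OpenAI captures a third of the created value
revenue = world_gdp * share_of_economy * productivity_gain * capture
print(f"${revenue / 1e9:.0f}B per year")  # -> $367B per year
```

That lands in the same ballpark as the assumed $350B/year commitment, which is the comment's point.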

Aurornis 3 days ago | parent | next [-]

> The $1.4T commitment is spread over multiple years. Let's assume 4

The Amazon deal is actually spread over 7 years. Other deals have different terms, but also spread over multiple years.

Deals like these have cancellation terms. OpenAI could presumably pay a fee and cancel in the future if their projections are too high and they don't need some of the compute from these deals.

The deals also include OpenAI shares. The deals are being made with companies that have sufficient revenue or even cash on hand to buy the compute and electricity.

The claim above that someone needs to come up with $1.4 trillion right now or everything will collapse isn't grounded in any real understanding of these deals. It's just adding up numbers and comparing them to a single annual revenue snapshot.

cmiles8 3 days ago | parent [-]

I don't think the OP is saying $1.4 trillion cash is needed "right now." The point being made is simply that, with all the circular deals and financing, for this to make sense OpenAI does need to generate $1.4 trillion in cash that can eventually work its way through the economy to pay for all of this. Hype and inflated valuations can be built on numbers on paper, but real businesses are built on cash flow. The OP is simply calling out the lack of cash flow.

Even under the most bullish cases for AI, the real $ required here looks iffy at best.

I think we all know that a big part of the angle here is to keep the hype going until there's a liquidity event; folks will cash out, and after that they won't care what happens.

thrance 3 days ago | parent | prev | next [-]

So their only reasonable plan is to capture a significant portion of the global economy through a tech that we have currently no idea how to build? Seems a little dodgy, to say the least. I would personally consider it well in "bonkers" territory.

Libidinalecon 2 days ago | parent [-]

I think we had been primed for so long by science fiction that the talking computer was always going to put us in this state of mass stupidity.

The fun part is to go back now and listen to Blake Lemoine interviews from summer 2022. That for me was the start of all this.

JumpCrisscross 3 days ago | parent | prev | next [-]

> if they were to improve productivity of 10% of the economy by 10% and capture a third of that value for themselves, that would be more than enough

This is “if we get 1% of the market” logic.

vanviegen 3 days ago | parent [-]

That type of logic is not inherently flawed, is it?

Of course, you must also make a convincing case for getting to that 1%.

JumpCrisscross 3 days ago | parent [-]

> That type of logic is not inherently flawed, is it?

Inherently, no. In practice, it's riddled with biases deep enough [1] to make it an informal fallacy.

"The competition in a large market, such as CRM software, is very tough," and "there are power laws which mean that you have to rank surprisingly high to get 1% of a market" [2]. Strategically, it ignores the necessity of establishing a beachhead in a small market, where "a small software company" has "a much better chance of getting a decent sized chunk."

[1] https://www.nature.com/articles/s41599-024-03403-9

[2] https://news.ycombinator.com/item?id=45804756

2 days ago | parent [-]
[deleted]
wavemode 3 days ago | parent | prev | next [-]

Google themselves are an AI company (in case anyone forgot) - if LLM-powered search is going to become a popular product, then that's great news for Google. They already have an LLM capable of searching the Web, and they've already integrated it heavily into their search engine, browser, mobile phones, and Office suite.

OpenAI has nothing resembling this ecosystem, and will never be nearly as valuable a place to buy ads. Replacing Google is probably the least realistic business plan for OpenAI - if that's what they're betting on, they're cooked.

mnky9800n 3 days ago | parent | prev | next [-]

The problem is that a 10% productivity increase could be captured by workers working 10% less, but because everything everywhere is essentially leveraged, they now have to fill that 10% gap to pay off the leverage. That's probably wrong, so I'm sure someone will explain to me why.

ivape 3 days ago | parent | prev [-]

Google is under existential threat. In that case, OpenAI has a very legitimate trillion dollar case for carving out a piece of Google.

Search engines were never a user friendly app to begin with. You had to know how to search well to get comprehensive answers, and the average person is not that scrupulous. Google’s product is inferior, believe it or not. There will be nothing normal about seeing a list of search results pretty soon, so Google literally has a legacy app out in the wild as far as facts are concerned.

So imagine that: Google would have to remove Search as they know it (remove their core business) and stand up an app that looks the same as all the new apps.

People might like one AI persona more than others, which means people will seek out all types of new apps. LLMs are the worst thing that could have ever happened to Google, quite frankly.

tim333 3 days ago | parent | next [-]

Google pretty much invented LLMs. The Attention Is All You Need paper which kicked it off was done by Google scientists, and the top model on the LMArena text leaderboard is from Google. They also made $28 bn profit last quarter, as against large losses for OpenAI. I think they'll survive.

I'd be more worried about OpenAI surviving. Aside from the iffy finances, much of their top talent seems to leave after falling out with Altman.

rubiquity 3 days ago | parent | prev | next [-]

I find it more likely that the entire "second" level of software companies are in OpenAI's cross hairs more so than Google. Salesforce, ServiceNow, Intuit, DocuSign, Adobe, Workday, Atlassian, and countless others are easier to pick off than Google.

hattmall 3 days ago | parent | next [-]

Those don't seem like reasonable targets at all to me. OpenAI's product is information and their power is engagement. It's more like a cross between Facebook that thrives on engagement and Google that delivers information.

Google's biggest advancement in the last ~15 years is to produce worse search results so that you spend more time engaging with Google, and doing more searches, so that Google can show more ads. Facebook is similar in that they feed you tons of rage-bait, engagement spam, and things you don't like, infused with nuggets of what you actually want to see about your friends / interests. Just like a slot machine, the point is that you don't always get what you want, so there's a compulsion to use it because MAYBE you will get lucky.

OpenAI's potential for mooning hinges on creating a fusion of information and engagement where they can sell some sort of advertisement or influence. The problem of course is that the information and engagement is pretty much coming in the most expensive form possible.

The idea that the LLM is going to erode actual products people find useful enough to pay for is unlikely to come true. In particular, people are specifically paying for software because of its deterministic behavior. The LLM is by its nature extremely nondeterministic. That's fully in the realm of social media, search engines, etc. If you want a repeatable and predictable result, the LLM isn't really the go-to product.

ivape 3 days ago | parent | prev [-]

Not every kid born in the last five years will know Google as a verb as we do. They’ll be adults in 15 years, which is a paltry investment timeline for the type of Black Swan event we’re talking about, which AI is.

I don’t disagree with you entirely, but I’d argue the second level apps are harder to chase because they get so specialized.

Death of Google (as everyone knows Google today) is a tricky one. It seems impossible to believe at this exact moment. It can sit next to IBM in the long run, no shame at all, amazing run.

dvt 3 days ago | parent | prev [-]

Very true. I rarely find myself "Googling" anymore. I'd rather just ask ChatGPT. Even if the enshittification (ads, etc.) happens down the line, at least we'll have an absolutely awesome product (like Google was to Yahoo) for 5-10 years.

OpenAI is worth at least half as much as Google. I foresee Google becoming like IBM, and these new LLM companies being the new generation of tech companies.

lumost 3 days ago | parent | prev | next [-]

If OpenAI continues on their current revenue growth trajectory, they should be larger than AWS by 2027. Burning 2x revenue to grow that fast is not really a concern beyond your continued ability to attract financing. Given the trajectory of inference costs, it's unlikely that they would fail to reach profitability.

The big question would be how much of this revenue is unjustifiably circular, and how much of it is extractable, but those are questions for when the growth slows. I'm certain every supplier has ways to back out of these commitments if the finances look shaky.

hiq 3 days ago | parent | next [-]

> Given the trajectory of inference cost, it unlikely that they would fail to reach profitability.

Is there evidence that their revenues are growing faster than their costs?

versteegen 2 days ago | parent | next [-]

The place to go for those numbers is https://epoch.ai/data/ai-companies

Very little data about expenses, but it looks like they may be growing a little slower (3-4x a year) than revenue. Which makes sense because inference and training get more efficient over time.

lumost 3 days ago | parent | prev [-]

We don't have evidence one way or the other. But from the public statements, the ratio of their losses to their revenue seems roughly constant over time. It's possible that that is simply a psychological barrier for investors. Meaning they grow their losses, in dollar terms, at roughly 2x the rate their revenue grows.

vel0city 3 days ago | parent [-]

> Given the trajectory of inference cost, it unlikely that they would fail to reach profitability.

> We don't have evidence one way or the other

I don't see how both of these things can be true. How can we know something to be likely or unlikely if we have no evidence of how things are?

If we don't have any evidence they're moving towards profitability, how is it likely they will become profitable?

lumost a day ago | parent [-]

Growing businesses tend to consume capital. How much capital it is appropriate to burn is subjective, but there are good baselines from other industries and internal business justifications. Since tech companies burn capital through people's time, it's hard to directly figure out what is true CapEx vs. unsustainable burn.

You wouldn't demand that a restaurant jack up prices or shut down in its first month of business after spending ~$1MM on a remodel to earn ~$20k in that month. You would expect that the restaurant isn't going to remodel again for 5 years, and that the amortized cost should be ~$16k/mo (or less).
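The amortization in the restaurant example, restated as a quick sketch (same hypothetical numbers as above, straight-line over the 5-year horizon):

```python
# Straight-line amortization of the hypothetical $1M remodel over 5 years.
remodel_cost = 1_000_000  # dollars (hypothetical, from the example above)
years = 5
monthly = remodel_cost / (years * 12)
print(f"~${monthly:,.0f}/mo")  # -> ~$16,667/mo
```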

Libidinalecon 2 days ago | parent | prev | next [-]

I don't know, this coming month will be the first time that my subscription is going to lapse.

I have got incredible value from ChatGPT up to this point but I have been using it less and less.

What I have mostly extracted from it is a giant list of books I need to read. A summary of the ideas of a book I haven't read is obviously not the same as reading the whole book.

Before all this there were so many areas I was curious about that ChatGPT gave me a nice surface level summary of. I now know much better what I want to focus on but I don't need more surface level summaries.

mvdtnz 3 days ago | parent | prev [-]

https://xkcd.com/605/

jgbuddy 3 days ago | parent | prev | next [-]

The obvious answer is that they are going to IPO

officeplant 3 days ago | parent [-]

I hope so just so I can watch the funny line graph of people burning money.

xarope 3 days ago | parent [-]

it's funny until you realise your pension fund invested heavily in AI and is now down 30%

officeplant 2 days ago | parent [-]

At this point I'll be surprised if the financial company in charge of my 401k exists when I retire. I know there are laws to protect things, but my faith in US laws is dwindling fast.

mv4 3 days ago | parent | prev | next [-]

This circular game is wholly dependent on OpenAI's ability to access public funds via IPO.

browningstreet 3 days ago | parent | prev | next [-]

We had impossible financial projections written up just like this for Uber and WeWork. They’re still here. The MBAs will probably win this too.

hattmall 3 days ago | parent [-]

What is WeWork's market cap today?

mise_en_place 3 days ago | parent | prev | next [-]

It doesn't freak me out and it's actually completely rational. If both OpenAI and AMZN expect real rates to keep rising while inflation spirals out of control, this deal makes a lot of sense for both of them. They're just duration hedging.

JumpCrisscross 3 days ago | parent [-]

> If both OpenAI and AMZN expect real rates to keep rising while inflation spirals out of control, this deal makes a lot of sense for both of them. They're just duration hedging

It can’t be the same hedge on both sides of the trade.

mise_en_place 3 days ago | parent [-]

Correct, oAI is short rates vol.

JumpCrisscross 3 days ago | parent [-]

> Correct, oAI is short rates vol

Why vol? They're just short rates, which is a silly way to say leveraged. If rates become volatile but halve, OpenAI does fine. If rates stabilise at 10%, OpenAI fails. There is no "duration hedging" going on, which for OpenAI would involve shorting duration, i.e. bets that profit when rates go up.

Bleehmi 3 days ago | parent | prev | next [-]

Why would it freak me out?

I have not invested in OpenAI.

But the truth is, right now the potential revenue is not achievable without a commensurate investment in energy generation.

Interesting rat race which will lead to something. Let's see what it will be

tartoran 2 days ago | parent | prev | next [-]

They're going for too big to fail, because failing would wipe out a lot of profits and that's a no-no.

rdsubhas 3 days ago | parent | prev | next [-]

The portion going to the utilities involved is only a fraction of the $1.4T.

tim333 3 days ago | parent | prev | next [-]

Headlines say:

>OpenAI thought to be preparing for $1tn stock market float. ChatGPT developer is considering filing for an IPO by the second half of 2026...

drake99 3 days ago | parent | prev | next [-]

Sam can pay the cloud bill by selling OpenAI shares, but that is very expensive and very limiting.

xnx 3 days ago | parent | prev | next [-]

Can the "bubble" pop/deflate in a way that just takes out OpenAI? I don't see Google overextended at all.

f4uCL9dNSnQm 3 days ago | parent | next [-]

OpenAI might actually survive, even if investors lose a significant part of their investment. It's those companies that took out loans to invest in "AI", or took overpriced shares as payment, that are getting wiped out.

Imustaskforhelp 3 days ago | parent | prev [-]

The bubble will burst, and I think it might take the S&P 500 down with it, simply because of how damn concentrated it is.

The effects would be devastating, to say the least.

If the S&P 500 grew thanks to this AI bubble, it sure as hell will shrink due to the popping of this bubble too.

There is no free lunch. More precisely, I am worried about the retirement schemes people put their money into.

Personally, I was saying a long time ago that AI feels like a bubble, that the S&P 500 might have some issues, and that one should diversify into international stocks or gold. I was met with criticism because "the S&P 500 is growing the fastest, so I am wasting money investing in gold." Yeah, that's because bubbles grow... and they also shrink... and they do both of these things fast.

jstummbillig 3 days ago | parent | prev | next [-]

Let's actually be generous and assume that all parties involved did the math and some due diligence and are not just idiots. If we try that approach, what could that plausibly tell us about a situation where OpenAI has struck deals with not one, but basically all the major chip/infra providers?

dontlikeyoueith 3 days ago | parent | next [-]

> Let's actually be generous and assume that all parties involved did the math and some due diligence and are not just idiots

Economic history strongly suggests this would be a bad assumption.

jstummbillig 3 days ago | parent [-]

How do you mean? Western economic history is, on average, one of success. So on average, that's a pretty good assumption.

ben_w 2 days ago | parent [-]

Western economic history is 75% of businesses failing in the first 15 years, and the market still growing because the remaining 25% has outsized rewards.

More pertinently, we have a long history of people buying into bubbles only for them to crash hard, no matter how often people tell them "past performance is not a guarantee of future growth" or whatever the legally mandated phrase is for the supply of investment opportunities to the public where you live.

Sometimes the bubbles do useful things before they burst, like the railways. Sometimes the response to the burst creates a bunch of social safety nets, sometimes it leads to wars, sometimes both (e.g. Great Depression).

cmiles8 3 days ago | parent | prev [-]

The history of bubbles strongly suggests this is precisely evidence of a bad decision, not a good one. For a bubble to exist and be sustained everyone needs to get on board with things that wouldn’t normally make any sense.

jstummbillig 3 days ago | parent [-]

See, here the trick is that you assume a bubble and reason from there.

But what if, maybe, it ain't so? Of course, lots of AI things are going to fail, and nobody is exactly sure of the future. But what if, after in-depth inspection, the overall thing is actually looking pretty good and OpenAI looks like a winner?

Libidinalecon a day ago | parent | next [-]

In other words, "It's different this time!"

ben_w 2 days ago | parent | prev [-]

A trillion-dollar valuation for a company losing money does naturally lead to the belief "this is a bubble"; that's not really what most would call an "assumption", since evidence led to the belief.

It may be incorrect, but it's not writing down the answer first and working backwards.

> But what if, maybe, it ain't so?

https://www.youtube.com/watch?v=9z70BKwfSUA

Comedic take from last time, but the point at the conclusion remains. "Just this once, we think we might".

> Of course, lots of AI things are going to fail, and nobody is exactly sure of the future. But what if, after in depth inspection, the overall thing is actually looking pretty good and OpenAI like a winner?

Much as I like what LLMs and VLMs can do, much as I think they can provide value to the tune of trillions of USD, I have no confidence that any of this would return to the shareholders. The big players are all in a Red Queen's race, moving as fast as they can just to stay at the same (relative) ranking for the SOTA models; at the same time, once those SOTA models are made, there are ways to compress them effectively with minimal losses of performance, and if you combine that with the current rate of phone hardware improvements it's plausible we'll get {state of the art for 2025} models running on-device sometime between 2027 and 2030, with no money going to any model provider.

confirmmesenpai 3 days ago | parent | prev | next [-]

token usage is growing exponentially at all providers.

it will grow even more with the next generation of models.

shellfishgene 3 days ago | parent [-]

Are these tokens paid for by customers, or is it mostly the freebies thrown around by ChatGPT et al.?

3 days ago | parent | prev | next [-]
[deleted]
Razengan 3 days ago | parent | prev | next [-]

> electricity utilities that aren’t going to accept OpenAI shares for payment

What if AI invents fusion power?

(Thanks for the downvotes, I wanted to keep my karma at 69)

jdlshore 3 days ago | parent | next [-]

1. There’s no indication that AI is capable of doing so.

2. Outside of software, inventions have to be turned into physical things like power plants. That doesn’t happen overnight and is expensive.

3. The industry is already going through a power revolution in the form of battery + solar and it’s going to take a while for a new technology to climb the learning curve enough to be competitive.

4. What if AI gives us all a pony?

Razengan 3 days ago | parent [-]

What if ChatGPT invents the Matrix? Electricity problem solved.

JumpCrisscross 3 days ago | parent | prev [-]

> Thanks for the downvotes

“Please don't comment about the voting on comments. It never does any good, and it makes boring reading.”

https://news.ycombinator.com/newsguidelines.html

hluska 3 days ago | parent | prev [-]

Is there a reason you’re posting so often on this thread? Everyone gets your point.

JCM9 3 days ago | parent [-]

Fair enough. I guess I’m just like those guys at the investors conference in The Big Short and can’t believe what I’m seeing.

vessenes 3 days ago | parent | next [-]

You'll have your shot at shorting oAI soon apparently. I'm in a lot of these threads on the bull side, and I'll say - please be careful if you do, and only short what you can afford to lose. I'm sure the stock will be crazy volatile, but I don't see signs of anything unsustainable in oAI's ops right now, with the sole exception of increasing training spend using investor money. We're not in a good position outside the company to know if that will pay off. The parts we do know about, inference, users, growth, revenue growth and net income, are all generationally significant, and make shorting really risky.

lesuorac 3 days ago | parent | prev | next [-]

I think the main issue with your theory is that it's $38B in today's dollars. In the 1970s we saw a lot less independence between the Fed and the White House, and as a consequence, severe inflation. Trillions of dollars of liabilities is not going to sound so bad after 4 years of double-digit inflation ...
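
To make the inflation point concrete, here's a quick back-of-envelope calculation. The 10% rate and 4-year horizon are purely illustrative assumptions (matching the "double-digit inflation" hypothetical above), not a forecast:

```python
# Illustrative only: real value of a fixed nominal liability after
# sustained inflation. Rate and horizon are hypothetical assumptions.

def real_value(nominal, annual_inflation, years):
    """Deflate a fixed nominal amount by cumulative inflation."""
    return nominal / (1 + annual_inflation) ** years

liability = 38e9  # the $38B commitment, in today's dollars
print(real_value(liability, 0.10, 4))  # ~25.95e9, i.e. ~$26B of today's purchasing power
```

In other words, under that (hypothetical) scenario roughly a third of the real burden simply inflates away, which is the sense in which big nominal liabilities "won't sound so bad" later.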

Also, IIUC the guys in The Big Short would've lost everything if the government stepped in sooner since the banks controlled the price of the CDSs and could've maintained the incorrect price if they had a bunch of extra cash.

ceejayoz 3 days ago | parent [-]

> Also, IIUC the guys in The Big Short would've lost everything if the government stepped in sooner since the banks controlled the price of the CDSs and could've maintained the incorrect price if they had a bunch of extra cash.

Yeah. "Markets can remain irrational longer than you can remain solvent."

https://en.wikipedia.org/wiki/Michael_Burry had an investor panic and nearly lost everything. He was right, but he nearly got the timing wrong.

gretch 3 days ago | parent | prev | next [-]

Why does it matter if everyone else knows or cares?

If you were actually the guys from the big short and you have strong conviction, you should short the market (literally like the guys from big short) and get really rich.

Money is the language they understand, so hit them where it hurts.

Uehreka 3 days ago | parent [-]

People always talk about shorting like it’s an efficient and reliable way to make money being right when everyone else is wrong. But it isn’t.

When you go long, you can still make money by being “sort of right” or “obliquely right” or “somewhat wrong but lucky”, or by just collecting dividends if the market stays irrational long enough. If you short something you have to be exactly right (both about what will happen and precisely when) or your money will end up in the hands of the people you’re betting against. It’s not a symmetrical thing you can just switch back and forth on.

WA 3 days ago | parent [-]

Correct, and the reason is that borrowing stock for shorting isn't free. You gotta pay interest on that. Or if you go the options route, your options lose value to time decay.
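
A rough sketch of that carrying cost, with hypothetical numbers (3% annual borrow fee, fee accruing on the entry price for simplicity), showing how being right but three years early eats into the payoff:

```python
# Illustrative only: P&L per share of a short position, net of borrow fees.
# Rates and prices are hypothetical; real borrow fees accrue on market value
# and margin requirements add further costs.

def short_pnl(entry, exit_price, borrow_rate, years):
    """Profit per share for a short held `years` years, minus borrow cost."""
    gross = entry - exit_price                 # gain if the price falls
    borrow_cost = entry * borrow_rate * years  # fee paid to borrow the shares
    return gross - borrow_cost

# Same correct call, different timing:
print(short_pnl(100, 60, 0.03, 0.5))  # right within 6 months: 40 - 1.5 = 38.5
print(short_pnl(100, 60, 0.03, 3.0))  # right but 3 years late: 40 - 9.0 = 31.0
```

And that's the benign case where the price only falls; if it doubles first, margin calls can force you out before you're ever proven right.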

hluska 3 days ago | parent | prev | next [-]

You’re using a movie to justify this?

confirmmesenpai 3 days ago | parent | prev [-]

did the price of NVIDIA make sense to you 2 years ago, when a lot of people were screaming it was in an obvious bubble?

if not, and you thought it was a bubble, does that price of NVIDIA from 2 years ago (not from today) make sense to you now?