EagnaIonat 4 hours ago

I read the actual article.

He is pointing out that the current cost of building the data centres means it will never be possible to turn a profit large enough to cover those costs: $800 billion just to cover the interest.

OpenAI is already haemorrhaging money, and the idea of space data centres has already been debunked. There is even a recent paper arguing that LLMs will never become AGI.

The article also closes with other experts reaching the same conclusions.

[edit] Fixed $80 to $800

Glemkloksdjf 23 minutes ago | parent | next [-]

Sorry to say, but the fact that you are arguing about whether LLMs will ever become AGI shows you are not up to date.

People don't assume LLMs will become AGI; people assume that world models will lead us to AGI.

I personally never assumed LLMs would become AGI. I always assumed that LLMs broke the dam for investment and research into massive-scale ML compute, and LLMs are very good at showing where the future is going, because they are already so good that people can now imagine a future where AGI exists.

And that was very clear as soon as GPT-3 came out.

The next big thing will probably be either a lot more RL or self-propelling AI architecture discovery. Both need massive compute to work well, but could then deliver even faster progress once humans are out of the loop.

noobermin 3 hours ago | parent | prev | next [-]

$800B, to be clear, is the claim, not $80B.

EagnaIonat 2 hours ago | parent [-]

Clearly I need to read slower. Thanks. :)

ta12653421 3 hours ago | parent | prev [-]

>> There is even a recent paper that points out that LLMs will never become AGI.

can you share a link?

EagnaIonat 3 hours ago | parent | next [-]

Took me a while to find it again, as there are a lot of such papers in this area.

https://www.arxiv.org/pdf/2511.18517

will4274 3 hours ago | parent | next [-]

Is this AI paper written by a reputable subject-matter expert? It seems to be written by a physicist, and it appears to be the only academic work by this author in English.

EagnaIonat 2 hours ago | parent | next [-]

So you are dismissing it because of that? Read the paper first and attack the arguments, not the author. It even has 10 pages of citations.

I have read it. It says nothing new on the subject, but it was the most recent paper I saw on HN, and the person was asking for a link.

The crux is that an LLM is not, and can never be, intelligent in the sense of an AGI. It is easier to think of an LLM as a way to store and retrieve knowledge.

quietbritishjim an hour ago | parent [-]

How many articles on this topic do we imagine there are? Thousands? Hundreds of thousands? It is hopeless to read every one, by any author no matter how unrelated to the domain, and judge them individually on their merits. Being a subject-matter expert is not a perfect measure of paper quality, but it's the only feasible way to make a first pass at filtering.

Even if I did read it, I would have no hope of telling whether it has made a fundamental mistake, because I don't have the subject-matter expertise either.

(I imagine it has made a fundamental mistake anyway: for LLMs to be useful progress toward AGI, they don't have to be a feasible way to create AGI by themselves. Innovation very often involves stepping through technologies that end up being only a component of, or inspiration for, the final solution. This was always going to be a problem with trying to prove a negative.)

EagnaIonat 18 minutes ago | parent [-]

> It is hopeless to read every one by any author,

It was a paper posted on HN a few days ago and someone asked for the evidence of my statement. I supplied it.

Now if they actually read it and disagreed with what it was saying, I'd be more than happy to continue the conversation.

Dismissing it just because you don't understand it is a terrible thing to do to yourself. It's basically sabotaging your own intelligence.

Sometimes papers are garbage, but you can only make that statement after you have read/understood it.

Use an LLM if you want.

rvnx 2 hours ago | parent | prev [-]

Absolutely. If it is not written by someone with real-world experience and deep knowledge, it has no more value than an HN comment.

EagnaIonat 2 hours ago | parent [-]

It's a good read with good citations.

The core piece as quoted from the abstract: "AGI predictions fail not from insufficient compute, but from fundamental misunderstanding of what intelligence demands structurally."

It then goes into detail about what that is and why LLMs don't fit. There are plenty of other similar papers out there.

rvnx an hour ago | parent [-]

That was more of a general principle than a point about the specific paper :)

mkl 3 hours ago | parent | prev [-]

A single author, in a physics department. Seems unlikely to be groundbreaking or authoritative.

EagnaIonat 3 hours ago | parent | next [-]

Welcome to the world of papers. Have a read and get back to us. Dismissing out of hand is rarely constructive.

trueno an hour ago | parent [-]

Took me a while, but I read it. I thought it was actually a pretty good and well-researched paper that does a good job rationalizing its thesis. Thanks for sharing.

rf15 42 minutes ago | parent | prev [-]

Ad hominem right out of the gate? Really?
