We Won't Be Missed: Work and Growth in the Era of AGI [pdf](conference.nber.org)
28 points by Anon84 9 hours ago | 47 comments
seaucre 7 hours ago | parent | next [-]

In the long run, absent intervention, virtually all income flows to the owners of compute.

We need more than UBI. AGI is the culmination of all human activity up to that point, and all humanity deserves ownership of it. It should not belong solely to those who put the cherry on top, with the rest of us at their mercy. They don't deserve to control humanity's destiny. AGI, at some point, has to be made into ... I don't know. Not nationalized - something more. A force of pure good for all humans, unaffiliated with any corporation or state.

jstummbillig 7 hours ago | parent | next [-]

Why would compute be any less of a commodity than electricity?

cogman10 7 hours ago | parent | next [-]

The same reason why owning a business is less of a commodity than electricity.

It's the ultimate monopoly. Anyone with more compute will ultimately be able to outperform any business you could invent, locking you out of competition.

The owners of compute will make a killing and can set whatever price they like. But if the owner is someone like, say, Amazon, what actually stops them from using the massive compute army they already own to enter the most lucrative compute-driven businesses themselves, slowly dominating everything?

bryanrasmussen 7 hours ago | parent | prev [-]

attempting to come up with reason here:

because compute is owned and sold by people who have businesses built on top of compute. They let you have their excess compute, so it follows that their needs will come before yours.

markus_zhang 7 hours ago | parent | prev [-]

You are kinda calling for Communism without spelling it out, I think.

raincole 7 hours ago | parent | next [-]

Of course. If AGI becomes real, I don't see any reason to keep a capitalist society. Ultimately, capitalism works because it incentivizes people to produce goods and services efficiently. If AI is more efficient than humans in every single aspect, then what's the point of giving people economic incentives?

visarga 7 hours ago | parent | next [-]

Things still cost money. There are always scarcities. For example, cells reached exponential self-replication capability long ago, but eventually hit environmental constraints. It became a struggle for survival; under infinite resources they would replicate infinitely without effort instead of evolving.

red_rech 4 hours ago | parent | next [-]

Don’t worry, the AI cokeheads have told me that LLMs will usher in Star Trek post scarcity.

loa_in_ 6 hours ago | parent | prev [-]

We're much closer to modelling cells than humans, yet we still pretend we're already there on both counts.

Ray20 7 hours ago | parent | prev [-]

It's funny that just over 100 years ago, they were saying EXACTLY the same thing about electricity. Ultimately, history has shown all the reasons to keep a capitalist society.

raincole 6 hours ago | parent [-]

100 years ago people said electricity would be more efficient than humans in every single aspect? They said electricity would invent a more efficient way to generate electricity itself?

That's news to me! Some people were really ahead of their time.

Ray20 6 hours ago | parent [-]

> 100 years ago people said electricity would be more efficient than humans in every single aspect?

Yes, literally.

isoprophlex 7 hours ago | parent | prev | next [-]

Let's call it Commonism then. Where we recognize the need for economic activity that furthers whatever we humans have in common. Instead of tumour-like, zero-sum, number must go up turbocapitalism that just concentrates wealth.

dizzydes 7 hours ago | parent | prev | next [-]

If AGI occurs, some form of communism will be necessary, no? How else will they cover all the costs of UBI? It's our work, Earth's resources, and the internet it's been born from; it should benefit us all.

Ray20 7 hours ago | parent [-]

How does the occurrence of AGI lead to the need for UBI?

dizzydes 6 hours ago | parent | next [-]

I'm assuming there won't be much meaningful work left for most of the population that AI can't do. Some people think the opposite. That seems to be the main point of contention.

brador 3 hours ago | parent | prev [-]

Solve for bread or the circus is free.

general1465 6 hours ago | parent | prev | next [-]

And some hybrid of capitalism and socialism will eventually emerge. The target would be to prevent the rich few from hoarding wealth and to force them to put it back into the economy. Otherwise people with nothing to lose will just repeat the social revolutions of the 19th and 20th centuries.

stego-tech 7 hours ago | parent | prev [-]

Because too many HN folk see that word and recoil: they're only casually "familiar" with human attempts at it during the era of scarcity, as told by mass media, with no understanding of why those systems actually failed or succeeded.

In a post-scarcity society (which we're technically in now, if we took this seriously), Communism is a more appropriate model of governance than Capitalism. It would ensure a more equitable distribution of resources, incentivize stronger environmental policies to minimize waste, and drive technological innovation towards preservation (of truly scarce resources - rare elements, for instance) over extraction.

The problem is that humans desire power for themselves and the humiliation of others, which results in every method of governance becoming corrupted over time, especially if it doesn't see regular change to address its weaknesses (as we see now with neoliberal societies resisting populism on both extremes of the political scale). Combined with centuries of nationstates lumbering onwards and fighting for their own survival in an increasingly nebulous and ever-shifting digital landscape, and no wonder things are a tinderbox.

All that being said, Communism is an (maybe not the, but an) appropriate choice for a post-scarcity, post-AGI society. It's something we need to discuss in earnest now, and start dismantling Capitalism where feasible to lay the foundation for what comes next. As others (myself included) have pointed out repeatedly, this is likely the last warning we'll get before AGI arrives. It's highly unlikely LLMs and current technology will give rise to AGI, but it's almost a certainty that we'll see actual glimmers of AGI within the next fifty years - and once that genie is out of the bottle, we'll be "locked in" to whatever society we've created for ourselves, until and unless we leave our planet behind and can experiment with our own alternatives at scale.

Good craftsmen know when they've reached the limits of their current tooling. We need to recognize that Capitalism is the wrong tool for an AGI era if we value our humanity.

Ray20 6 hours ago | parent [-]

>In a post-scarcity society

Human needs are unlimited. There can't be any "post-scarcity society."

The transition point to a post-scarcity society is in the eye of the beholder, and it moves away at the same speed at which you approach it.

From the perspective of the hundreds of millions of people working for 10 cents an hour, any American, even the poorest, whose only available job pays the $8-an-hour minimum wage, has long since passed the point of post-scarcity.

But try convincing that minimum-wage American that he's beyond that point and needs to give up $6 of his $8 because "there is no scarcity above $2 per hour." Then you will learn people's real opinion of "more equitable distribution of resources, stronger environmental policies to minimize waste, and technological innovation driven towards preservation."

stego-tech 5 hours ago | parent [-]

If I'm understanding your broken sentences correctly, you're seemingly trying to parrot the same "insatiable appetite of humanity" that all proponents of Capitalism like to trot out as some sort of defense of the (otherwise) indefensible; same with your misleading comparison of income and cost of living across national boundaries.

The fundamental needs of humanity aren't infinite: a safe home, nutritious food, healthcare, and education are the sum total of human needs. Everything else is superfluous to survival, albeit not self-fulfillment or personal enrichment. We're post-scarcity in the sense that, on a planetary scale, we have enough food, shelter, healthcare, and education for every single inhabitant on Earth, but Capitalism incentivizes the misuse of these surplus resources to create value for existing stakeholders.

This is where I flatly reject any notion of Capitalism being viable, suitable, or acceptable in a post-AGI society, and rail against it in the present day. Its incentives no longer align with human needs or challenges, and in fact harm humanity as a whole by promoting zero-sum wealth extraction rather than a reconciling of the gap between human needs and Capital desires. As much pro-Capitalism content as I consume in an effort to better my perspective, the reality is that it is rapidly outliving its usefulness as a tool like a shambling zombie, wholly divested from human survival and soldiering onward solely as a means to prop up the existing power structures in existence.

Avicebron 8 hours ago | parent | prev | next [-]

I wonder if "AGI" is going to end up like quantum computing: expectations and predictions so unmoored from reality that everyone just sort of pretends it's a thing without ever actually thinking about what's going on.

Edit: words

jordanb 7 hours ago | parent | next [-]

The history of AI since the 1960s is slow, incremental improvement: the public loses interest for a decade or so, then notices the last decade of progress when someone releases a glitzy demo, followed by an investment frenzy with a bunch of hucksters promising that HAL 9000 is two years away, followed by the zeitgeist forgetting about it for another decade-ish.

This has happened at least five times so far.

cogman10 7 hours ago | parent [-]

I'd say we are getting pretty close to the "now or never" point of AGI.

We are pretty close to the limits of transistor fabrication. Barring radically different manufacturing and/or ASIC development, the performance we have today will be the performance available in 10 years (I predict we'll see maybe 2x compute performance in 10 years).

If you've paid attention, you've already seen the slowdown of compute development. A 3060 GPU isn't really significantly slower than a 5060 even though it's 5 years old now.
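The slowdown can be put in rough numbers. A quick sketch of the implied annual growth rates (the 2-year historical doubling period and the 2x-in-10-years forecast are the figures from the comment above, used for illustration, not measured data):

```python
# Rough sketch: annual compute growth implied by a doubling period.
# Inputs are illustrative figures from the comment, not measurements.

def annual_growth(doubling_years: float) -> float:
    """Annual growth factor implied by a given doubling period."""
    return 2 ** (1 / doubling_years)

historical = annual_growth(2)   # Moore's-law-era pace: ~41% per year
forecast = annual_growth(10)    # "2x in 10 years": ~7% per year

print(f"historical: {historical:.3f}x/yr, forecast: {forecast:.3f}x/yr")
```

Going from ~41%/year to ~7%/year compounding is the difference between a 32x decade and a 2x decade, which is the scale of slowdown the comment is describing.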

wood_spirit 7 hours ago | parent [-]

A human neuron is a thousand times bigger than a transistor.

There are directions hardware and algorithms have been going in - parallel processing - that are not limited by fabrication?

cogman10 7 hours ago | parent | next [-]

> A human neuron is a thousand times bigger than a transistor.

Correct - it works on principles currently completely unapplied in ASIC design. We don't, for example, have many mechanisms that allow new pathways to form in hardware, at least not outside of highly controlled fashions. It's not clear it would even be helpful if we did.

> There are directions hardware and algorithms have been going in - parallel processing - that are not limited by fabrication?

They are limited by the power budget. Yes, we can increase the amount of parallel compute 100x, but not without also increasing the power budget by 100x.

Further, not all problems can be made parallel. Data dependencies exist, and those always slow things down. And coordination isn't free for parallel algorithms.

I'm not saying there's no new, unexplored way to do computation. I'm saying we've traveled a multi-decade path to today's compute capabilities, and we may be at the end of this road. Building a new model that's ultimately adopted will likely take more decades. Consider how hard it's been to purge x86 from society - and we're looking at a problem a million times more difficult than getting rid of x86.
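The data-dependency limit described above is classically captured by Amdahl's law. A minimal sketch (the 95%-parallel fraction is an invented illustration, not a figure from the thread):

```python
# Amdahl's law: speedup from n parallel workers when only a fraction p
# of the work can be parallelized. Serial data dependencies cap the
# gain no matter how much compute (and power budget) you add.

def amdahl_speedup(p: float, n: int) -> float:
    """Overall speedup with parallel fraction p on n workers."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelizable, 100x the hardware yields
# well under 20x speedup; the limit as n grows is only 1/(1-p) = 20x.
print(amdahl_speedup(0.95, 100))
```

This is why "just add 100x more parallel compute" buys far less than 100x in practice, before coordination overhead is even counted.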

oidar 7 hours ago | parent | prev [-]

Transistors will never reach the efficiency of a neuron. A transistor is too limited in its connections.

antegamisou 7 hours ago | parent | prev [-]

This is easily the case for most laypeople, in my experience at least. Plenty of people are fairly taken aback by GenAI's capabilities, and some of them have genuinely expressed concern about human intelligence going extinct very soon.

wood_spirit 7 hours ago | parent | prev | next [-]

I imagine we know we have reached AGI when a technocrat stops sharing their AI. It goes from being something they can sell to something that they don’t want to share. Instead they ask it how they can dominate the world and be the first trillionaire and how they can stop anyone else acquiring an AGI etc.

This even works at a smaller, not-so-general level: imagine one of today's popular code models improved to the point that it is better (narrowly, at programming) than a human. Suddenly the owner shouldn't sell it to everyone: instead they should pivot and make software that outcompetes anything a human can make. So it doesn't just replace humans making software, but also the programs that people made…

cjbarber 8 hours ago | parent | prev | next [-]

For more economics of AGI, see also:

This tweet recapping this paper https://x.com/lugaricano/status/1969159707693891972

This tweet with recaps of various papers presented at "The Economics of Transformative AI" by NBER in Palo Alto a few weeks ago https://x.com/lugaricano/status/1968704695381156142

stego-tech 7 hours ago | parent | prev | next [-]

I remain amused at classic sci-fi being reimagined as scientific papers. Authors have warned of and celebrated the potential effects of AGI on human society for decades, if not millennia (if we include deities, gods, and other super-human machinations): we know full well that a post-AGI economy will be hell or paradise depending solely on the underlying civilizational structures when it is birthed into existence (hell if Capital reigns supreme and nations remain in conflict, paradise if we get our species' collective shit together and consciously shift to a planetary mindset for economics and governance).

As for what humans should do when labor can be accomplished via AI? Man, if you're asking yourself that question in 2025, you're kinda late to the party in a lot of ways - and desperately need to read more books. AGI may eliminate the need for human labor in Capital terms, but it does not suddenly eradicate all value of human labor - only the value Capital ascribes to it. It's why, in a post-AGI fictional setting, you see so many more adventurers, artisans, explorers, researchers, engineers, teachers, and other roles often undervalued by Capital but highly valued by humans.

I imagine most humans will simply turn their favorite hobby into their new "profession", as a means to meet other humans and continue growing social bonds. Maybe that means painting full-time, or striking out on a photography journey. Maybe it's opening a retro game "store" to connect with like-minded enthusiasts. Maybe it's running your own museum for local artists and creators, sharing your tastes with other visitors.

A lot of "valueless" work under Capitalism suddenly becomes highly prized in a post-AGI, post-scarcity society. Most humans will figure that out to varying degrees, and the rest will be able to benefit from a wider array of fee-less services. Provided, of course, we begin changing society today to enable that future tomorrow.

dist-epoch 6 hours ago | parent [-]

You should also read a few more books. Capital doesn't need humans. We are its temporary vessel.

stego-tech 4 hours ago | parent [-]

Your pithy comment and thinly-veiled insult don't withstand scrutiny before a simple mirror. If Capital does not need humans, then humans do not need Capital; viewed against another mirror, if Capital does not need humans, then where does Capital derive its value from if not the lease of its assets back to humans as a form of wealth extraction from the working class to the asset class?

If all humans are asset holders, then where does Capital arise? If all wealth has been extracted from the asset-less into the hands of Capital, then why does Capital need humans?

I can keep cutting your particular barb in any way I choose, but the answer remains the same: Capital, while a useful tool in improving society over the past few centuries by migrating away from Feudalism and Monarchy into Capitalism and Democracy, is no longer useful (or even ethical) now that its incentives are diametrically opposed to human prosperity for all except the asset class.

cogman10 7 hours ago | parent | prev | next [-]

Assuming AGI or something close to it hits, it'll likely be a nightmare for the average person.

The model in the paper talks about freeing the labor bottlenecks, which will be great for the capitalists who own the compute. Assuming we don't have a social safety net in place, that will ultimately mean that a lot of stuff is cheaper for the wealthy with nobody else being able to afford it.

What would a middle class job be? Almost all knowledge work would be obliterated. Engineering, science, maybe even art. All evaporated.

The paper suggests that people will shift into markets like manual labor. But what do we do when those jobs have no real bottleneck, as the paper describes? There are only so many people needed for care work or picking berries. And right now it seems we have enough, as the current salaries are pitiful in all the sectors the article mentions. What pressure would actually make wages increase? Surely not the fact that those are the only jobs left for normal people who weren't born to a family that owns a datacenter.

And it isn't like datacenter jobs are going to replace the army of jobs AGI would displace. You need so few people to operate a compute center.

That's why it'll likely be hell for most people. If you don't actually own resources, you'll be left out in the cold. Even if you do have pretty good resources, you'll be in a world with AGI set to perfectly extract every single bit of resource from you in ways we currently can't imagine - for example, knowing everything about you, including that you are willing to spend $11 for a widget while someone else is only willing to spend $10.
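The widget scenario is essentially first-degree price discrimination. A toy sketch of why a seller who knows every buyer's exact valuation extracts more than any single posted price could (willingness-to-pay values are invented, extending the $11/$10 example):

```python
# Toy model: perfect (first-degree) price discrimination vs. one
# posted price. Willingness-to-pay values are invented for illustration.

def uniform_revenue(wtp: list[float], price: float) -> float:
    """Revenue at one posted price: only buyers who value it buy."""
    return price * sum(1 for w in wtp if w >= price)

def discriminating_revenue(wtp: list[float]) -> float:
    """A seller who knows each buyer's valuation charges exactly it."""
    return sum(wtp)

buyers = [11.0, 10.0, 6.0]  # the $11 and $10 buyers, plus one more
best_uniform = max(uniform_revenue(buyers, p) for p in buyers)
print(best_uniform, discriminating_revenue(buyers))  # 20.0 vs 27.0
```

With a single price the seller leaves money on the table (the $11 buyer pays $10, the $6 buyer is priced out); with perfect knowledge, all consumer surplus goes to the seller.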

This will ultimately force the question of "what do we do with the unemployed" and I worry the answer is already "well, they should have worked harder. Sucks to suck".

visarga 7 hours ago | parent | next [-]

I think you jump from AGI to "humans not needed" too abruptly. First of all, AGI might be smarter than you, but you have to live with the consequences of using it, so you can't be removed from the loop. We need accountability, and AI can't provide it; it has no skin in the game. We need to act well in a local context; AI sits in the datacenter, not on the ground with us. Humans need to bring the context to the AI.

Hinton predicted 9 years ago that radiologists would lose their jobs, yet today they are better paid and more numerous. Maybe AGI will make humans more valuable instead of less. There might be complementarities and mutual reinforcement between us and AGI.

cogman10 6 hours ago | parent [-]

> I think you jump from AGI to "human not needed" too abruptly.

No, I'm really just looking at what this paper proposes will be the future of labor and expanding on it. I'm not saying AGI will mean "humans not needed"; I'm saying AGI will mean "fewer humans are needed," and in some cases that could be significantly fewer. If you've listened to any CEO gush over AI, you know that's exactly what they want.

> Hinton predicted 9 years ago that radiologists will lose their jobs, yet today they are better paid and more. Maybe AGI will make humans more valuable instead of less. There might be complementarities and mutual reinforcement between us and AGI.

Medicine is a tricky place for AI to integrate, yet it is already integrating there. In particular, basically every health insurance agency is moving towards using AI to auto-deny claims. That is a concrete case where companies are happy to live with the consequences, even though those are pretty dire for the people they impact.

And, not for nothing, 10 years is a pretty short time to completely eliminate an industry. The more we pay radiologists, the more likely you'll start seeing hospitals decide that "maybe we should just start moving low-risk scans to AI." Or you might start seeing cheap remote radiologists willing to take on the risk of getting things wrong.

xienze 7 hours ago | parent | prev | next [-]

It’s very simple, you see. No one who doesn’t want to work will have to. There will be infinite UBI for everyone paid for by <waves hands> all those people who feel obligated to work for some reason.

cogman10 7 hours ago | parent | next [-]

UBI would have to be taken from the owners of compute to ever possibly be enough to fund things. Those will be the people with the vast majority of the resources.

And given today's political environment, I see that as particularly unlikely ever to happen. Everyone offers "UBI" as a solution, but I've yet to see even a hint of it being tried. In fact, the opposite seems to be happening in the US, with SSI (which is UBI) being slowly defunded and made less accessible.

testing22321 6 hours ago | parent | prev | next [-]

Australia has had that for decades. Anyone who doesn't want to work gets free money from the gov, forever. Works fine.

danaris 7 hours ago | parent | prev [-]

You are very clearly being sarcastic, but the only reason UBI like this wouldn't work in the scenario being described is that the already-mega-wealthy owners of the AGI hardware and software decide they want to have all the money instead of actually making the promise of automation real for all of humanity.

And to be clear, there is no conceivable universe in which that extra money would make their lives better in any meaningful way.

They could support high taxes on the money they earn through AGI, to fund a UBI that would support literally everyone - because their products would be doing literally all the work necessary to maintain a civilized society (barring some in-person tasks that are hard to hand over to even a very smart robot) without any human actually needing to work. They could do so without making themselves poor, or even the least bit less comfortable.

The reason they would choose not to is because they're corrupt selfish "rugged individualists" who care more about their dollar-denominated high scores than about literally any other human being on the planet. And we know this because that's the case with the people in the closest-analog positions today.

Theodores 6 hours ago | parent | prev [-]

In the 1980s, at my particular school in the UK, we were being primed for a society where computers and machines would do so much work that we would only have to work ten hours a week or so. Therefore the curriculum included lots of classes on leisure activities - hobbies, essentially.

This world never came to be. Instead we had Graeber's 'McJobs', with ever more specialist roles as per the capitalist division of labour idea, with lots of important jobs that aren't important at all.

We had a glimpse of a 'leisure society' during the pandemic when only key workers were needed to do anything useful, everyone else was on the government furlough.

In theory, AGI offers the prospect of a leisure society of sorts. However, it doesn't deliver one. Going back to the unusual curriculum at the school I happened to go to: a key skill was critical thinking. As well as doing macramé, football, cookery, pottery, art and whatnot during what would otherwise be teaching time, we also had plenty of courses on philosophy and much else that can be bracketed as literature. The idea was that we were not being brought up to be compliant serfs for the capitalist machine; instead we were expected to have agency and be able to think for ourselves.

The problem with AGI is that we are bypassing our brains to some extent, at the cost of ever mastering the art of critical thinking. Nobody has to solve a problem by themselves; they can ask their phone to do it for them. I could be wrong; however, I don't see evidence that AGI is making people smarter or more intelligent.

We have already outsourced our ability to recall information to search engines. General knowledge used to be something you had or you didn't, and people earned respect for being able to remember and recall vast amounts of information. This information came from books that you either had or had to access in a library. Nowadays, whatever it is, a search engine has got you covered. Nobody has to be a walking gazetteer or dictionary.

This ability to recall information rather than look everything up came with risks, mostly because it was easy to be wrong, or 'almost right', which can be worse. However, it is/was the bedrock of critical thinking.

Clearly the utopian vision of a leisure society never happened in the form some envisaged in the 1980s. With AGI I don't see any talk of a leisure society where we only put in ten-hour working weeks. This isn't being proposed at all. If it were, then AGI would not be a nightmare for the average person.

lerp-io 7 hours ago | parent | prev | next [-]

okay so what exactly is this "accessory" work that will be left to the humans lol

WesolyKubeczek 7 hours ago | parent | prev | next [-]

I think if AGI ever becomes real, we will all end up as the Indiots of the Twenty-Fourth Voyage of Ijon Tichy.

ordinaryradical 6 hours ago | parent | prev | next [-]

AGI is the new Marxism—a utopian dream unmoored from reality, which does not account for the nature of people, economics, states, or even nature itself. A fantasy society that will never come to pass but, if attempted by fanatics, will probably do great damage.

AndrewKemendo 7 hours ago | parent | prev | next [-]

Yet another paper using the term “AGI” without rigorously defining it.

There is an oblique definition embedded in the context - that AGI is literally any labor-transfer technology - but without a definition rigorous enough to permit measurement (which is possible), these conversations and concepts are going to stay unmoored from reality.

I made this Google Form link, and so far the answers are all over the place, including:

- "you can't"
- "It Can ask: I, why?" (odd answer)

and my personal favorite: "It can do everything better than humans with no prior information" (impossible, according to the no-free-lunch theorem):

https://howdoyoudefineagi.com/

philipkglass 7 hours ago | parent | prev [-]

> As labor ceases to be the primary driver of value, economic policy must confront a basic question: how can we share the income generated by compute? In a world where AGI performs all bottleneck work, income generated by production flows to those who own or control computational resources. One approach is to redistribute the gains from compute through universal dividends. An alternative is to reimagine compute as a public or semi-public resource, akin to land or natural capital, with returns broadly shared.

It might be clearer to say "how can we share goods and services provided by intelligent machines?"; the paper seems overly focused on compute as "the" bottleneck when natural resources are still needed. (Even AGI can't magic up new copper atoms from scratch, though it can exploit low grade ores that were previously economically useless.)

I think that referring to income instead of goods and services predisposes people to think of this in a currency-centered way, when currency-denominated market transactions may become much less important in a world of intelligent machines. If "(v) the share of labor income in GDP converges to zero" is actually true, then machines can do everything, including copying other machines. Co-ops, municipalities, provinces, and states can vertically integrate to provide goods and services outside the market if intelligent machines are actually doing the work. Compare the old "buy an OS, buy a database, buy a compiler" approach that one had to take circa 1990 with the "copy a free Linux distribution" approach circa 2000.

If wage income is obsolete, so is intellectual property rent. One common fear in these discussions is that billionaires with robots will let the masses die of starvation when their labor is no longer needed, but the billionaires who own IP don't have much leverage either.

For one, most of the world's population/governments aren't beholden to IP billionaires now. It might seem like that to those of us who live in the Anglosphere (a plurality of HN users), but globally speaking most billionaires didn't make their money from IP. They'll back national movements to ignore copyright and patents to Just Copy The Foreign Robots when the time comes. (South Africans who were wealthy from mining didn't have an incentive to side with foreigners who were wealthy from pharma patents back when South Africa was ignoring IP to fight HIV/AIDS, as a parallel example). For another, billionaires aren't even in charge everywhere. See the example of China with Jack Ma, or the fabulously wealthy oligarchs that have been brutally demoted in Putin's Russia. If a leader can accumulate power and rally popular support by giving free robot goods and services to the people, they will; the IP billionaires don't have anything to trump that offer.

For these reasons I don't worry about mass immiseration/starvation if smart robots actually take over all productive work. I'm sure there will be struggles, but I don't think that IP owners can win the fight any more than the MPAA actually ever eliminated movie piracy.

The thing that worries me more is mass empowerment of even the world's most inflexible and violent personalities. Nuclear weapons, ballistic missiles, and nerve gas are now old technologies. The main thing that prevents every angry separatist movement or cult from becoming armed like North Korea or Aum Shinrikyo is lack of material and technical resources. But in a world of smart robots, all you need to get your own enriched uranium and ballistic missiles is those smart robots, a region of several square kilometers that no outside force is policing, and a few years to bootstrap the precursor technologies that don't already have blueprints in public databases.

Mass material abundance probably means a decrease in "ordinary" crime driven by the stress of material deprivation but an increase in tail risks from unhinged individuals. The sort of person who kills 5 strangers with a gun in the name of their ideology could become a person who shoots down an airliner, killing 200, in the name of the same ideology.

walleeee 7 hours ago | parent [-]

The compute-oriented myopia is less a bug than a feature of the underlying mythology. The promise of abundance on the horizon is a central pillar of the narrative's self-justification, and its plausibility requires energy blindness.