Artificial Intelligence and the Future of Work(nap.nationalacademies.org)
49 points by ckcheng 3 days ago | 30 comments
hi_hi 2 days ago | parent | next [-]

Is this article an accurate reflection of people's experience, or more generic LinkedIn clickbait? I'm assuming the latter, with content like

>Substantial and ongoing improvements in AI’s capabilities, along with its broad applicability to a large fraction of the cognitive tasks in the economy and its ability to spur complementary innovations, offer the promise of significant improvements to productivity and implications for workforce dynamics.

I keep waiting for the industry shifting changes to materialise, or at least begin to materialise. I see promise with the coding tools, and personally find Claude and Cursor like tools to warrant some of the general hype, but when I look around for similar changes in other tangentially related roles I draw a blank. Some of the Microsoft meeting minute summaries are good, while the transcripts are abysmal. These are helpful, but not necessarily game changing.

Hallucinations, or even the risk of hallucinations, seem like a fundamental show stopper for some domains where this could otherwise be useful. Is this likely to be overcome in the near future? I'd assume it's a core area of research, but I know nothing of this area, so any insights would be enlightening.

What other domains are currently being uplifted in the same way as coding?

Nevermark 2 days ago | parent [-]

I think the analysis is forward looking.

> This technical progress is likely to continue in coming years, with the potential to complement or replace human labor in certain tasks and reshape job markets. However, it is difficult to predict exactly which new AI capabilities might emerge, and when these advances might occur.

The "small" benefits you list are in fact unprecedented and periodically improving (in my experience).

The generality and breadth of information these models are incorporating was science-fiction-level fantasy just two years or so ago. The expanding generality and growing context windows would seem to be a credible indicator of a threat to workers.

So it is not unreasonable to worry about where all this is quickly going.

swatcoder 2 days ago | parent [-]

> The "small" benefits you list are in fact unprecedented and periodically improving (in my experience).

It's only the mechanism that's unprecedented, cementing these new approaches as a state-of-the-art evolution for code completion, automatic summarizing/transcription/translation, image analysis, music generation, etc. -- all of which were already commercialized and making regular forward strides for a long while. You may not have been aware of the state of all those things before, but that doesn't make them unprecedented.

We actually haven't seen many radical or unprecedented achievements at commercial scale yet, with reliability proving to be the biggest impediment to commercializing anything that can't rely on immediate human supervision.

Even if we get stuck here, where human engagement remains needed, there's a lot of fun engineering to do and a number of industries we can expect to see reconfigured. So it's not nothing. But there's really no evidence toward revolution or catastrophe just yet.

Nevermark 2 days ago | parent [-]

> It's only the mechanism that's unprecedented

I think this is correct and also the point.

Neural networks, deep learning models, have been reliably improving year over year for a very long time. Even in the '90s on CPUs, the combination of CPU improvements and training-algorithm improvements translated into a noticeable upward arc.

However, they were not yet suitable for anything but small boutique problems. The computing power, speed, RAM, etc. just wasn't there until GPU computing took off.

Since then, compounding GPU power and relatively simple changes in architecture have let deep learning rapidly become relevant in... well, every field where data is relevant. And progress has not just been reliable, but has noticeably accelerated every few years over the last two decades.

So while you are right that today's AI ranges from interesting to moderately helpful (but not Earth-shattering) in many areas, that is what happens when a new wave of technology crosses the threshold of usability.

Past example: "Cars are really not much better than horses, and very finicky." But the cars were on a completely different arc of improvement.

The limitations of current AI models aside, their generality of expertise (flawed as it might be), is unprecedented. Multi-modal systems, longer context windows, and systems for improving glitchy behavior are a given, and will make big quality differences. Those are obvious requirements with relatively obvious means.

We are going to get more than that going forward, just as these models have often been surprisingly useful (at much lower levels and narrower contexts) in the far and recent past.

This train has been accelerating for over three and a half decades. It isn't going to suddenly slow down because it just passed "Go". The opposite.

CaptainFever 2 days ago | parent | prev | next [-]

As a layman (as with most people here), I think this is a good article that summarises the current research on AI's impact on labour markets. The website itself seems like a reliable source.

These points made sense to me: it is impossible to predict what will actually happen, we need better pro-level tools for AI assistance (e.g. Copilot, writer autocomplete, ControlNet) rather than AI as a full replacement, and we need better and clearer paths to retraining and job mobility.

I disagreed with only one point in there: that research is needed for ways to compensate people for the use of their creative works, but that is solely because of my pro-free-cultural moral views. The rest of the article is still good.

killjoywashere 2 days ago | parent [-]

> pro-free-cultural moral views

Mike Monteiro would like a word

"Who in this room is now, or has at some time, been in creative services?

"Who here has, at some time, had trouble getting paid by a client for work they were doing?

"Raise your hand if any of these are familiar to you:

"'We ended up not using the work.'

"'It's really not what we wanted after all.'

"Alright. Who's familiar with Goodfellas?

"Alright. 'We got somebody internal to do it instead.'

"'Fuck you. Pay me.'"


https://www.youtube.com/watch?v=jVkLVRt6c1U

CaptainFever 2 days ago | parent | next [-]

Hmm. I recognise the similarities ("pay me") but I also see differences ("pay me for work done", as in the video vs "pay me for replicating my work", as in IP laws).

Free culture isn't against the former (therefore this video doesn't actually address the point), but is against the latter, as being restricted from replicating work harms culture and innovation as a whole (e.g. memes and fan art being technically illegal), and imposes a large cost on the public.

That said, I'm not fully against IP laws, just that it should be limited to 14 years and only in situations where it is necessary for the production of it in the first place (e.g. articles behind paywalls). I believe I have a right to an opinion on this as a member of the public, as IP laws are a compromise between the public and the creators. It's not some natural human right.

In this moral view, if AI trains on my HN comment for example, copyright shouldn't come into play because I didn't require it to produce this comment. I had other incentives to write this comment.

As a counter-example, no one cared about statistical analysis (which is what AI is) when it was just building a corpus, doing classification, or even generating GPT-2-level text. It's only when it becomes a threat to jobs that people panic. This reveals the real problem: it is about jobs, not data. And so the solution: financial support, equal education and job retraining. Not expanding copyright laws to cover analysis as well.

killjoywashere 2 days ago | parent | next [-]

> This reveals the real problem: it is about jobs, not data.

I think you're selling "it is about jobs, not data" a little short here.

Let me start off by saying I work on AI for healthcare, my first IRB protocol that contemplated computer vision is from 2012. I'm not an OG, but I've been working on this stuff for a while, and I'm very bullish.

On the flipside, my wife is a pediatric occupational therapist, she works with autistic kids. Kids in the Bay Area. Her clients are Google machine learning engineers. The major issue with jobs is not so much the monetary value of the work, although that's an important secondary outcome.

Humans need an occupation. Occupation is the purpose of life. Long before money was a thing, we needed purpose. Even kings and their courtiers, before money, needed occupations. We need to be doing something. People change their occupations from time to time. It doesn't have to be a job. It could be a hobby, in some cases. Even kids want to contribute. Even infants, as soon as they understand and can, will reach out to console or delight their caregiver. But we need to feel like we are giving back. Our minds and muscles atrophy if we are not occupied.

If we attach everything to solar power and let the machines run the world, we'll end up some version of the blob people in Wall-E's Buy n Large spaceships. The reward at the end of the movie was the people getting off the space ship, thanking the robots for giving them back purpose, mainly to restore their planet.

Purpose and occupation are about so much more than money.

mistrial9 2 days ago | parent | prev [-]

> being restricted from replicating work harms culture and innovation as a whole

I believe there is no "one size fits all" answer to this, and failing to acknowledge that up front harms the resulting discussion. The societal situation of an individual using their talent to make art or design, alone or in self-selected teams, is not the same as that of large companies who run market systems and have attorneys and accountants to aid them over time.

Lots of excited people argue against copyright and then go directly to the story of the Mickey Mouse image.

It is precisely because copying is so different from creating that the situations within the breadth of this topic are not, and should not be, comparable.

"A great society is judged by how it treats the least of its members" very much applies to working arts and crafts adults IMHO.

a day ago | parent [-]
[deleted]
2 days ago | parent | prev [-]
[deleted]
jmyeet 2 days ago | parent | prev | next [-]

AI is just one part of a larger and longstanding conversation about the future of work in an era of automation. We've long speculated that at some point we won't need the entire population to do all the work. Economists have talked about 20% of the population doing the work.

This can go one of two ways:

1. Fewer jobs will be used to further suppress wages. What little wages people earn will go toward essentially subsistence living. The extreme end of this is like the brick kiln workers in Pakistan, India and Bangladesh. A lot of people, myself included, call this neofeudalism because you will be a modern-day serf. The wealth concentration here will be even more extreme than it is now. We're also starting to see this play out in South Korea; or

2. The created wealth will elevate the lowest among us so work becomes not required but a bonus if you want extra. The key element here is the removal of the coercive element of capitalism.

To put this in perspective, total US corporate profits are approaching $4T per year. That's roughly $15,000 per US adult. Some would call that the exploited surplus labor value.

Here's another number: we've spent something like $10T on the War on Terror since 9/11. What could $10T buy? Quite literally everything in the United States of America other than the land.

What's depressing is that roughly half the country is championing and celebrating our neofeudalist future even though virtually none of them will benefit from it.

2 days ago | parent [-]
[deleted]
xnx 2 days ago | parent | prev | next [-]

I'm pretty certain that one of the first things we'll see is more jobs recording worker activity (computer activity, calls, video recording) as training data for future automation. Data from teleoperation of robots would be especially useful for physical tasks.

CuriouslyC 2 days ago | parent [-]

That data is valuable beyond just future training. You can automate a lot of management using that information.

onemoresoop 2 days ago | parent [-]

I think it’s not management but the source of that data that will get eliminated first.

chinabot 3 days ago | parent | prev [-]

Speculation can be enjoyable, but given the rapid pace of AI advancements, where today's capabilities may be obsolete within a year, it's wise to approach any claims with a healthy dose of skepticism.

a3w 3 days ago | parent [-]

Are any products using LLMs on the horizon, other than code completion? I have been a power user, hoping my workflows would improve. Just about every workflow got slower with statistical AI, and I am back to using logical AI like Wolfram Alpha and Bayesian methods.

mark242 2 days ago | parent | next [-]

There are entire categories of saas and enterprise vendors that are about to be completely blown away.

For example -- not long ago, when you wanted to do l10n/i18n for your business, you'd have to go through a pretty painful process of integrating with e.g. translations.com. If you're running an ecommerce site with a lot of new products (and product descriptions) coming online quickly, that whole process would be painful and expensive.

Fast forward to today -- a well-crafted prompt to Llama 3.1 within a product pipeline makes that vendor completely obsolete. Now, you could argue that this kind of automation isn't new; you could have done it with an API call to Google Translate or something similar, and sure, that's possible. But now you have one single interface into a very broad, capable brain to carry out any number of tasks.

If I was a vendor whose business was at all centered around language or data ETL or anything that involves taking text and doing something with it, I would be absolutely terrified at someone writing a 20-line python script with a good system prompt that would make my entire business's reason for being evaporate.
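For concreteness, here is a minimal sketch of that kind of "short script with a good system prompt" pipeline. Everything in it is an illustrative assumption, not a tested product: the prompt wording, the function names, and the idea of pointing an OpenAI-compatible client at a model like Llama 3.1.

```python
# Sketch of a product-description translation step in a pipeline.
# The prompt text and structure here are illustrative assumptions.

SYSTEM_PROMPT = (
    "You are a professional e-commerce translator. Translate the user's "
    "product description into {language}. Preserve brand names, measurements, "
    "and HTML tags exactly. Output only the translation."
)

def build_messages(description: str, target_language: str) -> list:
    """Assemble the chat messages for one translation request."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT.format(language=target_language)},
        {"role": "user", "content": description},
    ]

def translate(client, model: str, description: str, target_language: str) -> str:
    """Run one description through an OpenAI-compatible chat endpoint."""
    resp = client.chat.completions.create(
        model=model,
        messages=build_messages(description, target_language),
        temperature=0,  # keep pipeline output as repeatable as possible
    )
    return resp.choices[0].message.content.strip()
```

The `client` would be whatever OpenAI-compatible client your hosting setup provides; the point is only that the vendor integration collapses into a prompt plus one API call.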

swatcoder 2 days ago | parent [-]

That's not the state of today at all, and probably doesn't represent the near or medium future.

Using the unmonitored output of an LLM translation service for your commercial content, in languages you can't read, represents a big reduction in quality assurance and greatly increases the risk of brand embarrassment and possibly even product misrepresentation, while leaving you with no recourse to blame or shift liability.

> If I was a vendor whose business was at all centered around language or data ETL or anything that involves taking text and doing something with it, I would be absolutely terrified at someone writing a 20-line python script with a good system prompt that would make my entire business's reason for being evaporate.

The more likely future is that existing translation houses will increasingly turn to LLM-assistance to raise the efficiency and lower the skill threshold for their staff, who still deliver the actual key values of quality assurance and accountability. This will likely drive prices down and greatly reduce how many people are working as translators in these firms, but it's an opportunity for them, not a threat.
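That human-in-the-loop workflow (machine drafts gated behind reviewer approval) can be sketched as a simple review queue. All names here are illustrative, not anyone's actual system:

```python
from dataclasses import dataclass

@dataclass
class Draft:
    """One machine-translated item awaiting human quality assurance."""
    source: str
    machine_output: str
    approved: bool = False
    reviewer_note: str = ""

class ReviewQueue:
    """Drafts enter machine-translated; only reviewer-approved drafts ship."""

    def __init__(self):
        self._drafts = []

    def submit(self, source: str, machine_output: str) -> Draft:
        draft = Draft(source, machine_output)
        self._drafts.append(draft)
        return draft

    def pending(self):
        """Drafts still waiting on a human reviewer."""
        return [d for d in self._drafts if not d.approved]

    def approve(self, draft: Draft, note: str = "") -> None:
        draft.approved = True
        draft.reviewer_note = note

    def shippable(self):
        """Only human-approved text ever reaches the customer."""
        return [d.machine_output for d in self._drafts if d.approved]
```

The design choice is the point: the LLM raises reviewer throughput, but accountability stays with the human who clicks approve.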

LLMs don't seem to be on track to be the foolproof end-user tools that the early hype promised. They don't let us magically do everything ourselves and (like crypto being incompatible with necessary regulations) they don't offer all the other assurances that orgs need when they hire vendors. But they can very likely accelerate trained people in certain cases and still have an impact on industry through specialty vendors that build internal workflows around them.

A4ET8a8uTh0 2 days ago | parent | next [-]

<< That's not the state of today at all, and probably doesn't represent the near or medium future.

Thank you for saying this. I briefly wondered if my particular company is just way behind or particularly dysfunctional and disorganized (a possibility, for sure). I do agree with your observation on LLMs effectively lowering the entry-level skill bar: yesterday I was able to dissect an XML file despite it not being something I could normally do without prep work, and despite some mildly unusual (I thought) formatting choices by the vendor. There was still a fair amount of back and forth to reach what some enthusiasts would call the 'perfect prompt', and interesting bugs that had to be addressed. But having seen the daily mess at my company, I am not exactly a full-blown evangelist. I see it more as 'get to the wrong answer faster'. That is the part that concerns me.

incrudible 2 days ago | parent | prev | next [-]

This sums up my view on AI and machine autonomy in general. The human added value is accountability. For a similar reason, outsourcing to faceless offshore companies usually does not work out.

There is nothing to suggest that AI will not require an expert in the loop in the future. Every single one of these products has a disclaimer that it will produce false and misleading results.

Of course, there are only so many experts needed for a given problem domain to fulfill all the demand, but that is true even without automation.

chinabot 2 days ago | parent | prev [-]

You may be right, but I would approach this with an open mind. Whether the trajectory of AI development remains an asymptote to human intelligence or surpasses it entirely, the increasing investment, involvement of diverse stakeholders, and growing stakes suggest that virtually every job role may face disruption or, at the very least, re-evaluation.

kevinmershon 2 days ago | parent | prev | next [-]

phone self-service systems, tutoring services, contract review, recruiting. Just to name a few

ben_w 2 days ago | parent [-]

> contract review

Yeah, no.

As part of a hilariously bad set of actions by a corporation that I had to threaten with legal action, I decided to try seeing what ChatGPT had to say, knowing in advance all the problems with it in this field, and… it was pretty much what I expected: enough to be interesting and get the general vibe right, but basically every specific detail that I could look up independently without legal training of my own was a citation of something that didn't exist.

I'd just about trust them on language tutoring, but even then only on the best supported languages.

Use them as enthusiastic free interns-to-juniors depending on the field. At some point, they'll be better, but not in predictable ways or on a predictable schedule.

But they are pretty general in their abilities — not perfectly general, but pretty general — so when they can do any of what you've suggested to a standard where they can make those categories unemployable, they're likely to perform all other text-based (not language-based, has to be text but doesn't have to be words) economic activity to a similar standard.

handfuloflight 2 days ago | parent [-]

Have you tried Claude (3.5 Sonnet)?

2 days ago | parent | next [-]
[deleted]
ben_w 2 days ago | parent | prev [-]

No, just Haiku 3.5, but I do like Anthropic's trained choice of personality more than the one I get from ChatGPT.

ToucanLoucan 2 days ago | parent | prev [-]

One word: spam.

AI has absolutely revolutionized spam and spam detection. Spammers can now generate absolutely unheard of amounts of complete bullshit. And on the other side, spam detection services and algorithms are getting better and better at detecting it, sorting it, and filtering it based on user preferences. Tons of people are enjoying openly AI generated content; and the content that isn't enjoyed by people is instead enjoyed by other AI bots, driving up the engagement rates. That behavior too though is being monitored by other AI, which then prompts spammers to improve their AI so they can avoid that AI and get their stuff seen by engagement AI.

So we have server farms full of computers that are making complete shit that is then thoroughly enjoyed by other server farms full of computers to drive up engagement numbers while still other server farms full of computers are working to detect the fraud and remove it.

Meanwhile, in the real world, we're still hurtling towards climate collapse. But that's okay, we're finally looking into building nuclear reactors again. To power more data centers.

The future is fucking stupid.

Hashex129542 2 days ago | parent [-]

AI is spam itself. If you go on Reddit, you can see a lot of bots working for an agenda.