bborud 3 hours ago

> The AI is coming for that too.

To some degree yes, in practice, not so much. In practice you have to be in the world, talk to people, know how to talk to people, know how to listen, and be able to understand the difference between what people say and what they actually need. Not want. Need.

This was something I learnt in my very first job in the 1980s. I worked for someone who did industrial automation beyond PLCs and suchlike. He spent 6 months working in the company: on the factory floor, in the logistics department, in procurement, in accounting, and even shadowing the board. Then he delivered a proposal for how to restructure parts of the company, change manufacturing processes, and showed how logistics and procurement could be optimized if you saw them as two parts of a bigger dance.

He redesigned the company so that it could a) be automated, and b) leverage automation to increase the efficiency of several parts of the business. THEN, he started planning how to write the software (this was the 80s after all), and then we started implementing it.

Now think about what went into this. For instance we changed a lot of what happened on the factory floor. Because my boss had actually worked it. So he knew what pain points existed. Pain points even the factory workers didn't know how to address because they didn't know that they could be addressed.

I was naive. I thought this was how everyone approached "software projects". People generally don't. But it did teach me that the job isn't writing code. It is reasoning about complex systems that often are not even fully known to those who are part of them.

And this is for _boring_ software that requires very little creativity and mostly zero novelty. Now imagine how you do novel things.

> People knocking out Jira tickets and writing CRUD webapps will end up with their livelihood often taken away. Or bosses will just expect more output for same/less pay, with them having to use AI to keep up.

You make it sound like it is a bad thing that certain tasks become easier.

I spent a lot of time writing CRUD stuff. Because the things i really want to work on depend on them. I don't enjoy what is essentially boilerplate. Who does? If you can do the same job in 1/20 the time, then how is this a bad thing?

It is only a bad thing if writing CRUD webapps is the limit of your ability. We don't argue for banning excavators because it puts people with shovels out of work. We find more meaningful things for them to do and become more productive. New classes of work become low-skilled jobs.
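To make concrete what "essentially boilerplate" means here, this is a minimal sketch of the repetitive CRUD pattern in question: the same four operations, restated for every resource. The `UserStore` class and its field names are hypothetical, purely for illustration.

```python
# A minimal, in-memory sketch of CRUD boilerplate: create, read,
# update, delete. In a real app this same shape gets repeated for
# every resource (users, orders, invoices, ...), which is exactly
# the mechanical work being discussed.

class UserStore:
    def __init__(self):
        self._rows = {}
        self._next_id = 1

    def create(self, name):
        row = {"id": self._next_id, "name": name}
        self._rows[self._next_id] = row
        self._next_id += 1
        return row

    def read(self, user_id):
        return self._rows.get(user_id)

    def update(self, user_id, name):
        row = self._rows.get(user_id)
        if row is not None:
            row["name"] = name
        return row

    def delete(self, user_id):
        return self._rows.pop(user_id, None) is not None
```

Nothing in this shape requires judgment; that is why it was templated long before LLMs and why generating it faster costs nobody anything worth keeping.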

If you have been doing software for a while, you are probably doing some subset of this. But these things are hard to articulate. It is hard to articulate because it is not something we think about. Like walking: easy for us to do, hard to program a robot to do it.

coldtea 2 hours ago | parent | next [-]

>To some degree yes, in practice, not so much. In practice you have to be in the world, talk to people, know how to talk to people, know how to listen, and be able to understand the difference between what people say and what they actually need. Not want. Need.

1 person needs to do that. What about the other 100, who aren't doing that currently to begin with, but doing the AI-automatable work?

SoftTalker 3 hours ago | parent | prev [-]

> To some degree yes, in practice, not so much.

We used to say that (not long ago, even) about the code-writing part. Why do we believe that LLMs are going to stop there? Why do we think they won't soon be able to talk to people, listen, and determine what they need? I think it's mostly a cope.

We have robots walking just fine now, by the way.

sarchertech 3 hours ago | parent | next [-]

If they can do those things they can effectively replace any white collar job. That’s about 45% of the workforce. Societies tend to collapse around 25-30% unemployment.

Imagine 45% of higher than average paying jobs gone.

If that happens we’ll either figure out a new economic system, or society will collapse.

Also, saying robots are walking just fine is misleading for any definition of "just fine" that means anywhere near as good as a human.

ryandrake 2 hours ago | parent | next [-]

Look at how the billionaires are talking about AI: Their clear, unambiguous goal is basically to replace all white collar "knowledge" jobs. And there's currently nothing regulatory that's stopping them--they just need to wait for the state of the art to improve. Once AI is "good enough" if it ever is, they won't even think twice about 45% unemployment. What are we unemployed workers going to do about it? There's no effective labor organization left. Workers have basically no political power or seat at the table. We're not going to get violent--the police/military are already owned by the billionaire class. We're just going to eventually become economically irrelevant and die off.

geodel 2 hours ago | parent | next [-]

> We're just going to eventually become economically irrelevant and die off.

As harsh as it may sound, it seems rather likely to me. It is not like s/w engineers have helped struggling workers in other sectors, beyond sanctimonious "Learn to code" advice. So software folks can't expect any solidarity or help from others.

kiba 2 hours ago | parent | prev | next [-]

The fundamental issue isn't unemployment due to automation, but the fact that society cannot benefit from unemployment.

It should be something for us to celebrate, because it means greater freedom for humans to pursue something else rather than spending time doing drudgery.

shinryuu an hour ago | parent [-]

To put it another way, the issue is that resources are not shared more equitably. This is especially egregious considering that LLMs are trained on all human knowledge. We've all been contributing to this enterprise, and what we may end up getting in return is unemployment.

monknomo 2 hours ago | parent | prev | next [-]

45% of folks sitting on their hands are going to have the free time to talk, and this group of people are skilled at organization. Are you planning on throwing your hands up and passively accepting whatever comes your way?

rootusrootus an hour ago | parent [-]

And at least in the US they have >45% of all the small arms weaponry. There is no bunker strong enough nor private army big enough if 100M people come for you.

ryandrake 39 minutes ago | parent [-]

They're probably betting that the technology they will need to defend their bunkers, think autonomous kill-bots or whatever, will emerge before people start to riot.

Or they're planning to build an Elysium-like colony in the ocean or space, to keep the billionaire class far from danger.

rootusrootus an hour ago | parent | prev | next [-]

I get that it is popular to hate billionaires these days, but realistically, they did not get to be billionaires by being stupid. It runs directly counter to their own interests to induce anything like 45% unemployment. They will get poorer, the world they live in right along with the rest of us will get noticeably shittier, etc.

More likely they figure out what to do with a bunch of idle talent. Or the coming generation of trillionaires will.

BurningFrog 2 hours ago | parent | prev [-]

It's important (and calming) to understand that since the Industrial Revolution started ~250 years ago, we've automated away most jobs several times over, while employment levels have stayed pretty constant.

"Automating half the jobs" is the same as "double productivity per worker".

When the doubling happens in 5 years rather than 50, it might be more disruptive, but I'm convinced we're on the verge of huge improvements in human standard of living!

wartywhoa23 2 hours ago | parent [-]

What in the current state of world affairs outside of IT do you think is indicative of that potential for huge improvements in human standard of living?

bborud 3 hours ago | parent | prev | next [-]

We never noticed how easy the code writing part had already become because it happened slowly. Through mechanical means, through the ability to re-use code, and through code generation.

Heck, even long before LLMs about 10% to 30% of my code was already automatically generated. By tooling, by IDLs and by my editor just being able to infer what my most likely input would be.
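The sort of pre-LLM generation described here can be as simple as emitting a class from a field list, the way IDL compilers and editor tooling have long done. A hypothetical sketch (the `generate_class` helper and the `Point` example are invented for illustration):

```python
# Hypothetical sketch of mechanical code generation, pre-LLM style:
# given a type name and a field list, emit Python source for a class
# with a constructor. IDL compilers do essentially this, just with
# more elaborate schemas and target languages.

def generate_class(name, fields):
    args = ", ".join(fields)
    lines = [f"class {name}:"]
    lines.append(f"    def __init__(self, {args}):")
    for field in fields:
        lines.append(f"        self.{field} = {field}")
    return "\n".join(lines)

# Generate the source, then execute it to get a usable class.
source = generate_class("Point", ["x", "y"])
namespace = {}
exec(source, namespace)
Point = namespace["Point"]
```

The generator is purely mechanical string assembly; no "intelligence" is involved, yet it already takes a slice of the typing out of the programmer's hands.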

> We have robots walking just fine now, by the way.

I don't think you got the point I was trying to make.

SoftTalker 2 hours ago | parent [-]

True, but I guess I see a distinction between scaffolded/templated boilerplate or autocomplete and actual application logic. People have generated boilerplate from templates for ages, as you say. RoR is maybe a pretty good example, but there wasn't even early-days AI involved in doing that.

phkahler 2 hours ago | parent | prev | next [-]

>> We used to say that (not long ago, even) about the code-writing part. Why do we believe that LLMs are going to stop there? Why do we think they won't soon be able to talk to people, listen, and determine what they need?

Because they are currently "generative AI" meaning... autocomplete. They generate stuff but fall down at thinking and problem solving. There is talk of "reasoning models" but I think that's just clever meta-programming with LLMs. I can't say AI won't take that next step, but I think it will take another breakthrough on the order of transformers or attention. Companies are currently too busy exploiting the local maxima of LLMs.

rootusrootus an hour ago | parent [-]

> Companies are currently too busy exploiting the local maxima of LLMs

I get the feeling we can already spot the next AI Winter. Which is okay, we need a breather, and the current technology is useful enough on its own.

terseus 3 hours ago | parent | prev [-]

> Why do we believe that LLMs are going to stop there?

Why do you believe they won't? I think it's reasonable to assume that we will hit a ceiling that current models will not be able to break.

> We have robots walking just fine now, by the way.

Walking and reasoning are unrelated abilities.

SoftTalker 3 hours ago | parent [-]

Walking was given as an example of "hard to program a robot to do it" by GP. Well, now we have robots that can walk.

What evidence is there that LLMs have hit a ceiling at being able to do things like talk to users or stakeholders to elicit requirements? Using LLMs to help with design and architecture decisions is already a pretty common example that people give.