| ▲ | ernst_klim 6 hours ago |
| > to eliminate "useless eaters" It can't. It can't even deal with emails without randomly deleting your email folder [1]. Saying that it can make decisions and replace humans is akin to saying that a random number generator can make decisions and can replace people. It's just an automation tool, and just like all automation tools before it, it will create more jobs than it destroys. All the CEOs' talk about labor replacement is a fuss, a pile of lies to justify layoffs and a worsening financial situation. [1] https://www.pcmag.com/news/meta-security-researchers-opencla... |
|
| ▲ | MarcelOlsz 6 hours ago | parent | next [-] |
| People have this misconception that first it was one way, and then <tech was released>, and they'll wake up and suddenly it's another. It's a slow creep. 10 years ago there were 5 of us on a team, each responsible for something specific. Now I can do all of that. Teams and companies will downsize. How do you see AI creating more jobs? (I need some hope right now lol). |
| |
| ▲ | mplanchard 5 hours ago | parent | next [-] | | My hope is that there is a sort of Cambrian explosion of small software projects built by people who have absolutely no clue what they're doing. Many such projects will go nowhere, but some percentage of them will see success and growth. My second hope is that there will always come some threshold of complexity beyond which AI cannot effectively iterate on a project without (at minimum) the prompting of an expert in the field. The combination of these two things could lead to a situation where there is a massive, startup-dominated market for engineers who can take projects from 0.5 to 1, as well as for consulting companies or services that help founders to do the same. Another pair of hopes is that a) the LLM systems plateau at a level where any use on complex or important projects requires expert knowledge and prompting, and b) that because of this, the hype of using them to replace engineers dies down. This would hopefully lead to a situation where they are treated like any other tool in our toolbox. Then, just like no one forces me to use emacs or vim (despite the fact that they unambiguously help me to be at least 2x more productive), no one will force me to use LLMs just for the sake of it. | |
| ▲ | treis 5 hours ago | parent | prev | next [-] | | It's made it cheaper to do whatever it is you did, so the demand for it will go up. It's somewhat of an open question where the new equilibrium is. Historically that can go either way. We have fewer farmers than we once did because there's a limit to how much food people will eat. But we probably don't have fewer carpenters as a result of power saws and nail guns. We probably have more, because the demand to build things out of wood is effectively unbounded. | |
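The farmers-vs-carpenters point above can be sketched with a toy constant-elasticity demand model. This is my own illustration, not something from the thread: assume demand follows Q = A * p^(-epsilon) and unit price tracks labor cost, so whether total employment rises or falls after a productivity gain hinges on whether epsilon is above or below 1.

```python
# Toy model: automation cuts labor per unit of output, but total
# employment depends on the price elasticity of demand (epsilon).
# Assumptions (mine): constant-elasticity demand Q = A * p**(-epsilon),
# and unit price falls in proportion to labor cost.

def total_labor(productivity_gain, epsilon, base_labor=100.0):
    """Labor demanded after a productivity gain.

    productivity_gain: e.g. 2.0 means each worker produces twice as
    much, so the unit price halves. Demand responds as p**(-epsilon).
    """
    price_factor = 1.0 / productivity_gain        # output gets cheaper
    quantity_factor = price_factor ** (-epsilon)  # demand responds
    # more output demanded, but fewer workers needed per unit
    return base_labor * quantity_factor / productivity_gain

# Inelastic demand (food): doubling productivity shrinks the workforce.
farmers = total_labor(2.0, epsilon=0.3)     # below base_labor

# Elastic demand (building in wood): doubling productivity grows it.
carpenters = total_labor(2.0, epsilon=1.5)  # above base_labor
```

With epsilon < 1 (people won't eat much more food however cheap it gets) the workforce shrinks; with epsilon > 1 the induced demand more than offsets the labor saved per unit. Where software demand sits on that scale is exactly the open question the comment raises.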
| ▲ | wilsonnb3 5 hours ago | parent | prev | next [-] | | Massive job loss from AI requires one of two things: actual human-equivalent AGI or no increase in demand. Focusing on option 2 and software development, teams and companies will only downsize if the demand for software doesn’t increase. Make the same amount of stuff you do now but with fewer people. What I think will happen is that enough companies will choose to do things that they couldn’t afford or weren’t possible without AI (and new companies will be created to do the same) to offset the ones that choose to cut costs, and actually increase the number of people making software. I am pretty sure these are well-known economic ideas but I don’t know the specific terminology for them. | | |
| ▲ | the_af 4 hours ago | parent [-] | | There are more options: mass unemployment, consolidation of all AI-related benefits in the hands of a few, an increase in demand that doesn't outpace the loss of employment, an increase in capabilities (not AGI) that means a few chosen people can do most things without hiring other people, etc. |
| |
| ▲ | nradov 5 hours ago | parent | prev [-] | | A few hundred years ago it took a team of 5 plus draft animals to plough a field. Now one guy with a tractor can do it. Some teams and companies will downsize. New companies will appear doing things that we can't even imagine yet. | | |
| ▲ | drivebyhooting 4 hours ago | parent | next [-] | | Are SWEs the farmers or the draft animals in this analogy? | | | |
| ▲ | bluefirebrand 5 hours ago | parent | prev [-] | | > New companies will appear doing things that we can't even imagine yet. I read this take a lot but I don't buy it. This isn't guaranteed by any means. And even if it does happen, isn't it just as likely that AI is deployed into those companies too and they don't actually result in any job growth? | | |
| ▲ | nradov 4 hours ago | parent [-] | | You don't need to buy it. There are no guarantees in life. Get comfortable with being uncomfortable. | | |
| ▲ | lazyasciiart 2 hours ago | parent | next [-] | | This comment amounts to saying “I don’t care what you think”, and it's a perfect example of something that is never justified on a forum where you have no obligation to interact with anyone. If you don’t care what individual people think, then simply don’t talk to them. | |
| ▲ | the_af 4 hours ago | parent | prev [-] | | That's not the rebuke you think it is. You made a claim (not an original one, I've read it before), someone expressed doubts about that claim, and you can't wave those doubts off with "there are no guarantees in life". Sorry, you made a claim, there's good reason to believe it may not pan out, and if it doesn't, the consequences are dire. | | |
| ▲ | nradov 3 hours ago | parent [-] | | I don't think it's a rebuke. I'm just explaining the reality of the situation. | | |
| ▲ | bluefirebrand 3 hours ago | parent [-] | | You said > New companies will appear doing things that we can't even imagine yet I have a really big imagination, so I will believe it when I see it. If you have any real idea what these new companies might be doing in the future, then I'm all ears. But until then, maybe stop trying to claim some kind of future knowledge based on handwaved nonsense like "we can't even imagine what the future will look like", and then calling that "the reality of the situation". Please be serious. Edit: If you think the future is so unimaginable, maybe take a look around at the present. Can you identify anything in our lives today that was not imagined by anyone in the past? For nearly every piece of technology made nowadays, someone can say "it's like the Torment Nexus from Famous Piece of Literature!" |
|
|
|
|
|
|
|
| ▲ | MisterTea 4 hours ago | parent | prev | next [-] |
| > It can't. It can't even deal with emails without randomly deleting your email folder [1]. And early cars were expensive, dangerous, highly unreliable, uncomfortable, belched foul exhaust, and required knowledge of how to drive AND maintain them. We are far, far from that scenario these days. |
| |
| ▲ | fl4regun 3 hours ago | parent [-] | | That's not proof that it will ever do those things in the future either, however. | | |
| ▲ | MisterTea a minute ago | parent [-] | | We have no proof what it will do in the future. I'm just maintaining the car analogy theme. |
|
|
|
▲ | the_af 4 hours ago | parent | prev [-] |
| > It can't. It can't even deal with emails without randomly deleting your email folder [1]. Saying that it can make decisions and replace humans is akin to saying that a random number generator can make decisions and can replace people. I don't think the comment you're replying to is saying that an evil AI bot will kill people. They are saying something along the lines of: mass job loss doesn't bother the AI companies because in the AI-powered future they envision, population reduction is a positive side effect. |