KronisLV 4 hours ago

> Yes, about 2-5% of the time.

There are also those for whom that percentage is higher, let’s say 6-50%.

> I understand things and then apply my ability to formulate solutions

The AI is coming for that too.

You might just be lucky to be in circumstances that value your contributions or an industry or domain that isn’t well represented in the training data, or problem spaces too complex for AI. Not everyone is, not even the majority of devs.

People knocking out Jira tickets and writing CRUD webapps will end up with their livelihood often taken away. Or bosses will just expect more output for same/less pay, with them having to use AI to keep up.

geodel 2 hours ago | parent | next [-]

Agree. It is just like 2 totally separate groups are arguing.

One is a very tiny slice of specialty / rare industries where code is critical but a small part of overall project costs. I can see that if code / software is 5% of the overall cost, even heavy use of AI for the code part is not moving the needle. So people in this group can feel confident in their indispensability.

The second group is much larger, peddling CRUD / JS frontends and other copy/paste junk. But per industry classification they are just part of the same Coder/Developer/IT Engineer group. And their bleak prospects are not some future scenario; they are playing out right now, with tons of them getting laid off. And a whole lot of people with IT degrees and certifications are not finding any jobs in this field.

marcindulak 37 minutes ago | parent | next [-]

After hearing various similarly sounding opinions about CRUD being easy for LLMs, I started tracking how well LLMs handle a standard CRUD Django app I'm familiar with at https://github.com/marcindulak/learning-api-styles-gen-ai-ex....

So far it appears that LLMs still require constant hand-holding, even for a small educational CRUD app.
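For anyone outside the field: "CRUD" is just the four basic persistence operations (Create, Read, Update, Delete). A framework-free sketch of what such an app boils down to (`NoteStore` and its methods are illustrative names of mine, not from the linked repo; a real Django app would swap the dict for a database-backed model):

```python
# Minimal illustration of CRUD (Create, Read, Update, Delete)
# against an in-memory store.

class NoteStore:
    def __init__(self):
        self._notes = {}
        self._next_id = 1

    def create(self, text):
        # Create: assign a fresh id and store the record.
        note_id = self._next_id
        self._next_id += 1
        self._notes[note_id] = text
        return note_id

    def read(self, note_id):
        # Read: return the record, or None if it doesn't exist.
        return self._notes.get(note_id)

    def update(self, note_id, text):
        # Update: overwrite an existing record.
        if note_id not in self._notes:
            raise KeyError(note_id)
        self._notes[note_id] = text

    def delete(self, note_id):
        # Delete: remove the record if present.
        self._notes.pop(note_id, None)
```

The boilerplate grows around this core (validation, serialization, auth, migrations), which is where the hand-holding tends to be needed.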

hjort-e 2 hours ago | parent | prev [-]

What makes you feel that a complex frontend would be easier for AI than a non-CRUD backend system?

evilduck an hour ago | parent | next [-]

Hubris.

I don't mean this as a snarky jab. It's coming for anything software. I've used AI to do front-end development and to reverse engineer proprietary USB hardware dongles in C, then rewrite the C into Rust to get easy desktop GUIs around it. Backend APIs, systems programming, embedded programming: they all seem equally threatened, it's just a matter of time. Front end is easy to see in the AI web front ends, but everything else is still easy pickings.

hjort-e 35 minutes ago | parent | next [-]

I 100% agree it's coming for everything. I'm just curious what the arguments would be for why frontend would be easier.

skydhash an hour ago | parent | prev [-]

> I've used AI to accomplish front end development and reverse engineer proprietary USB hardware dongles in C, then rewriting the C into Rust to get easy desktop GUIs around it. Backend

That is not hard. It's just tedious and very slow to do manually. The hard part would be designing a USB dongle and ensuring that the associated software has good UX. The reason you don't see kernel devs reverse engineering devices is not that it's impossible or that it requires expert knowledge. It's that it's like counting grains of sand on the beach.

geodel an hour ago | parent | prev [-]

It is irrelevant whether a complex frontend would be easy for AI or not. To me the questions are: 1) how many unique complex frontends are needed out of the total frontends that millions of sites out there need, and 2) will there be an increase in the need for such frontend engineers so that other displaced folks can land a job there.

I think there will be far too few to have any positive impact on IT engineers' overall job prospects.

hjort-e 39 minutes ago | parent [-]

But that's equally true for any type of system. Frontend isn't inherently easier than other systems, so I was just wondering why you singled it out. To me AI just seems better at backends and database design.

geodel 21 minutes ago | parent [-]

OK, my examples seemed biased against frontend, which was not the intention.

The thrust was overall job prospects for people in the software field. It is not that frontend is easy, but it is definitely easy to get into. There are far more frontend developers than, say, C++ systems engineers or database designers, so in sheer numbers they will be affected more.

hjort-e 5 minutes ago | parent [-]

Ah okay, that's fair. In my country boot camps aren't a thing, so frontend devs are rare and good frontend devs even more so; I think it depends on where in the world you are. We have an abundance of Java devs here that I fear more for.

nitwit005 32 minutes ago | parent | prev | next [-]

> The AI is coming for that too.

Yes, but if/when that happens, it won't just affect software engineers. An AI that can do that can replace any white collar worker.

> People knocking out Jira tickets and writing CRUD webapps will end up with their livelihood often taken away.

I'm not sure anyone is actually working on those. People talk about spending all day writing CRUD apps here, but if you suggest there are already low code tools to build those, they will promptly tell you it's too complex for that to work.

laughing_man 16 minutes ago | parent [-]

>Yes, but if/when that happens, it won't just affect software engineers. An AI that can do that can replace any white collar worker.

Yes. Yes, that's exactly what we're going to see, and more swiftly than people are generally comfortable with. What are we going to do with all those cubicle dwellers?

dmazzoni 3 hours ago | parent | prev | next [-]

There are periods of time where I might spend 80% of my time "coding", meaning I have minimal meetings and other responsibilities.

However, even out of that 80% of my time, what fraction is actually spent "writing code"?

AI can be an enormous accelerator for the time I'd normally spend writing lines of code by hand, but it doesn't really help with the rest of the work:

- Understanding the problem
- Waiting for the build system and tests to run
- Manually testing the app to make sure it behaves as I'd like
- Reviewing the diff to make sure it's clear
- Uploading the PR and writing a description
- Responding to reviewer feedback

There are times when AI can do the "write the code" portion 10x faster than I could, but if it's production code that actually matters, by the time I actually review the code, I doubt it's more than 2x.
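The 10x-vs-2x gap is just Amdahl's law applied to this workflow: the overall speedup is capped by the fraction of time actually spent writing code. A quick sanity check (the fractions are illustrative, not measured):

```python
# Amdahl's law: if a fraction f of the total work is sped up by a
# factor s, the overall speedup is 1 / ((1 - f) + f / s).

def overall_speedup(f: float, s: float) -> float:
    """f: fraction of total time accelerated, s: speedup on that fraction."""
    return 1.0 / ((1.0 - f) + f / s)

# If literally typing code is 25% of the job and AI makes it 10x faster,
# the whole job only gets ~1.29x faster.
print(round(overall_speedup(0.25, 10), 2))  # 1.29

# Even a 2x overall gain requires the accelerated part to be
# over half of the total time.
print(round(overall_speedup(0.56, 10), 2))  # 2.02
```

So a 2x end-to-end improvement is consistent with a genuinely large speedup on the typing itself.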

coldtea 2 hours ago | parent [-]

>AI can be an enormous accelerator for the time I'd normally spend writing lines of code by hand, but it doesn't really help with the rest of the work:

- Understanding the problem
- Waiting for the build system and tests to run
- Manually testing the app to make sure it behaves as I'd like
- Reviewing the diff to make sure it's clear
- Uploading the PR and writing a description
- Responding to reviewer feedback

Which part of those do you think it doesn't help with?

malfist an hour ago | parent [-]

There is no shortcut to understanding. No one can understand things for you

LPisGood 3 hours ago | parent | prev | next [-]

> The AI is coming for that too.

That may be true, I'm not gonna say one way or the other. But if AI comes for that, then almost all knowledge work is effectively dead, so all that's left would be sales or physical labor.

ge96 2 hours ago | parent | next [-]

I wonder, though: can AI make the next JS framework? I mean that in sincerity. There was the leap from jQuery to React, for example. If an AI only knows jQuery and no one makes React, will React come out of the AI?

wiseowise 6 minutes ago | parent | next [-]

It can't. Framework hierarchy is largely based on social structure rather than pure technical merit. Otherwise React would have been displaced a long time ago.

scj an hour ago | parent | prev | next [-]

A thought experiment: When all practical software is only written by AIs, will the AIs use goto? What will the programming language of AIs look like?

My bet is something _like_ assembly, but not assembly.

That being said, I think humans will still program for fun. Just like we paint portraiture in a world with cameras.

ge96 40 minutes ago | parent [-]

Yeah, that's my thing for my hardware projects: I'm not going to reach for an LLM to do it, I want to write the code myself / be present. For something new I would consider using an LLM to generate something, like a computer vision implementation or something I don't already know. I'd still know how the end result works; it would just be for a POC.

ASalazarMX 2 hours ago | parent | prev | next [-]

News: "AGI refuses to make another JS framework, rages on the follies of misguided developers and their wasteful JS crutches"

Developer community: Wow, we truly have become obsolete now!

notpachet 32 minutes ago | parent | next [-]

In a shocking twist, it turns out that Mootools is the agents' preferred framework

ge96 2 hours ago | parent | prev [-]

Who will be the disrupters when there is nothing to disrupt

smrq 2 hours ago | parent | prev [-]

People didn't leap from jQuery to React. It's a lot easier to imagine an AI looking at jQuery and [insert any server side MVC framework] and inventing Backbone.

BurningFrog 2 hours ago | parent | prev [-]

The history of the last 250 years is inventing new professions as old ones are automated away.

I expect that to continue.

coldtea 2 hours ago | parent | next [-]

The history of the last 250 years was moving from agriculture to industrial work to service work. Now the last frontier is starting to be overtaken by automation too.

(And in all of those transitions, millions were left behind without work or with far worse prospects. The people who took the new jobs were often a different group, not the people who knew the old jobs and were already in their 30s and 40s.)

And what would be the new professions that uniquely require humans, when even thinking and creative jobs are eaten by AI? Would there be a boom of demand for dancers and chefs, especially as millions lose their service jobs?

charlie90 7 minutes ago | parent | prev | next [-]

Like doordashing and pokemon card reselling.

wiseowise 5 minutes ago | parent [-]

Don't forget OnlyFans and streaming.

nitwit005 30 minutes ago | parent | prev | next [-]

Given some sort of machine with human capabilities, there would be no reason to assign that profession to a human, excepting perhaps cost.

georgemcbay an hour ago | parent | prev [-]

> The history of the last 250 years is inventing new professions as old ones are automated away.

Even if this still holds true ("past performance is no guarantee of future results") the part about it that people handwave away without thinking about or addressing is how awful the transitional period can be.

The industrial revolution worked out well for the human labor force in the long term, but there were multiple generations of people who suffered through a horrendous transition (one that was only alleviated by the rise of a strong labor movement that may not be replicable in the age of AI, given how it is likely to shift the leverage of labor vs. capital).

If you want to lean on history as an indication that massive sudden productivity changes will make things better for humanity in the long run, then fine, but then you have to acknowledge that (based on that same history) the transition could still be absolutely chaotic and awful for the lifespan of anyone who is currently alive.

tjwebbnorfolk 2 hours ago | parent | prev | next [-]

>> I understand things and then apply my ability to formulate solutions

> The AI is coming for that too.

If this is true, then you'd have to conclude that AI is coming for everything. I'm still not convinced by that. But I am convinced that the part of software development that involves typing code manually into an IDE all day is likely gone forever.

itsafarqueue 2 hours ago | parent | next [-]

> If this is true, then you'd have to conclude that AI is coming for everything.

Now you’re getting it

flatline an hour ago | parent | prev [-]

It really doesn't have to come for everything to feel like it's taking everything. If it eliminates 10% of white collar jobs over the next decade, the impact will be felt everywhere.

wiseowise 8 minutes ago | parent | prev | next [-]

I swear to God, it's like the majority of IT workers are spineless worms without an iota of self-respect. What are the reasons for this? Is it because most are nerds who were bullied in school, or some form of weird elitism-inferiority complex?

Only in tech do I see people hating and taking no pride in their work. Ask a lawyer or a doctor and they'll fight you to the death if you threaten their status. But here it is not only acceptable but for some reason encouraged to feel inferior. You're earning a shitton of money? Well, you're just a lucky insect who taps on a keyboard while X does the real job; about time AI put you on the street. Disgusting. If some AI reckoning does come, I hope it hits you first.

PunchyHamster 3 hours ago | parent | prev | next [-]

> The AI is coming for that too.

Current AI tech giants prove over and over and over again that this is not the case

cromka 2 hours ago | parent [-]

We've literally just started, what "over and over" do you refer to?

malfist an hour ago | parent | next [-]

I've been told for the past four years that AI is coming for my job. And that's just not true. It's no closer to that than it was 4 years ago.

laughing_man a minute ago | parent | next [-]

I'm not sure how anyone would know if it's closer or not. There's been a lot of progress in LLMs over the last four years.

KronisLV 21 minutes ago | parent | prev | next [-]

> Its no closer to that than it was 4 years ago.

There are people and companies out there releasing entire vibe-coded projects, and for some, upwards of 80% of the code they develop is AI-assisted/generated. Since around the end of 2025 and models like Opus 4.6, the SOTA has gotten good enough to work agentically on all sorts of dev tasks with pretty good degrees of success (harnesses and how you use them still matter, of course).

wiseowise 4 minutes ago | parent [-]

> There are people and companies out there releasing entire vibe coded projects and for some upwards of 80% of the code they develop is AI-assisted/generated.

And how much revenue do they generate?

Danox 26 minutes ago | parent | prev | next [-]

It is the lament of every generation of humans to think that they are the pinnacle of everything that has come before. We are just at the start of the so-called AI era; many very smart people coming up still haven't really gotten their hands on all of the material available from a hardware and software standpoint. We are still at the early stages.

I am very optimistic. I just wish I were younger, junior high or high school age, to take advantage with my current resources, damn... The oldest lament in the books.

kakacik 27 minutes ago | parent | prev [-]

It feels like it's just around the corner. But when you turn the 20th corner and it's still behind the next one, maybe things are a bit different than they seem / than clueless emotions make us believe.

Long term it's bleak, but short/medium term, not so much: if I get fired it won't be an LLM replacing me but rather company politics, budget changes, etc. Which was the only real (very real) risk for the past 15 years too, consistently. But it helps to not work for a US company.

hansmayer an hour ago | parent | prev | next [-]

> We've literally just started

5+ years in the software world is like 30 years in others... So, given the lacking use-cases and the humongous amounts of capital already wasted on chatbots, it's more like "we" are closer to the closing curtains than to "just started"...

ASalazarMX 2 hours ago | parent | prev | next [-]

Hype cycles. AI has made developers obsolete like a dozen times in the last couple of years, at least according to its developers.

luckystarr 2 hours ago | parent | prev [-]

Discovery of the best solution in a problem space is not generative but only verificative. Meaning: the LLM can check whether one solution is better than another, but it can't generate the best one from the start. If you trust it, you'll get sub-par solutions.

This is definitely an agent problem rather than an LLM problem. Has anybody gotten something explorative like this working?
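The "verificative" pattern described here is essentially best-of-n sampling: generate candidates, score each with a verifier, keep the best. A toy sketch, where both the generator and the scorer are stand-ins of mine (in practice the generator would be an LLM call and the verifier a test suite, benchmark, or reward model):

```python
import random

# Best-of-n sampling: a generator proposes candidates, a verifier scores
# them, and we keep the highest-scoring one.

def generate_candidate(rng: random.Random) -> float:
    # Stand-in for sampling one solution from a model.
    return rng.uniform(0, 1)

def verify(candidate: float) -> float:
    # Stand-in scoring function; higher is better.
    # Pretend 0.7 is the (unknown) ideal solution.
    return -abs(candidate - 0.7)

def best_of_n(n: int, seed: int = 0) -> float:
    rng = random.Random(seed)
    candidates = [generate_candidate(rng) for _ in range(n)]
    return max(candidates, key=verify)
```

More samples push the result toward the verifier's optimum, but nothing guarantees the true best solution is ever generated, which is the parent's point.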

coldtea 2 hours ago | parent [-]

So? Hundreds of millions of office and dev jobs aren't about developing "optimal solutions" to begin with.

no_op an hour ago | parent | prev | next [-]

Even if AI advances continue, for quite a while there's likely still going to be the 'Steve Jobs' role. That is, even if AI coding agents can, in the future, replace entire teams of SWEs, competently making all implementation decisions with no guidance from a tech-savvy human, the best software will likely still involve a human deciding what should be built and being very picky about how, exactly, it should externally behave.

I don't know if it makes sense to call that person an SWE, and some people currently employed as SWEs either won't be good at this or aren't interested in doing it. But the existing pool of SWEs is probably the largest concentration of people who'll end up doing this job, because it's the largest concentration of people who've thought a lot about, and developed taste with respect to, how software should work.

bmiedlar 8 minutes ago | parent [-]

This matches what I'm seeing. I've been building software for a long time, but I'm building more now with AI than I ever could with a traditional team. But the helpful throughput comes from knowing what to build and which tradeoffs matter. The AI doesn't have that. It's a force multiplier on experience, not a replacement for it.

Aperocky 2 hours ago | parent | prev | next [-]

> The AI is coming for that too.

That's where we fundamentally disagree.

Yes, AI is coming for solution formulation, absolutely, but not all of it, because it is actually a statistical machine with a context limit.

Until the day LLMs are not statistical machines with a context limit, this will hold. Someone needs to make something that has intent and purpose, and evidently, for now, not by adding another 10T to the LLM parameter count.

bel8 2 hours ago | parent | next [-]

> because it is actually a statistical machine with context limit.

So are humans.

Machines have surpassed humans by orders of magnitude in many capabilities already (how many billion multiplications can you do per second?)

And I argue that current LLMs have surpassed many of my capabilities already.

For example, GPT/Opus can understand and document some ancient legacy project I've never seen before in minutes. It would take me a week+ to do the same, and my report would probably have more mistakes and oversights than the one generated by the LLM.

KalMann 23 minutes ago | parent | next [-]

> So are humans.

AI advocates are _way_ too confident about the nature of human cognition. Questions that have been debated by philosophers and cognitive scientists for decades are now "obvious" according to you people, though you never provide any argument to support your statements.

Aperocky an hour ago | parent | prev [-]

We are not pre-trained on the summary of all human knowledge over all of history. Yet we make certain decisions with much more ease.

We are much more limited, but we fundamentally work differently. Hence adding more parameters, like certain companies are doing, isn't necessarily going to help. We need to rethink how LLMs work, or how they work in tandem with something completely different.

I think it's doable, I just don't believe it's LLMs, and I don't think anyone knows yet what it is.

bel8 an hour ago | parent [-]

> We are not pre-trained using the summary of all human knowledge over all of history.

But we are? That's our education system.

The only reason school doesn't try to shove more information in our brains is because we hit bandwidth limits.

KalMann 20 minutes ago | parent [-]

> But we are? That's our education system.

That is not what the education system does; that's an obvious distortion of reality. LLMs train over billions of documents to statistically predict the next word and so gain an understanding of language; they do this statistical processing in order to mimic humans' natural language-learning ability. And there has been continued evidence of the limitations of this approach for accurately mimicking the totality of human cognition.

itsafarqueue an hour ago | parent | prev | next [-]

Yours is a “God of the gaps” argument. You will remain technically correct (the best kind of correct!) long after the statistical machine has subsumed your practical argument, context limit and all.

Aperocky an hour ago | parent [-]

I fall into the "pessimistic heavy user" camp. I burn thousands of dollars' worth of SOTA tokens monthly, but that just makes me more acutely aware of the limitations, of the amount of work I need to do to work around them, and of which decisions I should reserve for myself instead of trusting the LLMs.

coldtea 2 hours ago | parent | prev [-]

>but not all of it, because it is actually a statistical machine with context limit.

And the human mind is not?

KalMann 15 minutes ago | parent | next [-]

I can give you the exact mathematical formula used to statistically optimize the output of a neural network from input examples. Can you do the same for the brain?
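For concreteness, the kind of "exact mathematical formula" meant here is presumably something like the standard stochastic gradient descent update (a textbook statement, my example rather than the parent's words): with learning rate \eta, the weights w are nudged against the gradient of a per-example loss \ell averaged over a batch B of input examples:

```latex
% One SGD step over a batch B of examples (x_i, y_i):
w_{t+1} = w_t - \eta \,\nabla_{w} \frac{1}{|B|} \sum_{(x_i, y_i) \in B} \ell\big(f_{w_t}(x_i),\, y_i\big)
```

No comparably explicit update rule is known for biological neurons, which is the point being made.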

nothinkjustai 2 hours ago | parent | prev [-]

It’s not.

bborud 3 hours ago | parent | prev | next [-]

> The AI is coming for that too.

To some degree yes, in practice, not so much. In practice you have to be in the world, talk to people, know how to talk to people, know how to listen, and be able to understand the difference between what people say and what they actually need. Not want. Need.

This was something I learnt in my very first job in the 1980s. I worked for someone who did industrial automation beyond PLCs and suchlike. He spent 6 months working in the company. On the factory floor, in the logistics department, in procurement, in accounting and even shadowing the board. Then he delivered a proposal for how to restructure the parts of the company, change manufacturing processes, and show how logistics and procurement could be optimized if you saw them as two parts in a bigger dance.

He redesigned the company so that it could a) be automated, and b) leverage automation to increase the efficiency of several parts of the business. THEN, he started planning how to write the software (this was the 80s after all), and then we started implementing it.

Now think about what went into this. For instance we changed a lot of what happened on the factory floor. Because my boss had actually worked it. So he knew what pain points existed. Pain points even the factory workers didn't know how to address because they didn't know that they could be addressed.

I was naive. I thought this was how everyone approached "software projects". People generally don't. But it did teach me that the job isn't writing code. It is reasoning about complex systems that are often not even known to those who are part of them.

And this is for _boring_ software that requires very little creativity and mostly zero novelty. Now imagine how you do novel things.

> People knocking out Jira tickets and writing CRUD webapps will end up with their livelihood often taken away. Or bosses will just expect more output for same/less pay, with them having to use AI to keep up.

You make it sound like it is a bad thing that certain tasks become easier.

I spent a lot of time writing CRUD stuff. Because the things i really want to work on depend on them. I don't enjoy what is essentially boilerplate. Who does? If you can do the same job in 1/20 the time, then how is this a bad thing?

It is only a bad thing if writing CRUD webapps is the limit of your ability. We don't argue for banning excavators because they put people with shovels out of work. We find more meaningful things for them to do and become more productive. New classes of work become low-skilled jobs.

If you have been doing software for a while, you are probably doing some subset of this. But these things are hard to articulate. It is hard to articulate because it is not something we think about. Like walking: easy for us to do, hard to program a robot to do it.

coldtea 2 hours ago | parent | next [-]

>To some degree yes, in practice, not so much. In practice you have to be in the world, talk to people, know how to talk to people, know how to listen, and be able to understand the difference between what people say and what they actually need. Not want. Need.

One person needs to do that. But what about the other 100, who aren't doing that currently to begin with, but doing the AI-automatable work?

SoftTalker 3 hours ago | parent | prev [-]

> To some degree yes, in practice, not so much.

We used to say that (not long ago, even) about the code-writing part. Why do we believe that LLMs are going to stop there? Why do we think they won't soon be able to talk to people, listen, and determine what they need? I think it's mostly a cope.

We have robots walking just fine now, by the way.

sarchertech 3 hours ago | parent | next [-]

If they can do those things they can effectively replace any white collar job. That’s about 45% of the workforce. Societies tend to collapse around 25-30% unemployment.

Imagine 45% of higher than average paying jobs gone.

If that happens we’ll either figure out a new economic system, or society will collapse.

Also saying robots are walking just fine is misleading for any definition of just fine that is anywhere near as good as a human.

ryandrake 2 hours ago | parent | next [-]

Look at how the billionaires are talking about AI: Their clear, unambiguous goal is basically to replace all white collar "knowledge" jobs. And there's currently nothing regulatory that's stopping them--they just need to wait for the state of the art to improve. Once AI is "good enough" if it ever is, they won't even think twice about 45% unemployment. What are we unemployed workers going to do about it? There's no effective labor organization left. Workers have basically no political power or seat at the table. We're not going to get violent--the police/military are already owned by the billionaire class. We're just going to eventually become economically irrelevant and die off.

geodel 2 hours ago | parent | next [-]

> We're just going to eventually become economically irrelevant and die off.

As harsh as it may sound, it seems rather likely to me. It is not like s/w engineers have helped struggling workers in other sectors with anything other than sanctimonious "learn to code" advice. So software folks can't expect any solidarity or help from others.

kiba 2 hours ago | parent | prev | next [-]

The fundamental issue isn't unemployment due to automation, but the fact that society cannot benefit from unemployment.

It should be something for us to celebrate, because it means greater freedom for humans to pursue something else rather than spending time doing drudgery.

shinryuu an hour ago | parent [-]

Put it another way, the issue is that resources are not shared more equitably. This is especially egregious considering that LLMs are trained on all human knowledge. We've all been contributing to this enterprise, and what we may end up getting in return is unemployment.

monknomo 2 hours ago | parent | prev | next [-]

45% of folks sitting on their hands are going to have the free time to talk, and this group of people are skilled at organization. Are you planning on throwing your hands up and passively accepting whatever comes your way?

rootusrootus an hour ago | parent [-]

And at least in the US they have >45% of all the small arms weaponry. There is no bunker strong enough nor private army big enough if 100M people come for you.

ryandrake 39 minutes ago | parent [-]

They're probably betting that the technology they will need to defend their bunkers, think autonomous kill-bots or whatever, will emerge before people start to riot.

Or they're planning to build an Elysium-like colony in the ocean or space, to keep the billionaire class far from danger.

rootusrootus an hour ago | parent | prev | next [-]

I get that it is popular to hate billionaires these days, but realistically, they did not get to be billionaires by being stupid. It runs directly counter to their own interests to induce anything like 45% unemployment. They will get poorer, the world they live in right along with the rest of us will get noticeably shittier, etc.

More likely they figure out what to do with a bunch of idle talent. Or the coming generation of trillionaires will.

BurningFrog 2 hours ago | parent | prev [-]

It's important (and calming) to understand that since the Industrial Revolution started ~250 years ago, we've automated away most jobs several times over, while employment levels have stayed pretty constant.

"Automating half the jobs" is the same as "double productivity per worker".

When the doubling happens in 5 years rather than 50, it might be more disruptive, but I'm convinced we're on the verge of huge improvements in human standard of living!

wartywhoa23 an hour ago | parent [-]

What in the current state of world affairs outside of IT do you think is indicative of that potential for huge improvements in human standard of living?

bborud 3 hours ago | parent | prev | next [-]

We never noticed how easy the code writing part had already become because it happened slowly. Through mechanical means, through the ability to re-use code, and through code generation.

Heck, even long before LLMs about 10% to 30% of my code was already automatically generated. By tooling, by IDLs and by my editor just being able to infer what my most likely input would be.

> We have robots walking just fine now, by the way.

I don't think you got the point I was trying to make.

SoftTalker 2 hours ago | parent [-]

True, but I guess I see a distinction between scaffolded/templated boilerplate or autocomplete and actual application logic. People have generated boilerplate from templates for ages, as you say. RoR is maybe a pretty good example, but there wasn't even early-days AI involved in doing that.

phkahler 2 hours ago | parent | prev | next [-]

>> We used to say that (not long ago, even) about the code-writing part. Why do we believe that LLMs are going to stop there? Why do we think they won't soon be able to talk to people, listen, and determine what they need?

Because they are currently "generative AI" meaning... autocomplete. They generate stuff but fall down at thinking and problem solving. There is talk of "reasoning models" but I think that's just clever meta-programming with LLMs. I can't say AI won't take that next step, but I think it will take another breakthrough on the order of transformers or attention. Companies are currently too busy exploiting the local maxima of LLMs.

rootusrootus an hour ago | parent [-]

> Companies are currently too busy exploiting the local maxima of LLMs

I get the feeling we can already spot the next AI Winter. Which is okay, we need a breather, and the current technology is useful enough on its own.

terseus 3 hours ago | parent | prev [-]

> Why do we believe that LLMs are going to stop there?

Why do you believe they won't? I think it's reasonable to assume that we will hit a ceiling that current models will not be able to break.

> We have robots walking just fine now, by the way.

Walking and reasoning are unrelated abilities.

SoftTalker 3 hours ago | parent [-]

Walking was given as an example of "hard to program a robot to do it" by GP. Well, now we have robots that can walk.

What evidence is there that LLMs have hit a ceiling at being able to do things like talk to users or stakeholders to elicit requirements? Using LLMs to help with design and architecture decisions is already a pretty common example that people give.

vga1 2 hours ago | parent | prev | next [-]

>bosses

The AI is coming for those too.

snozolli 39 minutes ago | parent [-]

Something like five to ten years ago, when AI hype was starting to hit media, one of the claims was that AI would come for middle-management first. Since middle-management can generally be described as collecting information from underlings and reporting information to upper management, their work was supposed to be easy to automate with AI. As far as I can tell, this hasn't proven to be true at all, and we software engineers proudly wrote ourselves out of work by constantly publishing our source code and discussing it openly.

at-fates-hands an hour ago | parent | prev | next [-]

>> Or bosses will just expect more output for same/less pay, with them having to use AI to keep up.

Anecdotal evidence to support this.

I work with both dev and design teams. Upper management has already gone through several rounds of layoffs and offshoring on the two dev teams I work with. The devs they did keep were exactly what you said: the capable ones who reliably closed their Jira tickets and never missed a deadline for building their features or components. And now? Their work has tripled, and the only help they get from management is: "Start to figure out how to leverage AI, we're going to be in a hiring freeze for the next 10 months."

The double whammy is pretty staggering: losing onshore team members, and getting no help from management to fix the problem they just created beyond essentially being told to figure out how to use AI to keep up.

I would echo what one of the devs told me: "If this is the new 'AI era' then you can count me right the fuck out of it."

oblio 3 hours ago | parent | prev | next [-]

>> I understand things and then apply my ability to formulate solutions

> The AI is coming for that too.

In that case all [1] non manual work is doomed, until robotics has an LLM moment.

[1] With the exception of all fields protected by politics or nepotism.

rootusrootus an hour ago | parent [-]

> all non manual work is doomed

All work in general. Knowledge workers can still do manual work, and will compete to do so when there is no option to continue what they do today.

thisisit 2 hours ago | parent | prev [-]

A lot of people don't seem to get that: it is easier to go from terrible to average, but much harder to go from average to good.

I am sure the AI bros are the same people who were convinced consumer-grade fully automated driving was going to happen "by the end of the year" for the last 7 years.

Peanuts99 20 minutes ago | parent | next [-]

I agree with the statement and think a lot of people miss this, but I also wonder how many people probably don't care for good, they only care for 'good enough'.

lostmsu 2 hours ago | parent | prev [-]

No, I never believed in fully automated driving by Tesla, but as LLMs improve, my personal estimate for the date of human-level AGI is rapidly moving toward "present". Before GPT-2 I had it somewhere around 2100; at GPT-2 I thought maybe by 2060 if we were lucky. Now I think it is 2035 or maybe even sooner.

rootusrootus an hour ago | parent [-]

I like to see the optimism, even if I don't share it. I think it's incredible hubris that humans think we are about to reinvent our own level of intelligence, just because we made a machine that talks pretty.