the_af 5 hours ago

> As an engineer, I'm never more excited about this job.

How long do you think it'll take for the AI trend to mostly automate the parts of your job that still make you excited?

Everyone thinks it won't be them, it will be others that will be impacted. We all think what we do is somehow unique and cannot be automated away by AI, and that our jobs are safe for the time being.

9rx an hour ago | parent | next [-]

> How long do you think it'll take for the AI trend to mostly automate the parts of your job that still make you excited?

The exciting part of the job is, and always has been, listening to idle chitchat where you pick up on the subtle cues of where one is finding difficulty in their life and then solving those problems. I think AI could already largely handle that today just fine, except:

You have to convince people, especially non-technical people, to have idle chitchat with machines instead of humans

-or-

Convince them to accept a machine always listening in on their idle conversations with humans

Neither of those is all that palatable in the current social landscape. If anything, people seem to be growing more wary of letting technology into their thoughts. Maybe there is never a future where humans become accepting of machines always being there, trying to figure out what is wrong with them.

The trouble with AI replacing jobs is that a lot of jobs exist only because people want to have other people to talk to and are willing to pay for the company.

Aperocky 4 hours ago | parent | prev | next [-]

As someone in the 99th percentile of token usage, it's super clear to me where the agent will not be able to replace my judgement. Two areas:

1. If the task exceeds the context window, the agent does random stuff that often works against simplicity and coherent logical structure.

2. An LLM has zero intention, and relies on you to decide what to build and, more importantly, what not to build.

As such, I am the limit on the number of concurrent agents working for me, because there is still a limit to my output of engineering judgement. I do get better, both at generating and delivering this judgement, but beyond this limit the output becomes garbage.

As of today, AI does not automate me away in any way; I have something it just flat out doesn't have.

the_af 4 hours ago | parent [-]

Playing devil's advocate here; I'm not antagonizing you, just thinking out loud.

> If the task exceeds the context window, the agent does random stuff that often works against simplicity and coherent logical structure.

That's a current technical limitation. Are you so sure it won't be overcome in the near/mid future?

> An LLM has zero intention, and relies on you to decide what to build and, more importantly, what not to build

But work is being done to remove or automate even this layer, right? It may be hyperbole (in fact, it is), but aren't Anthropic et al. predicting this? Why wouldn't your boss, or your boss' boss, do this instead of you? If they lack the judgment currently, are you so sure they cannot gain it, now that they don't have to waste time learning how to code? If not now, what about soon-ish?

> As of today, AI does not automate me away in any way

Not now, granted. But what about soon? In other words, shouldn't you be worried as well as excited?

Aperocky 2 hours ago | parent | next [-]

Well, if you do nothing, you should definitely be worried, because not using LLMs is rapidly becoming untenable.

If you use them a lot, you'll grow skeptical about some of the claims and hype, and develop a sense of where this is leading.

My position is that if someone uses LLMs a lot, they may be right or wrong about the future of LLMs. If they don't, then they definitely aren't right, or are only lucky.

My personal judgement is that both of these are hard caps until someone invents something that's not a transformer, basically starting from scratch.

the_af an hour ago | parent [-]

> because not using LLMs is rapidly becoming untenable

Completely agreed. This is not what I'm advocating for. And definitely, there's a lot of self-serving hype (and fearmongering can be another kind of hype) from AI companies. But some of it, I think, will be true, or enough companies will believe it to be true, which amounts to the same thing.

I'm just worried; I cannot help it. And I'm not saying "don't use AI", I'm pushing back against the feeling of reckless "excitement".

ChrisLTD 2 hours ago | parent | prev [-]

Does it seem to you like those issues will be solved soon? Does your boss have the time to do this AI wrangling work on top of their other tasks even if they don't have to learn to code?

the_af an hour ago | parent | next [-]

> Does it seem to you like those issues will be solved soon?

No.

But I was also very skeptical about AI being able to code semi-reliably during the early stages of the GPT hype, and look where I am now: most of the code I produce is written by an AI. So I was wrong before, which makes me doubt my own ability to predict the near future.

> Does your boss have the time to do this AI wrangling work on top of their other tasks even if they don't have to learn to code?

My boss' boss would probably love to get rid of both me and my direct boss. A whole class of problems would disappear, freeing up time for people higher up the chain to focus on this... either them or a tiny group of engineers, which leaves me out of a job either way. I've already seen people in small shops get fired because their immediate semi-technical boss can now do their job with AI (I can't go into details for privacy reasons. Also, it doesn't matter if the end result is flawed; what matters is that it's "mission accomplished" and someone is out of a job).

Aperocky 2 hours ago | parent | prev [-]

If they never learned to code, it wouldn't be very easy for them to build things, or to catch the BS that AI generates.

xienze 5 hours ago | parent | prev [-]

> How long do you think it'll take for the AI trend to mostly automate the parts of your job that still make you excited?

Yeah, no one ever thinks beyond "whoa, how cool, I cloned Slack in 15 minutes!"

Personally, the thing I find most depressing is turning a career that was primarily about solving interesting puzzles in elegant ways into managing a swarm of idiot-savant chatbots with "OK, that looks good" or "no, do it better" commands.

Aperocky 4 hours ago | parent [-]

The problem I'm trying to solve with agents is similar here: for instance, my comment likely made zero impression on you, even though I'm against both of the things that you are also against here.