Aperocky 8 hours ago

As an engineer, I've never been more excited about this job.

My implementation speed and the bug fixing of my hand-typed code used to be the bottleneck. Now I just think about an implementation and then it exists. As long as I've thought through the structure/input/output/testability and logic flow correctly and made sure I included all that information, it just works, nicely, with tests.

The Unix philosophy works well with LLMs too: you can have software that does one thing and only one thing well, fits in their context window, and doesn't lead to haphazard behavior.
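As a toy illustration of what that looks like in practice (the function and its behavior here are my own hypothetical sketch, not anything from the thread): a unit small enough that the whole module, plus its tests, fits comfortably in an agent's context.

```python
# A deliberately tiny, single-purpose unit in the Unix spirit: it
# deduplicates lines while preserving order, and does nothing else.
# Its contract is small enough to specify exactly, which is what makes
# it easy both to test and to hand to an LLM agent.

def dedupe_lines(lines):
    """Return lines with duplicates removed, keeping first occurrences."""
    seen = set()
    out = []
    for line in lines:
        if line not in seen:
            seen.add(line)
            out.append(line)
    return out
```

Because the tool has one narrow job, its tests double as the full specification: `dedupe_lines(["a", "b", "a"])` gives `["a", "b"]`, and there is nothing else to say about it.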

Now my day essentially revolves around delivering, and getting better at delivering, concentrated engineering thinking, which in my opinion is the purest part of the engineering profession itself. I like it quite a lot.

hombre_fatal 8 hours ago | parent | next [-]

I mostly agree with you.

Though something I half-miss is using my own software as I build it to get a visceral feel for the abstractions so far. I've found that testability is a good enough proxy for "nice to use" since I think "nice to use" tends to mean that a subsystem is decoupled enough to cover unexpected usage patterns, and that's an incidental side-effect of testability.

One concern I have is that it's getting harder to demonstrate ability.

e.g. GitHub profiles were a good signal, though one that nobody cared about unless the hiring person was an engineer who could evaluate it. But now that signal is even more rubbish. Even readmes and blog posts are becoming worse signals since they don't necessarily showcase your own communication skills anymore nor how you think about problems.

Aperocky 8 hours ago | parent | next [-]

Funny enough, I think GitHub and communication are still a huge part of what I see.

The GitHub code itself may be irrelevant, but is the product KISS/UNIX? Or is it a demonstration of a complete lack of discipline about what "features" should be added? If you see something that has multiple weakly related or completely irrelevant features strung together, that says something. Additionally, AI will often create spaghetti structures, and it requires human shepherding to keep the structure sound.

Same with communication. I have a nose for AI; I know when something is AI slop. In my current job, docs I send with the expectation that others will read them are always prefaced with -- this section typed 100% by aperocky -- and I dispense with grammar and spelling checks for added authenticity. I'll then add -- following section is AI generated -- to mark the end of my personal writing.

I think that is the way to go in the future. I pass intentional thinking into AI, not the other way around. There is knowledge flowing back, for sure, but only humans possess intention, at least for now.

kaashif 8 hours ago | parent | prev | next [-]

Those things are all still signals, if taken from a snapshot of the pre-AI Internet.

treyd 7 hours ago | parent [-]

People were still gaming GitHub profiles before AI, sometimes even just reuploading existing repos as their own.

the_af 7 hours ago | parent | prev [-]

> But now that signal is even more rubbish. Even readmes and blog posts are becoming worse signals since they don't necessarily showcase your own communication skills anymore nor how you think about problems.

Yup. I've spotted former coworkers who I know for a fact can barely write in their native language, let alone in English, working for AWS and writing English-language technical blog posts in full AI-ese. Full of the usual "it's not X, it's Y", full of AI-slop. Most of the text is filler, with a few tidbits of real content here and there.

I don't know about before, but now blog posts have become more noise than signal.

Aperocky 6 hours ago | parent | next [-]

It's a strong signal in the negative direction, the best kind of signal really.

icedchai 4 hours ago | parent | prev [-]

The "dead Internet" theory has become more real. It's especially bad on LinkedIn. Everyone is now an "AI expert", posting generated slop and updating their profiles with AI-enhanced headshots.

the_af 3 hours ago | parent [-]

> It's especially bad on LinkedIn

Agreed, but to be fair, LinkedIn was especially bad to begin with.

Even before AI-slop, LinkedIn posts were rightfully mocked. Self-congratulatory or self-pitying, full of empty platitudes and "lessons learned" and "journeys" (ended or started). There was never anything worth reading to begin with.

Now it's of course worse. I don't think I can stand reading about another self-appointed expert on LinkedIn writing about their completely unwarranted strategy and/or lessons and/or skepticism about AI.

I only go to LinkedIn for the daily puzzles!

icedchai 24 minutes ago | parent [-]

Yes, we have more "thought leaders" than ever, all acting like copy-and-pasting from a textbox is some sort of unique skill.

rootusrootus 8 hours ago | parent | prev | next [-]

> My implementation speed and the bug fixing of my hand-typed code used to be the bottleneck

I remember those days fondly and often wish I could return to them. These days it's not uncommon to go a couple days without writing a meaningful amount of code. The cost of becoming too senior I suppose.

simonw 7 hours ago | parent [-]

Anecdotally I've been observing a significant uptick in the amount of code being produced by my peers who are in senior engineer, leadership and engineering management positions.

They can take their 20+ years of experience and use it to build working systems in the gaps between meetings now. Previously they would have to carve out at least half a day of uninterrupted time to get something meaningful done.

rootusrootus 4 hours ago | parent [-]

> build working systems in the gaps between meetings now

Agreed, I've actually done this. I was sitting in a meeting where someone was asking what tooling we could build, what it might be capable of, what their options were. So while we were chatting, I had Claude build a working demo.

In the end it still needs to be turned into an enterprise app with all the annoying accoutrements that go with that, but for demo work it was phenomenal.

eloisant 5 hours ago | parent | prev | next [-]

I'm excited and scared at the same time.

Yes I'm much more productive than before, and I'm convinced we can't get rid of engineers altogether... But how long until my team of 5 gets replaced by a single engineer? Am I going to be the one to keep my job or one of the 4 to be let go?

Aperocky 3 hours ago | parent | next [-]

If the team does the exact same thing, not very long.

The ability to know what to build and what not to build is going to be as important as knowing how to build it. I still think engineers have an edge here. All my childhood dreams of what I should be able to do or build are becoming a reality, and the only thing blocking me is lack of time. I want to go faster still.

at-fates-hands 5 hours ago | parent | prev [-]

When I was in automation a decade ago, they kept telling us to never tell people this was going to replace them. What you tell them is that it will allow their teams to finally focus on what really matters. Instead of working on all these repetitive tasks, now they can focus on the much larger issues. Everybody bought in; teams felt like the automation we were doing was really going to make their jobs easier.

It never did.

Managers realized they could trim their teams down after we were done, and did in fact lay off people by the hundreds. Doing the same work with fewer people was beneficial to them, because now they got bigger bonuses and salary increases for adding to the bottom line of the company. Many managers who did nothing more than lay off half their team were promoted faster up the ranks.

So yes, be scared, be VERY scared, and have a Plan B and a Plan C going forward. The people who created this have rose-colored glasses about how it's going to revolutionize business. The actual business owners and CEOs just see another new way to reduce human capital in order to increase profits.

the_af 7 hours ago | parent | prev [-]

> As an engineer, I've never been more excited about this job.

How long do you think it'll take for the AI trend to mostly automate the parts of your job that still make you excited?

Everyone thinks it won't be them, it will be others that will be impacted. We all think what we do is somehow unique and cannot be automated away by AI, and that our jobs are safe for the time being.

xienze 7 hours ago | parent | next [-]

> How long do you think it'll take for the AI trend to mostly automate the parts of your job that still make you excited?

Yeah, no one ever thinks beyond "whoa, how cool, I cloned Slack in 15 minutes!"

Personally, the thing I find more depressing is turning a career that was primarily about solving interesting puzzles in elegant ways into managing a swarm of idiot savant chatbots with "OK, that looks good" or "no, do it better" commands.

Aperocky 6 hours ago | parent [-]

The problem I'm trying to solve with agents is similar here. For instance, my comment likely made zero impression on you because I'm against both of the things that you are also against here.

9rx 3 hours ago | parent | prev | next [-]

> How long do you think it'll take for the AI trend to mostly automate the parts of your job that still make you excited?

The exciting part of the job is, and always has been, listening to idle chitchat where you pick up on the subtle cues of where someone is finding difficulty in their life, and then solving those problems. I think AI could already largely handle that today just fine, except:

You have to convince, especially non-technical, people to have idle chitchat with machines instead of humans

-or-

Convince them of and into having a machine always listening in to their idle conversations with humans

Neither of those is all that palatable in the current social landscape. If anything, people seem to be growing more wary of letting technology into their thoughts. Maybe there is never a future where humans become accepting of machines always being there, trying to figure out what is wrong with them.

The trouble with AI replacing jobs is that a lot of jobs exist only because people want to have other people to talk to and are willing to pay for the company.

Aperocky 6 hours ago | parent | prev [-]

As someone in the 99th percentile of token usage, it's super clear to me where the agent will not be able to replace my judgment. Two areas:

1. If the task exceeds the context, the agent does random stuff that often works against simplicity and coherent logical structure.

2. LLMs have zero intention, and rely on you to decide what to build and, more importantly, what not to build.

As such, I'm the limit on the number of concurrent agents working for me, because there is still a limit to my output of engineering judgment. I do get better, both at generating and delivering this judgment. Beyond this limit, the output becomes garbage.

At this current year and date, the AI does not automate me away in any way; I have something it just flat out doesn't have.

the_af 6 hours ago | parent [-]

Playing devil's advocate here, I'm not antagonizing you but thinking out loud.

> If the task exceeds the context, the agent does random stuff that often works against simplicity and coherent logical structure.

That's a current technical limitation. Are you so sure it won't be overcome in the near/mid future?

> LLMs have zero intention, and rely on you to decide what to build and, more importantly, what not to build

But work is being done to remove or automate even this layer, right? It may be hyperbole (in fact, it is), but aren't Anthropic et al. predicting this? Why wouldn't your boss, or your boss' boss, do this instead of you? If they lack the judgment currently, are you so sure they cannot gain it if they don't have to waste time learning how to code? If not now, what about soon-ish?

> At this current year and date, the AI does not automate me away in any way

Not now, granted. But what about soon? In other words, shouldn't you be worried as well as excited?

Aperocky 4 hours ago | parent | next [-]

Well, if you do nothing, you should definitely be worried, because not using LLMs is rapidly becoming untenable.

If you do a lot, you'll grow skeptical about some of the claims and hype, and develop a sense of where this is leading.

My position is that if someone uses LLMs a lot, they may be right or wrong about the future of LLMs. If they don't, then they are definitely not right, or only lucky.

My personal judgment is that both of these are hard caps until someone invents something that's not a transformer; starting from scratch, basically.

the_af 3 hours ago | parent [-]

> because not using LLMs is rapidly becoming untenable

Completely agreed. This is not what I'm advocating for. And definitely, there's a lot of self-serving hype (and fearmongering can be another kind of hype) by AI companies. But some of it I think will be true, or enough companies will believe it to be true, which amounts to the same thing.

I'm just worried, I cannot help it. And I'm not saying "don't use AI", I'm pushing back about the feeling of reckless "excitement".

ChrisLTD 4 hours ago | parent | prev [-]

Does it seem to you like those issues will be solved soon? Does your boss have the time to do this AI wrangling work on top of their other tasks even if they don't have to learn to code?

Aperocky 3 hours ago | parent | next [-]

If they never learned to code, it won't be very easy for them to build with AI, or to catch the BS it generates.

ChrisLTD 12 minutes ago | parent [-]

Yes, this is the obvious problem.

We've been through cycles like this before. Back in the day, Dreamweaver was going to put every web developer out of a job. More recently, Squarespace was going to do something similar. However, as soon as you step off the well-trodden path, you encounter tougher-to-debug issues, or you want some customization that the tools aren't aware of or designed to handle, and now you're hiring or paying a specialist again.

the_af 3 hours ago | parent | prev [-]

> Does it seem to you like those issues will be solved soon?

No.

But I was also very skeptical about AI being able to code semi-reliably during the early stages of GPT hype, and look where I am now: most of the code I produce is written by an AI. So I was wrong before, which makes me doubt my own ability to predict the near future.

> Does your boss have the time to do this AI wrangling work on top of their other tasks even if they don't have to learn to code?

My boss' boss would probably love to get rid of both me and my direct boss. A whole class of problems would disappear, freeing up time for people higher up the chain to focus on this... either them or a tiny group of engineers, which leaves me out of a job either way. I've already seen people at small shops get fired because their immediate semi-technical boss can now do their job with AI (I can't go into details for privacy reasons. Also, it doesn't matter if the end result is flawed; what matters is that it's "mission accomplished" and someone is out of a job).

ChrisLTD 16 minutes ago | parent [-]

> But I was also very skeptical about AI being able to code semi-reliably during the early stages of GPT hype, and look where I'm now: most of the code I produce is written by an AI

My impression from a couple of years ago was that it was fairly decent at coding; it was just slow to go from question to code, and the tooling around that has improved significantly, so it's all pretty quick now. I think whether or not the models are fundamentally better at raw coding is a murkier question.

They still fall down at bigger architectural tasks, go off the rails, hallucinate, etc. So, it seems to me like a core problem with the current technology.

> it doesn't matter if the end result is flawed, it matters that "mission accomplished" and someone is out of a job

This is a short-term problem. If the market has any sanity left, the shops that maintain the talent to execute well will outperform the shops that were short-sighted.