xgbi 5 hours ago

Rant mode on.

For the second time this week, I spent 45 minutes this morning reviewing a merge request where the guy had no idea what he did, didn't test anything, and let the LLM hallucinate a very bad solution to a simple problem.

He just had to read the previous commit, which introduced the bug, and think about it for a minute.

We are creating young people that have a very limited attention span, have no incentive to think about things, and have very pleasing metrics on the DORA scale. When asked what their code is doing, they just don't know. They can't even explain the choices they made.

Honestly, I think AI is just a very, very sharp knife. We're going to regret this, just like we regret the mass offshoring of the 2000s.

rhubarbtree 5 hours ago | parent | next [-]

Yes, we created them with social media. Lots of people on this site did that by working for the social media companies.

AI usage like that is a symptom not the problem.

signatoremo an hour ago | parent | prev | next [-]

Your rant is misplaced. It should be aimed at hiring (candidate screening), at training (getting junior developers ready for their jobs), at engineering (code review and testing), and so on.

If anything, AI helps expose shortcomings of companies. The strong ones will fix them. The weak ones will languish.

JLO64 5 hours ago | parent | prev | next [-]

I'm not surprised to see reports like this for open source projects where the bar for contributing is relatively low, but am surprised to see it in the workplace. You'd imagine that devs like that would be filtered out via the hiring process...

I'm a coding tutor and the most frustrating part of my job is when my students use LLM-generated code. They have no clue what the code does (or even what libraries they're using) and just care about the pretty output. When I asked one of them questions about his code, he responded verbatim "I dunno" and went back to prompting ChatGPT (I ditched that student afterward). Something like Warp, where the expectation is to not even interact with the terminal, is equally bad as far as I'm concerned, since students won't have any incentive to understand what's under the hood of their GUIs.

To be clear, I don't mind people using LLMs to code (I use them to code my SaaS project); what I do mind is them not even trying to understand wtf is on their screen. This new breed of vibe coders is going to be close to useless in real-world programming jobs, which, combined with the push targeted at kids that "coding is the future", is going to result in a bunch of below-mediocre devs both flooding the market and struggling to find employment.

saulpw 25 minutes ago | parent | next [-]

> You'd imagine that devs like that would be filtered out via the hiring process...

...except when the C-suite is pressuring the entire org to use AI tools. Then these people are blessed as the next generation of coders.

xgbi 2 hours ago | parent | prev [-]

Same. I use LLMs to figure out the correct options to pass to the Azure or AWS CLI, or for other low-key things. I still code on my own.

But our management has drunk the Kool-Aid and now obliges everybody to use Copilot or other LLM assistants.

driverdan 4 hours ago | parent | prev | next [-]

> We are creating young people that have a very limited attention span

This isn't about age. I'm in my 40s and my attention span seems to have gotten worse, and I don't use much social media anymore either. I see it in other people too, regardless of age.

saulpw 23 minutes ago | parent [-]

Same. What do you think it's about? Future shock? Smartphone use (separate from social media)? Singularity overwhelm? Long Covid?

Archelaos 5 hours ago | parent | prev | next [-]

Why did you spend 45 min reviewing it instead of outright rejecting it? (Honest question.)

xgbi 2 hours ago | parent | next [-]

Because the codebase wasn't originally in my scope, and I had to review it urgently due to a regression in production. I took the time to understand the issue at hand and why the code had to change.

To be clear, the guy moved a Docker image back from running as a non-root user (user 1000) to starting as root and `exec su`-ing into the user after doing some root things in the entrypoint. The only issue is that, looking at the previous commit, you could see that the K8s deployment using this image had wrongly changed the userId to 1000 instead of 1001.
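
For reference, the pattern he reintroduced looks roughly like this (a minimal sketch; the paths, UID, and user name are made up, not the actual MR):

    #!/bin/sh
    # entrypoint.sh: start as root, do the privileged setup,
    # then drop to the unprivileged app user for the real process
    chown -R 1001:1001 /app/data
    exec su appuser -c '/app/start.sh'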

But since the guy didn't take the time to take even a cursory look at why something that had been working stopped working, he asked the LLM "I need to change the owner of some files so that they are 1001", and the LLM happily obliged in the most convoluted way possible (about 100 lines of code changed).

The actual fix I suggested was:

     securityContext:
    -  runAsUser: 1000
    +  runAsUser: 1001

GuardianCaveman 4 hours ago | parent | prev [-]

He didn't read it first either, apparently.

dawnerd 5 hours ago | parent | prev | next [-]

I've just started immediately rejecting AI pull requests. I don't have time for that.

There's going to be a massive opportunity for agencies that are skilled enough to come in and fix all of this nonsense when companies realize what they've invested in.

kemayo 5 hours ago | parent [-]

Almost worse are the AI bug reports. I've gotten a few of them on GitHub projects, where someone clearly pasted an error message into ChatGPT and asked it to write a bug report... and they're incoherent.

fluoridation 5 hours ago | parent [-]

Some are using them to hunt bug bounties too. The curl developer has complained about dealing with a deluge of bullshit reports that contain no substance. I watched a video the other day that walked through an example: a report of a buffer overflow. TL;DR: code was generated by some means that included the libcurl header and called strlen() on a buffer with no null terminator, and that's all it did. It triggered ASan, and a report was generated from that output, talking about how a remote website could overflow a buffer in the client's cookies using a crafted response. Mind you, the code didn't even call into libcurl once.
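
From that description, the entire "proof of concept" would amount to something like this (my reconstruction, not the actual code from the report; the names are made up):

    #include <curl/curl.h>  /* included, but never called */
    #include <string.h>

    int main(void) {
        /* 7 bytes, deliberately no null terminator */
        char cookie[7] = {'c','o','o','k','i','e','='};
        /* strlen() reads past the end of the array; ASan flags a
           stack-buffer-overflow, and that's the entire "bug" */
        return (int)strlen(cookie);
    }

Build it with -fsanitize=address and you get a scary-looking report that never touches libcurl.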

akomtu 3 hours ago | parent | prev | next [-]

When Neuralink becomes usable, the same hordes of people will rush to install the AI plugin so it can relieve their brains of any effort. The rest will be given a difficult choice: do the same, or become unemployable in the new AI economy.

bluefirebrand an hour ago | parent [-]

I can't wait until people are writing malware that targets neuralink users with brain death

Cyberpunk future here we come baby

tmaly 3 hours ago | parent | prev | next [-]

There is a temptation to fight AI slop with AI slop.

SamuelAdams 5 hours ago | parent | prev [-]

> We are creating young people that have a very limited attention span, have no incentive to think about things, and have very pleasing metrics on the DORA scale. When asked what their code is doing, they just don't know. They can't even explain the choices they made.

This has nothing to do with AI, and everything to do with a bad hire. If the developer is that bad with code, how did they get hired in the first place? If AI is making them lazier, and they refuse to improve, maybe they ought to be replaced by a better developer?