abletonlive 3 hours ago

These opinions about what is going on with LLM development always stop at first-order effects and fail to account for second- and third-order effects.

> Skill atrophy

If LLMs are so good that you no longer have use for the skill, why do we care about skill atrophy? That skill isn't useful to most people anymore. There are plenty of examples in human history where a skill became obsolete, it was completely fine, and we went on to do higher-order things that were more useful.

> Even if they set out fully intending to provide the highest level of scrutiny to all generated code, they will gradually lose the ability to tell a good change from a bad one

If this (a first-order effect) is actually a problem, then it follows that we will naturally keep exercising our skill of detecting good changes from bad ones (a second-order effect), and so the skill will not atrophy (a third-order effect). Seems like your "problem" is self-correcting?

> At its core, the only defense I’ve got for that response is… this time feels different? Not a particularly rigorous defense, I admit, but I did warn you that this was the squishiest of the issues at hand.

Well, if you knew this, perhaps it would have been better not to lead with it and spend so many paragraphs on it.

> Some might argue that, even if that time comes eventually, that’s no reason not to make use of the tools that are available right now. But it should come as no surprise that I disagree. Better not to become overly dependent on AI coding agents in the first place so you’ll be better situated to weather the storm (and maybe even thrive) when it comes.

Well, this argument didn't turn out to be any less squishy than the first one. It's a self-correcting "problem," but you disagree and we should do X because you said so. What was the point of all of this, then?

> Prompt Injection

I also think this will likely always be a problem, but you could say the same about pretty much ANY tool we use in software development. Your viewpoint is similar to saying we should stop using libraries, because distributing code always carries the risk that a bad actor somewhere in the chain injects something malicious, even when the library comes from a trusted source in the industry. We have plenty of real-life examples of this happening. So far, still squishy.

> Copyright/licensing

> I’m not a lawyer! I’m a legal layperson offering my unqualified assessment of some tricky legal questions. Let’s get to it.

Sigh, this entire post is slop, isn't it? Bad look for whatever "standup for me" is.

edit: "Standup for me" is something that is made entirely irrelevant by agentic LLMs, no surprise. The irony is rich.

The author wants to be the gatekeeper of skill, quality, and how we develop while they hand feed us slop in the form of their blog posts.

palmotea 3 hours ago | parent [-]

> If LLMs are so good that you no longer have use for the skill, why do we care about skill atrophy? That skill isn't that useful to most people. There are so many examples of this in human history where it was completely fine and we went on to do higher order things that were more useful.

Because the LLMs actually aren't that good, so humans are expected to monitor them using the skills they no longer have the opportunity to develop and maintain.

The OP talked about that. Did you miss it?

> If this (first order effect) is actually a problem then it follows that we will naturally exercise our skill of detecting good changes from bad ones (second order effect) and the skill will not atrophy? (third order effect).

You're ignoring the anti-human psychological factors: humans are bad at continuously monitoring for occasional errors. The tendency will be to adopt a complacent attitude and default to allowing changes through. That's not a good environment for developing a skill, compared to actively using it.

abletonlive 3 hours ago | parent [-]

> Because the LLMs actually aren't that good, so humans are expected to monitor them using the skills they no longer have the opportunity to develop and maintain

If humans are expected to monitor them using the skill, then obviously they are still practicing the skill, and the skill is being developed and maintained. Help me understand why it is so difficult for everybody with this opinion to take another step into their own premise.

> humans are bad at continuously monitoring for occasional errors

Let's assume this is true for the sake of discussion: that's the job, LLM or not. Air traffic control? Occasional errors. Software bugs? Occasional errors. Department of Homeland Security? Occasional threats.

If handling the issue is hard and required of us, then it's a skill that people will naturally exercise, and therefore it won't atrophy.

If your argument were true, we'd have swarms of people doing accounting by hand instead of using accounting tools, out of worry that accountants would lose their ability to audit the tools' output.

That's not how it works in the real world, and we have plenty of examples to show it...

But sure, if your argument simply boils down to "this time... it's different," like the author is arguing, then let's leave it at that. There's no value in discussing it further, just like there was no value in the original post. It was just mindless slop to promote "standup for me," which also falls under the category of "things that are no longer relevant because of LLMs."