ryanmcl 6 hours ago

Coming at this from the opposite end...I started coding 8 months ago with no experience, so AI assistance isn't replacing skills I had, it's the reason I have any skills at all.

But I've noticed something similar to what you describe. When Claude writes a solution for me, I understand it about 70% of the time. That other 30% used to bother me and I'd dig in. Lately I catch myself just accepting it and moving on. The velocity is addictive but you're right that something is being traded away.

The cost I've started noticing most: I'm worse at holding the full architecture of my own app in my head than I should be 8 months in. I can describe what each piece does but I couldn't rebuild it from scratch without help. Not sure the version of me who learned without AI would have that problem.

Still wouldn't undo the trade: I have a live production app that wouldn't exist otherwise. But it's an honest cost worth naming.

dawnerd 5 hours ago | parent | next [-]

The biggest problem is it’ll teach you bad habits. For example, Claude and GPT love to use fallbacks. They generate code that’ll get a positive result at any cost, even if it’s horribly inefficient. If you don’t have past knowledge you might just think that’s how it is.

Now before someone says that junior devs make the same mistakes, yes, to some extent.
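A minimal Python sketch of the pattern (names hypothetical): the fallback version returns a plausible-looking default at any cost, so a bad path or malformed file never surfaces, while the explicit version lets the failure show up where it happened.

```python
import json

def load_config_llm_style(path: str) -> dict:
    """The fallback habit: catch everything, return something, look fine."""
    try:
        with open(path) as f:
            return json.load(f)
    except Exception:
        # Silently pretend an empty config is acceptable.
        return {}

def load_config_explicit(path: str) -> dict:
    """Let a missing or malformed file fail loudly at the call site."""
    with open(path) as f:
        return json.load(f)

# The first call hides a typo'd path forever; the second surfaces it
# immediately as FileNotFoundError.
```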

tao_oat 5 hours ago | parent | next [-]

And they love to do this in spite of writing "NO FALLBACKS" etc. in your AGENTS.md.

vergessenmir 5 hours ago | parent | prev | next [-]

If you don't have the experience, you can't give it stylistic guidance or idiomatic patterns, or provide examples to direct it.

This leads to the idea that LLMs working in existing languages can't really pick up new idiomatic patterns.

For new engineers I think new paradigms will emerge that invalidate the need to know the current set of design patterns and idioms. Look at the resurgence of unit tests or the new interest in verification systems.

re-thc 4 hours ago | parent | prev [-]

> They generate code that’ll get a positive result at any cost, even if it’s horribly inefficient.

If only efficiency were the only problem with that. Sometimes an error state should raise an error. This is the equivalent of swallowing all exceptions and pretending all is fine. It just means nothing works.

wrs 2 hours ago | parent | prev | next [-]

Claude Code has a “learning mode” that makes it explain what it’s doing and leave “TODO(human)” placeholders in the code where you have to do part of it.
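Roughly what that looks like (a hypothetical sketch, not Claude Code's literal output): the model writes the scaffolding and marks the core step for you to do, here shown with the human's part already filled in below the marker.

```python
def median(values: list[float]) -> float:
    """Scaffold the model might produce in learning mode."""
    if not values:
        raise ValueError("median of empty list")
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]
    # TODO(human): average the two middle elements for the even case.
    # (What you'd write in yourself:)
    return (ordered[mid - 1] + ordered[mid]) / 2
```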

levkk 5 hours ago | parent | prev | next [-]

Models don't learn. They retrain them periodically, but junior engineers learn much faster and constantly improve. If you stop learning, you will only be as good as the model.

I've been coding (software engineering, I guess) for close to 15 years. The models' skill set is a comfortable L1 (intern), pushing L2 (junior). They are getting better, but at a snail's pace compared to a human learning the same thing.

cracell 4 hours ago | parent [-]

This was my biggest frustration with LLM-based coding, but Agent Skills have largely solved it.

While there’s a lot of room to improve them, they’re a huge game changer for building effective coding harnesses.

inciampati 5 hours ago | parent | prev [-]

Get the system to build a clean architecture and explain it to you; that helps it build a better system. A huge part of working with these models for engineering is getting them to create reports, for themselves and of course for us to read and understand. The bottleneck is actually our verification capacity.