zarzavat 13 hours ago

If you search HN history back to the beginnings of AI coding in 2021, you will find people observing that AI is bad for juniors because they can't distinguish between good and bad completions. No surprise; it's always been this way.

Edit: interesting thread: https://news.ycombinator.com/item?id=27678424

Edit: an example of the kind of comment I was talking about: https://news.ycombinator.com/item?id=27677690

thecupisblue 9 hours ago | parent | next [-]

Pretty much, but it already starts at the prompting and context level.

Senior engineers already know exactly where the changes need to be made and can suggest what to do. They probably know the pitfalls and have established patterns, architectures and designs in their head. Juniors, on the other hand, don't have that, so they go with whatever. Nowadays a lot of them also "ask ChatGPT about its opinion on architecture" when told to refactor (a real quote from real junior/mid engineers), and end up using whatever sloppypasta they get served.

Senior devs earned their sense of what is good/bad by writing code, learning how hard and annoying it is to make a change, then reworking those parts or doing better the next time. The feedback loop was impactful because it was grounded in that code and in them working with it, so they knew exactly where the annoying parts were.

Vibe-coding juniors don't know that; their conversation context knows that. Once things get buggy and changes get hard, they fill up their context with tries/retries until it works, so their feedback loop is trained on prompts and coding tools, not on the code itself.

Even if they read the generated code, they have no experience using it, so they are not aware of the issues - e.g. something would be better modelled as typed state, but since they never work with the code directly they won't care: they don't have to handle the edge cases, they won't see the DX benefits in an IDE, and they will only ever build a shallow mental model of how it works instead of a full one.
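To make the typed-state point concrete, here's a minimal TypeScript sketch (all names invented for illustration). A discriminated union forces every consumer to handle each state explicitly, which is exactly the edge-case pressure you never feel if you don't use the code yourself:

```typescript
// Shallow model: nothing stops `data` from being read while
// `loading` is true, or `error` from being ignored entirely.
type RequestShallow = {
  loading: boolean;
  data?: string;
  error?: Error;
};

// Typed state: each variant carries only the fields that exist in
// that state, so the compiler forces callers to handle every case.
type Request =
  | { status: "idle" }
  | { status: "loading" }
  | { status: "success"; data: string }
  | { status: "failure"; error: Error };

function render(req: Request): string {
  switch (req.status) {
    case "idle":
      return "nothing yet";
    case "loading":
      return "spinner";
    case "success":
      return req.data; // `data` only exists in this branch
    case "failure":
      return req.error.message; // likewise `error`
  }
}
```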

This leads to insane inefficiencies: wasting 50 prompt cycles instead of 10, missing cross-codebase patterns, no learning transfer from codebase to codebase, etc.

With a minor understanding of state modeling and architecture, a vibe-coding junior can be made 100x more efficient, but due to the vibe-coding itself they will probably never learn state modeling and architecture, never learn to refactor or properly manipulate abstractions - an eternal cycle of LLM-driven sloppypasta code, trained on millions of terrible GitHub repositories, outdated APIs and old Stack Overflow answers.

FpUser 6 hours ago | parent | next [-]

>"they will fill up their context with tries/retries until it works"

Or until it does not. On numerous occasions I've observed LLMs get stuck in an endless loop of: fix one thing, break the other. A senior is capable of fixing it themselves; a junior may not even have a clue how the code works.

mattmanser 9 hours ago | parent | prev [-]

I was thinking about this last week.

I don't think this is necessarily a massive moat for senior programmers. I feel it's not a massive jump to teach AI architecture patterns and good data modelling.

I feel that Anthropic et al. just haven't got to that training stage yet.

That then leaves you with the mental model problem. Yes, there's then a large context problem, but again I was wondering if setting up an MCP server that presented the AI a meaningful class map or something might help.

Essentially, give the AI a mental model of the code. I personally find class maps useless as they tend to clash with my own mental model, but they might work for AI. The class map can obviously be built without AI, but then you might even get AI to go through the code function by function and annotate the map with comments about any oddities of each function. The MCP server could even limit the size of the map depending on what part of the code it's looking to change (working on the email sending? Don't bother sending it the UI layer).
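The map-building half at least is mechanical. A rough sketch using the TypeScript compiler API - the `scope` filter stands in for the "limit the map by area" idea, and all names here are made up for illustration:

```typescript
import * as ts from "typescript";

interface ClassEntry {
  file: string;
  name: string;
  methods: string[];
}

// Build a compact class map for the files matching `scope`,
// e.g. scope = "src/email" when the task is about email sending.
// Top-level classes only, for brevity.
function buildClassMap(rootFiles: string[], scope: string): ClassEntry[] {
  const program = ts.createProgram(rootFiles, { allowJs: true });
  const entries: ClassEntry[] = [];

  for (const source of program.getSourceFiles()) {
    if (source.isDeclarationFile) continue;
    if (!source.fileName.includes(scope)) continue; // size limit: only the relevant area

    ts.forEachChild(source, (node) => {
      if (ts.isClassDeclaration(node) && node.name) {
        const methods = node.members
          .filter(ts.isMethodDeclaration)
          .map((m) => m.name.getText(source));
        entries.push({ file: source.fileName, name: node.name.text, methods });
      }
    });
  }
  return entries;
}
```

An MCP tool would then just serve this output for whatever scope the model asks about; the AI-written per-function annotations would be a second pass on top.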

I'm guessing someone's already tried it, given some of the ridiculous .Claude folders I've seen[1], but I've seen no one talking about whether it works yet in the discussions I follow.

[1] Which I suspect are pointlessly overcomplicated and make CC worse, not better.

thecupisblue 7 hours ago | parent [-]

Yeah, tried similar things.

The issue is that having the model learn that on its own is currently an inaccurate process where a lot gets overlooked. I recently tried some of the techniques that fared well on smaller repositories on a giant monorepo, and while they sometimes did yield improvements, more often things got overlooked: dependencies forgotten about, testing suites confused. And in the end it wastes a ton of compute for smaller yields.

It will get better, that I am sure of, but currently the best way is to introduce it to an architecture and give it some samples so it can do what it does best: follow text patterns. But people mostly try to one-shot things with this magical AI they heard about, without any proper investment of time and mindshare.
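For what "give it some samples" can mean in practice, a hedged sketch: one canonical file that encodes your conventions (layering, error handling, naming), which you point the model at before asking for new features. Everything here is invented for illustration:

```typescript
// samples/user-service.ts - canonical pattern file (illustrative).
// Convention: services never throw; they return a Result so callers
// must handle failure, and all I/O hides behind a repository.

type Result<T> = { ok: true; value: T } | { ok: false; error: string };

interface User {
  id: string;
  email: string;
}

interface UserRepository {
  findById(id: string): Promise<User | null>;
}

class UserService {
  constructor(private readonly repo: UserRepository) {}

  async getUser(id: string): Promise<Result<User>> {
    const user = await this.repo.findById(id);
    if (!user) return { ok: false, error: `user ${id} not found` };
    return { ok: true, value: user };
  }
}
```

Asked for, say, an OrderService next, the model will mostly mirror whatever this file does - which is exactly the pattern-following described above.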

While some might say "oh, that won't work well in legacy repositories, we've got 6 architectures here", pointing that out and adding a markdown file explaining each one helps a ton. And not "hey claude, generate me an architecture.md", but transferring the actual knowledge you have, together with all the thorny bits, into documentation - which will improve both your AI usage and your organisation.

fxj 11 hours ago | parent | prev | next [-]

Also, AI cannot draw conclusions like "from A and B follows C". You really have to point its nose at the result you want before it finally understands. This is especially hard for juniors because they are just learning to see the big picture. For a senior who already knows more or less what they want and only needs to work out the nitty-gritty details, this is much easier. I don't know where the claims come from that AI is PhD level. When it comes to reasoning it is more like a 5-year-old.

zevon 12 hours ago | parent | prev [-]

This. Anecdotally, around 2021 I had a student who had some technical inclination and interest but no CS education and no programming experience. He got into using AI early, and with the help of ChatGPT he was able to contribute rather substantially to something we were developing at the time that would normally have been much too complex for a beginner. However, he also introduced quite a few security issues, did a lot of things in very roundabout ways, did not even consider some libraries/approaches that would have made his work much easier and more maintainable, and his documentation was enthusiastic but often... slightly factually questionable, and also quite roundabout.

It was quite interesting to have discussions with him after his code check-ins, and I think the whole process was a good educational experience for everyone involved. It would not have worked this way without the combination of AI and experienced people.