rafaelmn 2 days ago

> but this was true when they wrote their own code with stack overflow.

Searching for solutions and integrating the examples you find takes effort that develops into a skill. You would rarely get solutions from SO that would just drop into the codebase. If I give you a task and you produce a correct solution by the initial review, I now know I can trust you with this kind of problem in the future. Especially after a few reviews.

If you just vibed through the problem, the LLM might have given you the correct solution - but there is no guarantee it will do so again in the future. And because you spent less effort on search, official docs, and integration into the codebase, you learned less about everything surrounding it.

So by using LLMs as a junior you are just breaking my trust, and we both know you are not a competent reviewer of LLM code - so why am I even dealing with you when I can get LLM output faster myself? This has been my experience so far.

OvbiousError 2 days ago | parent | next [-]

> So by using LLMs as a junior you are just breaking my trust, and we both know you are not a competent reviewer of LLM code - so why am I even dealing with you when I can get LLM output faster myself? This has been my experience so far.

So much this. I see a 1,000-line, super-complicated PR that was whipped up in less than a day, and I know they didn't read all of it, let alone understand it.

theshrike79 a day ago | parent [-]

This is exactly what code reviews are for; doing reviews with juniors is always time-consuming.

rafaelmn 16 hours ago | parent [-]

Yes, but you're supposed to reap the benefit of learning from that junior. If there is no progress and no trust in acquired ability, they are just a burden.

fhd2 2 days ago | parent | prev | next [-]

Like with any kind of learning, without a feedback loop (as tight as possible, IMHO), it's not gonna happen. And there is always some kind of feedback loop.

Ultra short cycle: Pairing with a senior, solid manual and automated testing during development.

Reasonably short cycle: Code review by a senior within hours, ideally of small subsets of the work; QA testing by a separate person within hours.

Borderline too long cycle: Code review of larger chunks of code by a senior with days of delay; QA testing by a separate person days or weeks after implementation.

Terminally long feedback cycle: Critical bug in production, data loss, negative career consequences.

I'm confident that juniors will still learn, eventually. Seniors can help them learn a whole lot faster though, if both sides want that, and if the organisation lets them. And yeah, that's even more the case now than in the pre-LLM world.

DenisM 2 days ago | parent [-]

An LLM can also help with learning if you ask it what could be done better. Seniors can write a preprompt so that company customs are taken into account.
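As a minimal sketch of what such a team preprompt might look like (the filename follows Cursor's plain-text project-rules convention, but every rule below is hypothetical, made up for illustration):

```
# .cursorrules — hypothetical team preprompt
You are assisting on our internal services codebase.
- Follow our error-handling convention: return wrapped errors; never panic in library code.
- Prefer the team's existing utility modules over adding new dependencies.
- After proposing code, briefly explain what could still be done better and why,
  so the author learns something from the change.
```

The last rule is the senior's lever here: it turns every generation into a small review lesson rather than a silent diff.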

dchftcs a day ago | parent | prev | next [-]

People obviously churning out LLM code uncritically should be investigated and, depending on the findings, made redundant. It's a good thing that this allows teams to filter out these people earlier. In my career I have found that a big predictor of code quality is the amount of thought put into it and the brains put behind it. Procedures like code review can only catch so many things, and a senior's time is saved only when a junior has actually put their brain to work. If someone is shown to never put thought into their work, they need to make way for people who actually do.

godelski a day ago | parent | prev [-]

  > you learned less about everything surrounding it.
I think one of the big acceleration points in my skills as a developer was when I moved from searching SO and other similar sources to reading the docs and reading the code. At first, this was much slower. I was usually looking for a more specific thing and didn't usually need the surrounding context. But then as I continued, that surrounding context became important. That stuff I was reading compounded and helped me see much more. These gains were completely invisible and sometimes even looked like losses. In reality, that context was always important, I just wasn't skilled enough to understand why. Those "losses" are more akin to a loss you have when you make an investment. You lost money, but gained a stock.

I mean, I still use SO, Medium articles, LLMs, and tons of other sources. But I find myself turning to the docs as my first choice now. At worst, I come away with better questions to bring to those other sources.

I think there's a terrible belief that has developed in CS and that the LLM crowd targets: the idea that everything is simple. There's truth to this, but there's a lot of complexity in simplicity. The defining characteristic separating an expert from a novice is their knowledge of nuance. The expert knows which nuances matter and which don't. Sometimes a small issue compounds and turns into a large one; sometimes it disappears. The junior can't tell the difference, but the expert can. Unfortunately, this can sound like bikeshedding and quibbling over nothing (sometimes it is). But only experts can tell the difference ¯\_(ツ)_/¯

samurai_sword 15 hours ago | parent [-]

You are absolutely right. I work as a robotics engineer at a company doing autonomous systems. I use Cursor and am currently using gpt-5-high for coding. When I started coding for my project 3 years ago, there was no AI coding. I had to learn how to code by reading lots of docs and lots and lots of code (the nav2 stack). This gave me a sense of how code is architected, why it is the way it is, etc. I also try not to blindly follow any code I see; for every single piece of code, I critically ask lots of questions (this made me crazy, the good kind). This helped me learn extremely fast. So the point is: everyone must know when their brain is being used and when it is not. If your brain is not being used at any point in a project, then you are probably out of the loop.

The thing about AI is that when coding models started out, they were kinda bad. But I feel any tool that saves time or effort is a useful tool. I now use AI mostly to add some methods, ask questions about the codebase, and brainstorm ideas against that codebase. There are levels to how you use this tool (AI):

1. Complete trust (if it's an easy task and you can verify it quickly).

2. Medium trust (you ask questions back to the AI to critically understand why it did what it did).

3. Zero trust (this is very important for learning fast, not for coding: you need to press the AI to give you lots of information, right or wrong, cross-check it manually, and soak it into your brain carefully. Here you will find out whether the AI is good or bad).

Conclusion: we are human beings. Any tool must be used with caution, especially an AI that is capable of playing tricks on your precious brain. Build razor-sharp instincts and trust ONLY them.