8note 2 days ago

It's not necessarily better, but it's certainly good enough if you're already used to distributing work to different people.

The scale of the code doesn't really matter that much, as long as a programmer can point it at the right places.

I think you actually want to be really involved in the skeleton, since from what I've seen the agent is quite bad at making skeletons that it can do a good job of extending.

If you get the base right, though, the agent can make precise changes in large codebases.

jdw64 2 days ago | parent | next [-]

Thinking about it, what I find interesting about the output of agentic coding is this:

I mostly agree with the general observation that it starts to break down as the context grows. But there is also a difference in how people evaluate it: some say agents are good at building the skeleton, while others say they are better at extending an existing structure.

I think this depends on the setup, and it is ultimately a trade-off.

In my case, I usually work on codebases around 60,000 LoC. The programs I deliver are generally between 60,000 and 80,000 lines of code. I think I can fairly call myself a specialist at that scale, since I have personally delivered close to 40 projects of that size.

At that scale, I felt that agentic coding was actually very good at building the initial skeleton.

I do not know what kind of work you usually do, but if your work involves highly precise, low-level tasks, then I can understand why you might feel differently.

In my case, I mostly assemble high-level libraries and frameworks into working systems, so that may be why I experience it this way.

sroussey 2 days ago | parent | next [-]

The coding agents are good at growing code.

Like a child growing up!

Also, like a cancer.

Similar process, different outcomes.

cdud3 2 days ago | parent [-]

That's why we started forcing our developers to take ownership of, and responsibility for, what their AI ships to other developers for review. It's stunning how much the amount of code decreases and the quality of the deliveries improves when developers put in extra effort to iterate on reducing the complexity AI introduces. In a lot of cases you can vibe-code that too, as long as you understand the output and guide your AI along the path.

slopinthebag 2 days ago | parent | prev [-]

I think it's just the context it's working in.

1M lines of HTML are infinitely more conducive for a language model to work in than 10k lines of complex multithreaded low-level code.

A lot of coding is just rehashing the same concepts in slightly novel ways; language models work great in this context as code-gen machines.

The hope is that we can focus our efforts on harder problems, using language models as a tool to make us more productive and more powerful, and, with the advances open-weight models have made, also less reliant on big tech companies to do so.

energy123 2 days ago | parent | prev [-]

I find LLMs are good at skeletons, but only if you are meticulous about writing down what you want before you start. Then give that text to GPT 5.5 Pro, and be prepared for a number of iterations.