CharlieDigital 9 hours ago

This has been my observation with self-generated docs as well.

I have seen some devs pull out genuinely bad guidance by having the LLM introspect the code to define "best practices" and docs, because the model introduces its own encoded biases along the way. The devs are so lazy that they can't be bothered to simply type out the bullet points that define "good".

One example: we had an extracted C#/.NET snippet that was sprinkling in `ConfigureAwait(false)`, which belongs in library code and is generally not needed in ASP.NET application code. The coding agent saw some code that looked like "library" code and decided to apply it; then someone ran the LLM against that, pulled out "best practices", placed them into the repo, and started to pollute the rest of the context.
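To make the distinction concrete, here's a minimal sketch (the `TextUtil`, `OrdersController`, and `IOrderRepository` types are invented for illustration, not our actual code):

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

// Library-style helper with no dependency on a SynchronizationContext:
// ConfigureAwait(false) is a reasonable default here.
public static class TextUtil
{
    public static async Task<string> ReadAllTextAsync(Stream stream)
    {
        using var reader = new StreamReader(stream);
        return await reader.ReadToEndAsync().ConfigureAwait(false);
    }
}

// ASP.NET Core application code: there is no SynchronizationContext to
// return to, so ConfigureAwait(false) adds noise without changing behavior.
public record Order(int Id);
public interface IOrderRepository { Task<Order?> FindAsync(int id); }

[ApiController]
public class OrdersController : ControllerBase
{
    private readonly IOrderRepository _repository; // hypothetical repository

    public OrdersController(IOrderRepository repository) => _repository = repository;

    [HttpGet("/orders/{id}")]
    public async Task<Order?> GetOrder(int id)
        => await _repository.FindAsync(id); // no ConfigureAwait(false) needed
}
```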

I caught this when I found the code in a PR, then traced it back to the source and zeroed it out. We've also had to untangle some egregious use of `Task.Run` (again, not a best practice in C# web code, and you really want to know what you're doing with it).
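A rough illustration of the kind of misuse I mean (hypothetical `ReportsController` and `IReportService`, not our actual code): wrapping an already-async call in `Task.Run` inside a request handler just burns an extra thread-pool thread.

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public record Report(string Content);
public interface IReportService { Task<Report> BuildAsync(); }

[ApiController]
public class ReportsController : ControllerBase
{
    private readonly IReportService _reports; // hypothetical service

    public ReportsController(IReportService reports) => _reports = reports;

    // Anti-pattern: wrapping already-async work in Task.Run inside a request
    // just bounces it to another thread-pool thread for no benefit.
    [HttpGet("/reports/bad")]
    public async Task<Report> GetReportBad()
        => await Task.Run(() => _reports.BuildAsync());

    // Preferred: await the asynchronous call directly.
    [HttpGet("/reports/good")]
    public async Task<Report> GetReportGood()
        => await _reports.BuildAsync();
}
```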

At the end of it, we are building a new system that is meant to compose and serve curated, best-practice guidance to coding agents to get better consistency and quality. Using self-generated skills and knowledge feels like those experiments where people feed in an image and ask the LLM to give it back unchanged: after n cycles, it is invariably deeply mutated from the original.

Agentic coding is the future, but people have not yet adapted. We went from punch cards to assembly to FORTRAN to C to JavaScript; each step adding more abstractions. The next abstraction is Markdown, and I think teams that invest their time in writing and curating markdown will create better guardrails within which agents can operate without sacrificing quality, security, performance, maintainability, and other non-functional aspects of a software system.
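As a rough illustration of what I mean by curated guardrails (an invented snippet, not our actual guidance):

```markdown
# C#/.NET guardrails for coding agents (illustrative example)

## Async
- Do not add `ConfigureAwait(false)` in ASP.NET Core application code; reserve it for library code.
- Do not wrap already-async calls in `Task.Run` inside request handlers; await them directly.

## Review
- Call out any deviation from these rules explicitly in the PR description.
```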

wmeredith 9 hours ago | parent [-]

> Agentic coding is the future, but people have not yet adapted. We went from punch cards to assembly to FORTRAN to C to JavaScript; each step adding more abstractions.

I don't completely disagree (I've argued the same point myself). But one critical difference between the LLM layer and all of those others you listed is that LLMs are non-deterministic and all those other layers are deterministic. I'm not sure how that changes the dynamic, but surely it does.

CharlieDigital 8 hours ago | parent [-]

The LLM can be non-deterministic, but in the end, as long as we have compilers and integration tests, isn't it the same? You go from non-deterministic human interpretation of requirements and specs into a compiled, deterministic state machine. Now you have a non-deterministic coding agent doing the same thing and simply replacing the typing portion of that work.
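The same deterministic check applies regardless of whether a human or an agent typed the handler. A minimal xUnit sketch, assuming the hypothetical `/orders/{id}` endpoint from earlier, the `Microsoft.AspNetCore.Mvc.Testing` package, and an app that exposes its `Program` class to the test project:

```csharp
using System.Net;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Xunit;

public class OrdersApiTests : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly WebApplicationFactory<Program> _factory;

    public OrdersApiTests(WebApplicationFactory<Program> factory) => _factory = factory;

    [Fact]
    public async Task GetOrder_ReturnsSuccess_ForKnownId()
    {
        // The test doesn't care who (or what) wrote the handler;
        // the compiled app either satisfies the contract or it doesn't.
        var client = _factory.CreateClient();

        var response = await client.GetAsync("/orders/1");

        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
    }
}
```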

So long as you supply the agent a well-curated set of guidance, it should ultimately produce more consistent code with higher quality than if the same task were given to a team of random humans of varying skill and experience levels.

The key now is how much a team invests in writing the high-quality guidance in the first place.

dehsge 6 hours ago | parent | next [-]

Compilers can't verify non-trivial semantic properties of a program; that's Rice's theorem. It's one of the reasons we have observability/telemetry as well as tests.

CharlieDigital 5 hours ago | parent [-]

That's fine, but the same applies to human-written code, and human-written code will have even more variance by skill and experience.

paganel 8 hours ago | parent | prev [-]

The unspoken truth is that tests were never meant to cover all aspects of a piece of software running and doing its thing; that's where the "human mind(s)" that had actually built the system and brought it to life were supposed to come in and add the real layer of veracity. In other words, "if it walks like a duck and quacks like a duck" was never enough, no matter how much duck-related testing was in place.