XenophileJKO 3 days ago

I'm beginning to think most "advanced" programmers are just poor communicators.

It really comes down mostly to being able to concisely and eloquently define what you want done. It's also important to understand the model's default tendencies and biases so you know where to lean in a little. Occasionally you need to provide reference material.

The capabilities have grown dramatically in the last 6 months.

I have an advantage because I have been building LLM-powered products, so I know mechanically what they are and are not good at. For example: want it to wire up an API with 250+ endpoints with a test harness? You had better create (or have it create) a way to cluster the endpoints and audit coverage.
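A minimal sketch of what I mean, in Python; the endpoint list, the coverage set, and the cluster-by-resource heuristic are all invented for illustration:

    from collections import defaultdict

    # Hypothetical inputs: the full API surface, and the endpoints
    # the LLM-generated harness actually exercises.
    endpoints = [
        "/users/list", "/users/get", "/users/delete",
        "/orders/create", "/orders/cancel",
        "/billing/invoice",
    ]
    covered = {"/users/list", "/users/get", "/orders/create"}

    # Cluster endpoints by top-level resource so the audit happens
    # per cluster, not per endpoint; at 250+ endpoints you review
    # groups, not individual items.
    clusters = defaultdict(list)
    for ep in endpoints:
        clusters[ep.split("/")[1]].append(ep)

    # Report coverage per cluster so gaps are obvious at a glance.
    for name, eps in sorted(clusters.items()):
        hit = sum(ep in covered for ep in eps)
        missing = [e for e in eps if e not in covered]
        print(f"{name}: {hit}/{len(eps)} covered; missing: {missing}")

The point isn't this particular script; it's that you give the model (and yourself) a mechanical way to see what the harness missed.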

Generally, the failures I hear about from "advanced" programmers involve things like algorithmic complexity, concurrency, etc., and these models can do this stuff given the right motivation/context. You just need to understand what "assumptions" the model is making and know when you need to be explicit.

Actually, one thing most people don't understand: they try to say "Do (A)", "Don't do (B)", etc., defining granular behavior, which is a fundamentally brittle way to interact with the models.

Far more effective is defining the persona and motivation for the agent. This creates the baseline behavior profile for the model in that context.

Not "don't make race conditions", more like "You value and appreciate elegant concurrent code."

tjr 3 days ago | parent | next [-]

Some of the best programmers I know are very good at writing and/or speaking and teaching. I struggle to believe that “advanced programmers” are poor communicators.

XenophileJKO 2 days ago | parent [-]

Genuine reflection question: are these excellent communicators good at using LLMs to write code?

My supposition was: many programmers who say their programming domain is too advanced and LLMs don't work for their kind of code are simply bad at concisely describing what is required.

tjr 2 days ago | parent [-]

Most good programmers that I know personally work, as do I, in aerospace, where LLMs have not been adopted as quickly as some other fields, so I honestly couldn’t say.

interstice 3 days ago | parent | prev | next [-]

> I'm beginning to think most "advanced" programmers are just poor communicators.

This is an interesting take, considering that programmers are experts at translating what someone has asked for (however vaguely) into code.

I think what you're referring to is the transition from 'write code that does X', which is very concrete, to 'trick an AI into writing the code I would have written, only faster', which feels like work that's somewhere between an art form and asking a magic box to fix things over and over until it stops being broken (in obvious ways, at least).

Understandably, people who prefer engineered solutions don't like the idea of working this way very much.

XenophileJKO 2 days ago | parent [-]

When you technically oversee a team as a tech lead or an architect, you need communication skills:

1. Based on how the engineer just responded to my comment, what is the understanding gap?

2. How do I describe what I want in a concise and intuitive way?

3. How do I tell an engineer what is important in this system and what are the constraints?

4. What assumptions will an engineer likely make that will cause me to have to make a lot of corrections?

Etc. This is all human-to-human.

These skills are all transferable to working with an LLM.

So I guess if you are not used to technical leadership, you may not have used those skills as much.

interstice 2 days ago | parent [-]

The issue here is that LLMs are not human, so having a human mental model of how to communicate doesn't really work. If I ask my engineer to do X, I know all kinds of things about them: their coding style, their strengths and weaknesses, that they have some familiarity with the code they're working with, and that they won't drag the entirety of Stack Overflow's answers into the context we're working in. LLMs are nothing like this even when working with large amounts of context; they fail in extremely unpredictable ways from one prompt to the next. If you disagree, I'd be interested in what stack or prompting you are using that avoids this.

mjr00 3 days ago | parent | prev | next [-]

> It really comes mostly down to being able to concisely and eloquently define what you want done.

We had a method for this before LLMs; it was called "Haskell".

XenophileJKO 3 days ago | parent | prev [-]

One added note: this rigidity of instruction is a real problem that the models themselves will magnify, and you need to be aware of it. For example, if you ask a Claude-family model to write a sub-agent for you in Claude Code, 99% of the time it will define a rigid process with steps and conditions instead of creating a persona with motivations (and, if you need them, suggested courses of action).
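For illustration, a persona-first sub-agent might look something like this. Claude Code sub-agents are markdown files with YAML frontmatter (under .claude/agents/); the name and body here are made up:

    ---
    name: concurrency-reviewer
    description: Reviews changes that touch threads, locks, or async code.
    ---
    You are a senior engineer who values and appreciates elegant
    concurrent code. Explain the concurrency model you believe the
    author intended, point out where the code diverges from it, and
    suggest (rather than mandate) courses of action.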