XenophileJKO 3 days ago
I'm beginning to think most "advanced" programmers are just poor communicators. It mostly comes down to being able to concisely and eloquently define what you want done. It's also important to understand the default tendencies and biases of the model so you know where to lean in a little. Occasionally you need to provide reference material.

The capabilities have grown dramatically in the last 6 months. I have an advantage because I've been building LLM-powered products, so I know mechanically what they are and are not good at. For example: want it to wire up an API with 250+ endpoints with a harness? You'd better create (or have it create) a way to cluster the endpoints and audit coverage.

Generally the failures I hear about from "advanced" programmers are things like algorithmic complexity, concurrency, etc., and these models can do this stuff given the right motivation/context. You just need to understand what "assumptions" the model is making and know when you need to be explicit.

One thing most people don't understand: they try to say "Do (A), don't do (B)", etc., defining granular behavior, which is fundamentally a brittle way to interact with the models. Far more effective is defining the persona and motivation for the agent. That creates the baseline behavior profile for the model in that context. Not "don't make race conditions", more like "You value and appreciate elegant concurrent code."
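To make that concrete, here's a minimal sketch of the two prompting styles using the Anthropic Python SDK. The model id, the prompt wording, and the review task are my own illustrative assumptions, not something from the thread:

    # Hedged sketch: granular do/don't rules vs. a persona as system prompt.
    # Assumes the Anthropic Python SDK is installed and ANTHROPIC_API_KEY is
    # set; the model id and prompt wording here are illustrative only.
    from anthropic import Anthropic

    client = Anthropic()

    # Brittle style: enumerated rules the model has to pattern-match against.
    RULES_PROMPT = (
        "Do (A): guard all shared state with a mutex.\n"
        "Don't do (B): don't make race conditions.\n"
        "Don't do (C): don't hold locks across await points.\n"
    )

    # Persona style: a baseline behavior profile the model generalizes from.
    PERSONA_PROMPT = (
        "You are a senior systems engineer. You value and appreciate elegant "
        "concurrent code, and you reason about ownership and shared state "
        "before reaching for locks."
    )

    def review(diff: str, system_prompt: str) -> str:
        """Ask the model to review a diff under the given system prompt."""
        response = client.messages.create(
            model="claude-sonnet-4-20250514",  # substitute whatever model you use
            max_tokens=1024,
            system=system_prompt,
            messages=[{"role": "user", "content": f"Review this diff:\n{diff}"}],
        )
        return response.content[0].text

The persona version tends to generalize to situations the rule list never anticipated, which is the point: you're setting a behavior profile, not enumerating cases.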
tjr 3 days ago
Some of the best programmers I know are very good at writing and/or speaking and teaching. I struggle to believe that “advanced programmers” are poor communicators. | |||||||||||||||||
interstice 3 days ago
> I'm beginning to think most "advanced" programmers are just poor communicators.

This is an interesting take, considering that programmers are experts at translating what someone has asked for (however vaguely) into code. I think what you're referring to is the transition from 'write code that does X', which is very concrete, to 'trick an AI into writing the code I would have written, only faster', which feels like work somewhere between an art form and asking a magic box to fix things over and over until it stops being broken (in obvious ways, at least). Understandably, people who prefer engineered solutions don't like the idea of working this way very much.
mjr00 3 days ago
> It mostly comes down to being able to concisely and eloquently define what you want done.

We had a method for this before LLMs; it was called "Haskell".
XenophileJKO 3 days ago
One added note: this rigidity of instruction is a real problem that the models themselves will magnify, and you need to be aware of it. For example, if you ask a Claude-family model to write a sub-agent for you in Claude Code, 99% of the time it will define a rigid process with steps and conditions instead of creating a persona with motivations (and, if you need it, suggested courses of action).
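For what it's worth, here's a sketch of what the persona-first version of such a sub-agent could look like. Claude Code loads project sub-agents from markdown files under .claude/agents/ with YAML frontmatter; treat the exact path and fields as assumptions to check against your version's docs, and the persona text as purely illustrative:

    # Hedged sketch: writing a persona-first Claude Code sub-agent definition.
    # The .claude/agents/ path and the name/description frontmatter fields are
    # assumptions about Claude Code's sub-agent format; verify against your
    # version's docs. The persona wording is illustrative.
    from pathlib import Path

    AGENT = """\
    ---
    name: concurrency-reviewer
    description: Reviews concurrent code for correctness and elegance.
    ---
    You are a senior engineer who values and appreciates elegant concurrent
    code. You care about ownership, lifetimes, and minimal shared state.
    When you spot a risk, suggest a course of action rather than prescribing
    a rigid checklist of steps.
    """

    path = Path(".claude/agents/concurrency-reviewer.md")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(AGENT)

Note the definition is almost entirely persona and values; the only "process" language is a preference for suggesting courses of action, which is exactly the shape the models tend not to produce on their own.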