noduerme 6 days ago
Good programmers working hand in glove with good companies do much more than this. We question the business logic itself and suggest non-technical, operational solutions to user issues before we take a hammer to the code. And, as someone else said, we consider the root causes of an issue, whether those lie in code logic, in business ops, or in some intersection of the two.

When I save twenty hours of a client's money and my own time, by telling them that a new software feature they want would be unnecessary if they changed the order of questions their employees ask on the phone, I've done my job well. By the same token, if I'm bored and find weird stuff in the database indicating employees tried to perform the same action twice or something, that is something that can be solved with more backstops and/or a better UI.

Coding business logic is not a one-way street. Understanding the root causes and context of issues in the code itself is very hard and requires you to have a mental model of both domains. Going further and actually requesting changes to the business logic that would help clean up the code requires a flexible employer, but also an ability to think at a higher level than simply doing some CRUD tasks.

The fact that I wouldn't trust any LLM to touch any of my code in those real-world cases makes me think that most people who are touting them are not, in fact, writing code at the same level or doing the same job I do, or understanding it very well.
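A minimal sketch of the kind of backstop I mean for the duplicate-action case, in Python with an in-memory SQLite table; the schema, column names, and helper are made up for illustration:

    import sqlite3

    # Hypothetical schema: each employee may record a given action on a
    # given order only once. The UNIQUE constraint is the database-level
    # backstop; the except branch is the application-level one that a
    # better UI can surface.
    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE actions (
            employee_id INTEGER NOT NULL,
            order_id    INTEGER NOT NULL,
            action      TEXT    NOT NULL,
            UNIQUE (employee_id, order_id, action)
        )
    """)

    def record_action(employee_id, order_id, action):
        """Insert the action once; report a duplicate instead of storing it."""
        try:
            with conn:  # commits on success, rolls back on error
                conn.execute(
                    "INSERT INTO actions VALUES (?, ?, ?)",
                    (employee_id, order_id, action),
                )
            return True
        except sqlite3.IntegrityError:
            return False  # duplicate attempt caught by the backstop

    print(record_action(7, 1001, "refund"))  # True: first attempt succeeds
    print(record_action(7, 1001, "refund"))  # False: the retry is caught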
shinycode 6 days ago
True, and LLMs have no incentive to avoid writing code. It's even worse: they are effectively "paid" by the amount of code they generate, so the default behavior is to avoid asking questions that would refine the need. They thrive on blurry, imprecise prompts because either way they'll generate thousands of lines of code, regardless of relevance. Many people have confirmed this in their experience. I've never seen an LLM step back, ask questions, and only then code, or decline to code at all. It's by design a choice to generate the most stuff, because of money. So right now an LLM and the developer you describe here are two very different things, and an LLM will, by design, never replace you.
danielrico 6 days ago
> When I save twenty hours of a client's money and my own time, by telling them that a new software feature they want would be unnecessary if they changed the order of questions their employees ask on the phone, I've done my job well.

I like to explain my work as "do whatever is needed in order to do as little work as possible": be it by improving logs, improving the architecture, pushing responsibilities around, or rejecting some features.
1dom 6 days ago
I think this is a fair and valuable comment. The only part I think could be more nuanced is:

> The fact that I wouldn't trust any LLM to touch any of my code in those real-world cases makes me think that most people who are touting them are not, in fact, writing code at the same level or doing the same job I do, or understanding it very well.

I agree with this specifically for agentic LLM use. However, I've personally increased my coding speed and quality with LLMs for sure, using purely local models as a really fancy autocomplete for one or two lines at a time.

The rest of your comment is good, but the last paragraph reads to me like someone inexperienced with LLMs looking for excuses to justify not being productive with them, when others clearly are. Sorry.
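For illustration, a rough sketch of that "fancy autocomplete" setup, assuming a locally hosted model behind an OpenAI-compatible completions endpoint (e.g. Ollama or llama.cpp); the URL, model name, and helper function are placeholders, not any particular product's interface:

    # Requires: pip install openai, plus a local server exposing an
    # OpenAI-compatible API at the (illustrative) URL below.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

    def complete_lines(code_before_cursor, max_lines=2):
        """Ask the local model for a short continuation; keep 1-2 lines."""
        response = client.completions.create(
            model="qwen2.5-coder",   # whichever local code model is loaded
            prompt=code_before_cursor,
            max_tokens=64,
            temperature=0.2,
            stop=["\n\n"],           # stop at the first blank line
        )
        lines = response.choices[0].text.splitlines()
        return "\n".join(lines[:max_lines])

    print(complete_lines("def fibonacci(n: int) -> int:\n    "))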
jlcummings 6 days ago
Being effective with LLM agents requires not just the ability to code, or to appreciate nuance in libraries and business rules, but also a proclivity for pedantry. Dad-splain everything, always. And have boundless contextual awareness: dig a rabbit hole, but beware that you are in your own hole. You can escape it, but you have to be purposefully aware of what guardrails and ladders you give the agent to evoke action. The better and more explicit the guardrails you provide, the more likely the agent is to do what is expected and honor the scope and context you establish. If you tell it to use silverware to eat, don't assume it will use it appropriately or idiomatically; it will try eating soup with a fork.

Lastly, don't be afraid of commits and checkpoints, or of rejecting/rolling back proposed changes and restating or resetting the context. The agent might be the leading actor, but you are the director. When a scene doesn't play out, try it again after clarifying, changing the camera perspective, lighting, or lines, or cut/replace the scene entirely.
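A minimal sketch of that checkpoint/rollback loop, assuming you're inside a git repository; the helper functions are illustrative, not a real tool:

    import subprocess

    def git(*args):
        """Run a git command and return its trimmed stdout."""
        result = subprocess.run(
            ["git", *args], check=True, capture_output=True, text=True
        )
        return result.stdout.strip()

    def checkpoint(label):
        """Commit the whole working tree and return the commit hash."""
        git("add", "-A")
        git("commit", "--allow-empty", "-m", f"checkpoint: {label}")
        return git("rev-parse", "HEAD")

    def rollback(commit_hash):
        """Reject the agent's changes: hard-reset back to the checkpoint."""
        git("reset", "--hard", commit_hash)

    before = checkpoint("before agent run")
    # ... let the agent propose its changes here ...
    # If the scene doesn't play out, reset and restate the context:
    rollback(before)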
gxs 6 days ago
To be honest, you sound super defensive, not just in the classic way of a programmer whose turf is being invaded, but also in the classic way of people who are reluctant to accept a new technology.

This sentiment, that a human will always be needed, that there's no replacement for the human touch, that the stakes are too high, is as old as time.

You just said, quite literally, that people leveraging LLMs to code are not doing it at your level; that borders on hubris.

The fact of the matter is that, like most tools, you get out of AI what you put into it. I know a lot of engineers, and this pride, this reluctance to accept the help, is super common. The best engineers, on the other hand, are leveraging this just fine; it's just another tool that speeds things up for them.
danielbln 6 days ago
I'm not sure what any of what you just wrote has to do with LLMs. If you use LLMs to rubber-duck or to write tests/code, then all of the things you mentioned should still apply. That last logical leap, from "I wouldn't trust an LLM to touch my code" to "people who do aren't at the same level as I am," is a fallacy.