noduerme 6 days ago

Good programmers working hand in glove with good companies do much more than this. We question the business logic itself and suggest non-technical, operational solutions to user issues before we take a hammer to the code.

Also, as someone else said, consider the root causes of an issue, whether those are in code logic or business ops or some intersection between the two.

When I save twenty hours of a client's money and my own time, by telling them that a new software feature they want would be unnecessary if they changed the order of questions their employees ask on the phone, I've done my job well.

By the same token, if I'm bored and find weird stuff in the database indicating employees tried to perform the same action twice, that's something that can be solved with more backstops and/or a better UI.
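
As a made-up illustration of that kind of backstop (table and column names invented): a UNIQUE constraint on an idempotency key lets the database itself refuse the second attempt, even when the UI happily lets someone click twice.

    import sqlite3

    # Sketch of a database-level backstop against duplicate actions: a UNIQUE
    # constraint on an idempotency key rejects the second attempt even if the
    # UI lets an employee submit the same form twice.
    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE refunds (
            id INTEGER PRIMARY KEY,
            order_id INTEGER NOT NULL,
            idempotency_key TEXT NOT NULL UNIQUE
        )
    """)

    def issue_refund(order_id: int, idempotency_key: str) -> bool:
        """Returns True if the refund was recorded, False if it was a duplicate."""
        try:
            conn.execute(
                "INSERT INTO refunds (order_id, idempotency_key) VALUES (?, ?)",
                (order_id, idempotency_key),
            )
            return True
        except sqlite3.IntegrityError:
            return False  # the double-click case, caught by the backstop

    print(issue_refund(42, "refund-42"))  # True
    print(issue_refund(42, "refund-42"))  # False: the duplicate is refused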

Coding business logic is not a one-way street. Understanding the root causes and context of issues in the code itself is very hard and requires you to have a mental model of both domains. Going further and actually requesting changes to the business logic which would help clean up the code requires a flexible employer, but also an ability to think on a higher order than simply doing some CRUD tasks.

The fact that I wouldn't trust any LLM to touch any of my code in those real-world cases makes me think that most people who are touting them are not, in fact, writing code at the same level or doing the same job I do, or don't understand it very well.

shinycode 6 days ago | parent | next [-]

True, and LLMs have no incentive to avoid writing code. It's even worse: they are "paid" by the amount of code they generate, so the default behavior is to avoid asking questions that would refine the need. They thrive on blurry, imprecise prompts because either way they'll generate thousands of lines of code, regardless of pertinence. Many people have confirmed this from their own experience. I've never seen an LLM step back, ask questions, and only then code, or decline to code at all. Generating the most stuff is a design choice, because of money.

So right now an LLM and the developer you describe here are two very different things, and an LLM will, by design, never replace you.

danielrico 6 days ago | parent | prev | next [-]

> When I save twenty hours of a client's money and my own time, by telling them that a new software feature they want would be unnecessary if they changed the order of questions their employees ask on the phone, I've done my job well.

I like to explain my work as "do whatever is needed to do as little work as possible".

Be it by improving logs, improving the architecture, pushing responsibilities around, or rejecting some features.

withinboredom 6 days ago | parent [-]

"The best programmers are lazy, or more accurately, they work hard to be as lazy as possible." -- CS101, first day

K0balt 6 days ago | parent [-]

The most clever lines of code are the ones you don't write. Often this is a matter of properly defining the problem in terms of data structure. LLMs are not at all good at seeing that a data structure is inside out, and that by turning it right side out we can fix half the problems.
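
A toy illustration of "inside out" (all names invented): permissions keyed by role, when nearly every question the code asks is "may this user do X?", so every check is a scan. Inverting the structure makes the checks one-liners.

    # Hypothetical "inside out" structure: keyed by role, so every
    # permission check has to scan a list.
    perms_by_role = {
        "admin": ["read", "write", "delete"],
        "trainee": ["read"],
    }

    def can(role: str, action: str) -> bool:
        return action in perms_by_role.get(role, [])  # O(n) scan per check

    # Turned right side out: keyed by the question we actually ask.
    # Checks, grants, and revocations all become set operations.
    allowed = {(role, action)
               for role, actions in perms_by_role.items()
               for action in actions}

    def can_fast(role: str, action: str) -> bool:
        return (role, action) in allowed  # O(1) lookup

    print(can("trainee", "write"), can_fast("trainee", "write"))  # False False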

More significantly, though, OP seems right on to me. The basic functionality of LLMs is handy for a code-writing assistant, but it does not replace a software engineer, and is not ever likely to, no matter how many janky accessories we bolt on. LLMs are fundamentally semantic pattern-matching engines, and are only problem solvers in the context of problems that are either explicitly or implicitly defined and solved in their training data. They will always require supervision because there is fundamentally no difference between a useful LLM output and a “hallucination” except the utility rating that a human judge applies to the output.

LLMs are good at solving fully defined, fully solved problems. A lot of work falls into that category, but some does not.

noduerme 5 days ago | parent [-]

>> The most clever lines of code are the ones you don’t write.

Just to add, I think there are three things that LLMs don't address here, but maybe it's because they're not being asked the broader questions:

1. What are some reasonable out-of-band alternatives to coding the thing I'm being asked to code?

2. What kind of future modifications might the client want, and how can we ensure this mod will accommodate those without creating too many new constraints, but also without over-preparing for something that might not happen?

3. What is the client missing that we're also missing? This could be as simple as forgetting that under some circumstances the same icon is being used in the UI to mean something else. Or that an error box might obscure the important thing that just triggered the error. Or that six years ago we created a special user level called "-1", a reserved level for employees in training, and users on that level can't write to certain tables; so we need to ask whether we want them to be able to train on the new feature, and if so, whether there are exceptions that would open the permissions on the DB but restrict some operations in the middleware (see the sketch below).

"What are we missing" is 95% of my job, and unit tests are useless if you don't know all the potential valid or invalid inputs.

1dom 6 days ago | parent | prev | next [-]

I think this is a fair and valuable comment. The only part I think could be more nuanced is:

> The fact that I wouldn't trust any LLM to touch any of my code in those real-world cases makes me think that most people who are touting them are not, in fact, writing code at the same level or doing the same job I do, or don't understand it very well.

I agree with this specifically for agentic LLM use. However, I've personally increased my coding speed and quality with LLMs, using purely local models as a really fancy autocomplete for one or two lines at a time.

The rest of your comment is good, but the last paragraph reads to me like someone inexperienced with LLMs looking for excuses to justify not being productive with them, when others clearly are. Sorry.

jlcummings 6 days ago | parent | prev | next [-]

Being effective with LLM agents requires not just the ability to code or to appreciate nuance in libraries and business rules, but also the ability and proclivity for pedantry. Dad-splain everything, always.

And to have boundless contextual awareness… dig a rabbit hole, but beware that you are in your own hole. At this point you can escape the hole but you have to be purposefully aware of what guardrails and ladders you give the agent to evoke action.

The better and more explicit the guardrails you provide, the more likely the agent is to do what is expected and to honor the scope and context you establish. If you tell it to use silverware to eat, rest assured that doesn't mean it will use it appropriately or idiomatically: it will try eating soup with a fork.

Lastly, don't be afraid of commits and checkpoints, or of rejecting/rolling back proposed changes and restating or resetting the context. The agent might be the leading actor, but you are the director. When a scene doesn't play out, try it again after clarifying, or after changing the camera perspective, lighting, or lines, or cut/replace the scene entirely.

cmsj 6 days ago | parent | next [-]

I find that level of pedantry and hand-holding to be extremely tedious, and I frequently find myself thinking: fuck it, I'll write it myself and get what I want the first time.

skydhash 6 days ago | parent [-]

This. That's why every programmer strives for good architecture and writes tests. When you have that, and all your bug fixes and feature requests are only a small number of lines, that is pure bliss, even if it requires hours of reading and designing. Anything is better than dumping a lot of lines.

dingi 4 days ago | parent | prev [-]

Why would anyone bother at this point though? Tedious handholding and extra effort for code reviews. Just write the damn thing yourself.

etherealG 19 hours ago | parent [-]

Because once you figure out the correct way to handhold, you can automate it and the tediousness goes away.

It's only tedious once per codebase or task; then you find the less tedious recipe and you're done.

You can even get others to do the tedious part at their layer of abstraction so that you don't have to anymore. Same as compilers, CPU design, or any other part of the stack lower than the one you're using.
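
One hypothetical shape of that automation (the file name and layout are invented): capture the hard-won handholding once per codebase as a saved preamble, then prepend it to every task so the tedium is paid exactly once.

    from pathlib import Path

    # Invented convention: a per-repo "recipe" file holding the guardrails,
    # conventions, and dos/don'ts you discovered the tedious way.
    RECIPE = Path("llm_recipe.md")

    def build_prompt(task: str) -> str:
        """Prepend the saved recipe to every task sent to the agent."""
        preamble = RECIPE.read_text() if RECIPE.exists() else ""
        return f"{preamble}\n\nTask:\n{task}"

    # Every future request reuses the same constraints for free.
    print(build_prompt("Add pagination to the /orders endpoint."))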

gxs 6 days ago | parent | prev | next [-]

To be honest, you sound super defensive, not just in the classic way of a programmer whose turf is being invaded, but also in the classic way of people who are reluctant to accept a new technology.

This sentiment, that a human will always be needed, that there's no replacement for the human touch, that the stakes are too high, is as old as time.

You just said, quite literally, that people leveraging LLMs to code are not doing it at your level. That borders on hubris.

The fact of the matter is that, like most tools, you get out of AI what you put into it.

I know a lot of engineers, and this pride, this reluctance to accept the help, is super common.

The best engineers, on the other hand, are leveraging this just fine; it's just another tool for them that speeds things up.

geraldwhen 6 days ago | parent [-]

Hubris? The offshore team submitting 2,000-line nonsense PRs from AI is reality.

We’re living it. We see it every day. The business leaders cannot be convinced that this isn’t making less skilled developers more productive.

gibbitz 3 days ago | parent | next [-]

Worth noting that there are business leaders who see high LOC and commit counts as metrics of good programmers. To them, the 2,000-LOC commits from offshore are proof that it's working. Sadly, the proof that it's not will show in their sales and customer satisfaction, if they keep producing the product long enough. For too long the business model in tech has been to get bought out, so this often doesn't matter to the business.

danielbln 6 days ago | parent | prev [-]

I'm not sure what any of what you just wrote has to do with LLMs. If you use LLMs to rubber-duck or to write tests and code, then everything you mentioned still applies. That last logical leap, that because _you_ wouldn't trust an LLM to touch your code, people who do must not be operating at your level, is a fallacy.