Ballas 4 days ago

There is definitely a divide in users: those for whom it works and those for whom it doesn't. I suspect it comes down to what language and what tooling you use. People doing web-related or Python work seem to be doing much better than people doing embedded C or C++. Similarly, doing C++ in a popular framework like Qt also yields better results. When the system design is not pre-defined or rigid like in Qt, you get completely unmaintainable code as a result.

If you are writing code that is/can be "heavily borrowed" - things that have complete examples on GitHub - then an LLM is perfect.

hn_throwaway_99 4 days ago | parent | next [-]

While I agree that AI-assisted coding probably works much better for languages and use cases that have a lot more relevant training data, when I read comments from people who like LLM-assisted coding vs. those who don't, I strongly get the impression that the difference has a lot more to do with the programmers than their programming language.

The primary difference I see in people who get the most value from AI tools is that they expect it to make mistakes: they always carefully review the code and are fine with acting, in some cases, more like an editor than an author. They also seem to have a good sense of where AI can add a lot of value (implementing well-defined functions, writing tests, etc.) vs. where it tends to fall over (e.g. tasks that require large-scale context). Those who can't seem to get value from AI tools appear (at least to me) less tolerant of AI mistakes and less willing to iterate with AI agents; they're more apt to "throw the baby out with the bathwater", i.e. they fixate on the failure cases rather than simply limiting usage to the cases where AI does a better job.

To be clear, I'm not saying one is necessarily "better" than the other, just that the reason for the dichotomy has a lot more to do with the programmers than the domain. For me personally, while I get a lot of value in AI coding, I also find that I don't enjoy the "editing" aspect as much as the "authoring" aspect.

paufernandez 4 days ago | parent | next [-]

Yes, and each person has a different perception of what is "good enough". Perfectionists don't like AI code.

skydhash 4 days ago | parent [-]

My main reason is: Why should I try twice or more, when I can do it once and expand my knowledge? It's not like I have to produce something now.

sgc 4 days ago | parent [-]

If it takes 10x the time to do something, did you learn 10x as much? I don't mind repetition; I learned that way for many years and it still works for me. I recently made a short program using AI assist in a domain I was unfamiliar with. I iterated probably 4x. The iterations were based on learning about the domain, both from the AI results that worked and from researching the parts that seemed extraneous or wrong. It was fast, and I learned a lot. I would have learned maybe 2x more doing it all from scratch, but it would have taken at least 10x the time and effort to reach the result, because there was no good place to immerse myself. To me, that is still useful learning, and I can do it 5x over before I have spent the same amount of time.

It comes back to other people's comments about acceptance of the tooling. I don't mind the somewhat messy learning methodology - I can still wind up at a good result quickly, and learn. I don't mind that I have to sort of beat the AI into submission. It reminds me a bit of part lecture, part lab work. I enjoy working out where it failed and why.

skydhash 3 days ago | parent [-]

The fact is that most people skip learning about what works (learning is not cheap mentally). I’ve seen teammates just trying stuff (for days) until something kinda works instead of spending 30 minutes doing research. The fact is that LLMs are good at producing something that looks correct, which wastes the reviewer's time. It’s harder to review something than to write it from scratch.

Learning is also exponential: the more you do it, the faster it gets, because you may already have the foundations for that particular bit.

robenkleene 3 days ago | parent | prev | next [-]

> I strongly get the impression that the difference has a lot more to do with the programmers than their programming language.

The problem with this perspective is that anyone who works in more niche programming areas knows that the vast majority of programming discussion online isn't relevant to them. E.g., I've done macOS/iOS programming most of my career, and I now do work that's an order of magnitude more niche than that, and I commonly see programmers saying things like "you shouldn't use a debugger", which is a statement I can't imagine a macOS or iOS programmer making (don't get me wrong, they're probably out there; I've just never met or encountered one). So you just become used to most programming conversations being irrelevant to your work.

So of course the majority of AI conversations aren't relevant to your work either, because that's the expectation.

I think a lot of these conversations are two people with wildly different contexts trying to communicate, which is just pointless. Really, we just shouldn't be trying to participate in these conversations (the more niche programmers, that is), because there's just not enough shared context to make communication effective.

We just all happen to fall under this same umbrella of "programming", which gives the illusion of a shared context. It's true there are some things that are relevant across the field (it's all just variables, loops, and conditionals), but many of the other details aren't universal, so it's silly to talk about them without first understanding the full context around the other person's work.

hn_throwaway_99 3 days ago | parent [-]

> and I commonly see programmers saying things like "you shouldn't use a debugger"

Sorry, but who TF says that? This is actually not something I hear commonly, and if it were, I would just discount this person's opinion outright unless there were some other special context here. I do a lot of web programming (Node, Java, Python primarily) and if someone told me "you shouldn't use a debugger" in those domains I would question their competence.

robenkleene 3 days ago | parent [-]

E.g., https://news.ycombinator.com/item?id=39652860 (no specific comment, just the variety of opinions)

Here's a good specific example: https://news.ycombinator.com/item?id=26928696

felipeerias 4 days ago | parent | prev | next [-]

It might boil down to individual thinking styles, which would explain why people tend to talk past each other in these discussions.

jappgar 4 days ago | parent | prev [-]

No one likes to hear it, but it comes down to prompting skill. People who are terrible at communicating and delegating complex tasks will be terrible at prompting.

It's no secret that a lot of engineers are bad at this part of the job. They prefer to work alone (i.e. without AI) because they lack the ability to clearly and concisely describe problems and solutions.

JackFr 4 days ago | parent [-]

This. I work with juniors who have no idea what a spec is, and the idea of designing precisely what a component should do, especially in error cases, is foreign to them.

One key to good prompting is clear thinking.

motorest 4 days ago | parent | prev | next [-]

> If you are writing code that is/can be "heavily borrowed" - things that have complete examples on GitHub - then an LLM is perfect.

I agree with the general premise. There is, however, more to it than "heavily borrowed". The degree to which a codebase is organized, structured, and curated plays as big a role as what framework you use.

If your project is a huge pile of unmaintainable and buggy spaghetti code, then don't expect an LLM to do well. If your codebase is well structured, clear, and follows patterns systematically, then of course a glorified pattern-matching service will do far better at outputting acceptable results.

There is a reason why one of the most basic vibecoding guidelines is to include a prompt cycle to clean up and refactor code between introducing new features. LLMs fare much better when the project in their context is in line with their training. If you refactor your project to align it with what an LLM is trained to handle, it will do much better when prompted to fill in the gaps. This goes way beyond being "heavily borrowed".
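
A minimal sketch of what such a prompt cycle might look like, assuming a hypothetical send_prompt() helper around whatever coding agent you use (the helper and the prompt wording are illustrative, not any specific tool's API):

    # Hypothetical helper wrapping your coding agent of choice.
    def send_prompt(text: str) -> None:
        ...

    features = ["add OAuth login", "add CSV export"]
    for feature in features:
        # Ask for the feature itself, nudging toward existing patterns.
        send_prompt(f"Implement this feature: {feature}. "
                    "Follow the existing project structure and naming.")
        # Before the next feature, run a cleanup pass so the codebase
        # stays close to the kind of code the model was trained on.
        send_prompt("Refactor the code you just touched: remove duplication "
                    "and align it with the rest of the project's patterns.")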

I don't expect your average developer struggling with LLMs to acknowledge this fact, because then they would need to explain why their work is unintelligible to a system trained on vast volumes of code. Garbage in, garbage out. But who exactly created all the garbage going in?

pydry 4 days ago | parent | prev | next [-]

I suspect it comes down to how novel the code you are writing is and how tolerant of bugs you are.

People who use it to create a proof of concept of something that is in the LLM training set will have a wildly different experience to somebody writing novel production code.

And even there, the people who rave the most rave about how well it does boilerplate.

jstummbillig 4 days ago | parent | prev | next [-]

> When the system design is not pre-defined or rigid like

Why would an LLM be any worse at building from language fundamentals (which it knows, in ~every language)? Given how new this paradigm is, the far more obvious and likely explanation seems to be that LLM-powered coding requires somewhat different skills and strategies. The success of each user heavily depends on their learning rate.

PUSH_AX 4 days ago | parent | prev [-]

I think there are still lots of code "artisans" who are completely dogmatic about what code should look like. Once the tunnel vision goes and you realise the code just enables the business, it all of a sudden becomes a velocity godsend.

gtsop 4 days ago | parent | next [-]

Two years in, and we are waiting to see all you people (who are free of our tunnel vision) fly high with your velocity. I don't see anyone; am I doing something wrong?

Your words predict an explosion of unimaginable magnitude in new code and new businesses. Where is it? Nowhere.

Edit: And don't start about how you vibe-coded a SaaS service; show income numbers from paying customers (not buyouts).

hn_throwaway_99 4 days ago | parent | next [-]

There was this recent post about a Cloudflare OAuth client where the author checked in all the AI prompts, https://news.ycombinator.com/item?id=44159166.

The author of the library (kentonv) comments in the HN thread that it took him a few days to write the library with AI help, while he thinks it would have taken weeks or months to write manually.

Also, while it may be technically true we're "two years in", I don't think this is a fair assessment. I've been trying AI tools for a while, and the first time I felt "OK, now this is really starting to enhance my velocity" was with the release of Claude 4 in May of this year.

ath92 4 days ago | parent [-]

But that example is of writing a greenfield library that implements an extremely well-documented spec. While impressive, this isn’t what 99% of software engineering is. I’m generally a believer/user, but this is a poor example to point at and say “look, gains”.

PUSH_AX 4 days ago | parent | prev [-]

Do you have some magical insight into every codebase in existence? No? Ok then…

gtsop 3 days ago | parent | next [-]

No, I don't, but from your post it seems like you do. Show us; that is all I request.

PUSH_AX 3 days ago | parent [-]

I have insight into enough codebases to know it's a non-zero number. Your logic is bizarre: if you’ve never seen a kangaroo, would you just believe they don’t exist?

gtsop 3 days ago | parent [-]

Show us the numbers; stop wasting our time. NUMBERS.

Also, why would I ever believe kangaroos exist if I haven't seen any evidence of them? This is a fallacy. You are portraying healthy skepticism as stupid because you already know kangaroos exist.

PUSH_AX 3 days ago | parent [-]

What numbers? It doesn’t matter if it’s one or a million; it’s had a positive impact on the velocity of a non-zero number of projects. You wrote:

> Two years in and we are waiting to see all you people (who are free of our tunnel vision) fly high with your velocity. I don't see anyone, am I doing something wrong?

Yes is the answer. I could probably put it in front of your face and you’d reject it. You do you. All the best.

ceejayoz 4 days ago | parent | prev [-]

That’s hardly necessary.

Have we seen a noticeable increase in newly launched useful apps?

PUSH_AX 4 days ago | parent [-]

Why is useful a metric? This is about software delivery; what one person deems useful is subjective.

nobleach 3 days ago | parent | next [-]

Perhaps I'm misreading the person to whom you're replying, but usefulness, while subjective, isn't typically based on one person's opinion. If enough people agree on the usefulness of something, we as a collective call it "useful".

Perhaps we can take the example of a blender. There's enough need to blend/puree/chop food-like items that a large group of people agree on the usefulness of a blender. A salad shooter, while a novel idea, might not be seen as "useful".

Creating software that most folks wouldn't find useful still might be considered "neat" or "cool". But it may not be adding anything to the industry. The fact that someone shipped something quickly doesn't make it any better.

PUSH_AX 3 days ago | parent [-]

Ultimately, or at least in this discussion, we should decouple the software’s end use from the question of whether it satisfies the creator’s requirements and vision in a safe and robust way. How you get there and what happens after are two different problems.

darkwater 4 days ago | parent | prev [-]

> Why is useful a metric?

"and you realise the code just enables the business it all of a sudden becomes a velocity God send."

If a business is not useful, well, it will fail. So, so much autogenerated code for nothing.

PUSH_AX 4 days ago | parent | next [-]

I see, I guess every business I haven’t used personally, because it wasn’t useful to me, has failed…

Usefulness isn’t a good metric for this.

imiric 4 days ago | parent | prev [-]

It's not for nothing. When a profitable product can be created in a fraction of the time and effort previously required, the tool to create it will attract scammers and grifters like bees to honey. It doesn't matter if the "business" around it fails, if a new one can be created quickly and cheaply.

This is the same idea behind brands with random letters selling garbage physical products, only applied to software.

imiric 4 days ago | parent | prev | next [-]

The issue is not with how code looks. It's with what it does, and how it does it. You don't have to be an "artisan" to notice the issues moi2388 mentioned.

The actual difference is between people who care about the quality of the end result, and the experience of users of the software, and those who care about "shipping quickly" no matter the state of what they're producing.

This difference has always existed, but ML tools empower the latter group much more than the former. The inevitable outcome of this will be a stark decline of average software quality, and broad user dissatisfaction. While also making scammers and grifters much more productive, and their scams more lucrative.

Buttons840 4 days ago | parent | next [-]

Certainly billions of people's personal data will be leaked, and nobody will be held responsible.

airtonix 4 days ago | parent | prev [-]

[dead]

Buttons840 4 days ago | parent | prev | next [-]

I'm not a code "artisan", but I do believe companies should be financially responsible when they have security breaches.

cowl 4 days ago | parent | prev [-]

There are very good reasons that code should look a certain way; they come from years of experience and the fact that code is written once but read and modified much more.

When the first bugs come up, you see that the velocity was not a godsend, and you end up hiring one of the many "LLM code fixer" companies that are popping up like mushrooms.

PUSH_AX 4 days ago | parent [-]

You’re confusing yoloing code into prod with using AI to increase velocity while ensuring it functions and is safe.

habinero 3 days ago | parent [-]

No, they're not. It's critically important if you're part of an engineering team.

If everyone does their own thing, the codebase rapidly turns to mush and is unreadable.

And you need humans to be able to read it the moment the code actually matters and needs to stand up to adversaries. If you work with money or personal information, someone will want to steal that. Or you may have legal requirements you have to meet.

It matters.

PUSH_AX 3 days ago | parent [-]

You’ve made a sweeping statement there; there are swathes of teams in startups still trying to find product-market fit. Focusing on quality in those situations is folly, but that’s not even the point. My point is that you can ship quality to any standard using an LLM, even your standards. If you can’t, that’s a skill issue on your part.