zparky 3 days ago

It's been blowing my mind reading HN the past year or so and seeing so many comments from programmers who are excited not to have to write code. It's depressing.

IanCal 3 days ago | parent | next [-]

There are three takes that I think are not depressing:

* Being excited to be able to write the pieces of code they want, and not others. When you sit down to write code, you do not do everything from scratch, you lean on libraries, compilers, etc. Take the most annoying boilerplate bit of code you have to write now - would you be happy if a new language/framework popped up that eliminated it?

* Being excited to be able to solve more problems, because the code is at times a means to an end. I don't find writing CSS particularly fun, but I threw together a tool for making checklists for my kids in very little time using LLMs, and it handled all of the CSS for printing vs. on screen. I'm interested in solving an optimisation issue with testing right now, but not that interested in writing code to analyse test-case perf changes, so I got the latter written for me in very little time and it's great. It wasn't really a choice of me or the machine; I don't really have the time to focus on those tasks.

* Being excited that others can get the outcomes I've been able to get for at least some problems, without having to learn how to code.

As is tradition, to torture a car analogy, I could be excited for a car that autonomously drives me to the shops despite loving racing rally cars.

wakawaka28 3 days ago | parent | next [-]

Those are all good outcomes, up to a point. But if this stuff works TOO well, most or maybe all of us will have to start looking at other career options. Whatever autonomy you think you have in deciding what the AI does, that can ultimately be trained into it as well, and it will be, the more people use it.

I personally don't like it when others who don't know how to code are able to get results using AI. I spent many years of my life and a small fortune learning scarce skills that everyone swore would be the last to ever be automated. Now, in a cruel twist of fate, those skills are being automated and there is seemingly no worthwhile job that can't be automated given enough investment. I am hopeful because the AI still has a long way to go, but even with the improvements it currently has, it might ultimately destroy the tech industry. I'm hoping that Say's Law proves true in this case, but even before the AI I was skeptical that we would find work for all the people trying to get into the software industry.

badsectoracula 2 days ago | parent [-]

> I personally don't like it when others who don't know how to code are able to get results using AI.

Sounds like for many programmers AI is the new Visual Basic 6 :-P

wakawaka28 2 days ago | parent [-]

It's worse than that lol. At least with VB 6 and similar scripting languages, there is still code getting written. Now we have complete morons who think they're software developers because they got some AI to shit out an app for them. This is going to affect how people view the profession of software engineering all around.

ares623 3 days ago | parent | prev [-]

Except in this case you won't be able to afford going to the shops anymore, if the shops are even still around. What use is an autonomous car if you can't use it?

zahlman 3 days ago | parent | prev | next [-]

I suspect, rather strongly, that what really specifically wears programmers down is boilerplate.

AI is addressing that problem extremely well, but by putting up with the boilerplate for us rather than actually solving it.

I don't want the boilerplate to be necessary in the first place.

projektfu 3 days ago | parent | next [-]

Or, for me, yak shaving. I start a project with enthusiasm and then 8 hours later I'm debugging an nginx config file or something rather than working on the core project. AI gets a lot of that out of the way if you let it, and you can at least let it grind on that stuff while you think about other things.

zahlman 3 days ago | parent [-]

For me, the yak shaving is the part where I get the next project idea...

seanmcdirmid 3 days ago | parent | prev | next [-]

It is fun. It takes some skill to organize a pipeline to generate code that would be tedious to write and maintain otherwise. You are still writing stuff to instruct the computer, but now you have something taking natural language instructions and generating code and code test assets.
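
To give a rough idea, here's a minimal sketch of the kind of pipeline I mean (Python, using the OpenAI client; the model name, prompts, and file names are just placeholders, not a claim about the one right setup):

    # Rough sketch: natural-language spec -> code -> tests, as two LLM calls.
    # Assumes the OpenAI Python client; model name, prompts, and file names
    # are placeholders.
    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def generate(prompt: str) -> str:
        """Single LLM call; returns the text of the first choice."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    spec = "A slugify(title) function: lowercase, strip punctuation, join words with '-'."

    # Step 1: natural-language instructions -> implementation
    code = generate(f"Write a Python module implementing this spec. Code only:\n{spec}")

    # Step 2: implementation -> test assets
    tests = generate(f"Write pytest tests for this module. Code only:\n{code}")

    Path("slugify.py").write_text(code)
    Path("test_slugify.py").write_text(tests)

The point isn't the two calls; it's that the pipeline itself is something you write, version, and improve, like any other code.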

There might have been people who were happy writing assembly and got bummed out when compilers came along. This AI stuff just feels like a new way to write code.

johnnyaardvark a day ago | parent [-]

I've heard this take a few times, but I'm not convinced natural language is the new way to write code (beyond small projects).

Inevitably the AI will write things in ways you don't intend. So now you have to prompt it to change them and hope it gets it right. Oh, it didn't. Prompt it again and maybe it will work this time. And so on.

It's so good at a lot of things, but in my experience having it write out whole features or apps seems great at first and then turns into a time sink of praying it will figure things out on the next prompt.

Maybe it's a skill issue for me, but I've gotten the most efficiency out of having it review code, pairing with it on ideas and problems, etc., rather than having it write the majority of the code.

seanmcdirmid 19 hours ago | parent [-]

Until you've actually done it yourself, it will probably sound like vaporware. The only question is how much energy you are willing to spend, both actual energy (because you are making more calls to the AI) and, yes, the effort of setting up your development pipeline with N LLM calls.

It is really like micro-managing a very junior, very forgetful dev who can read really fast (and who mostly remembers what they read, for a few minutes at least; they actually know more about something than you do if they have a manual for it on hand). Of course, if it's just writing the code once, you don't bother with the junior dev and write the code yourself. But if you want long-term efficiency, you put the time into your team (and the team here is the AI).

youoy 2 days ago | parent | prev | next [-]

I think the main misunderstanding is that we used to think programming = coding, but this is not the case. LLMs allow people to use natural language as a programming language, but you still need to program. As with every programming language, it requires you to learn how to use it.

Not everyone needs to be excited about LLMs, in the same way that C++ developers don't need to be excited about Python.

xyzwave 2 days ago | parent | prev | next [-]

I hate writing code, but love debugging. LLMs have been a godsend for banging out boilerplate and getting things 95% of the way there. Now I spend most of my time on the hard stuff (debugging, refactoring), while building things that would have taken weeks in days. It’s honestly made the act of building software more enjoyable and rewarding.

xnx 3 days ago | parent | prev | next [-]

Some carpenters like to make cabinets. Some just like to hammer nails.

solumunus 2 days ago | parent | prev | next [-]

Do you really think the creative or intellectual element of programming is the tapping of keys? I don't understand this at all. I enjoy solving problems and creating elegant solutions. I'm spending less time tapping keys and more time engineering solutions. If tapping keys is the most fun part for you, then that's fine! But let's not pretend THAT is the critical part of software engineering.

Not to mention, it's not all or nothing. The options aren't writing code or not writing code. You can selectively not write any boring code and write 100% of the bits you find interesting or care about. If an LLM is failing to deliver what is in my mind's eye, then I simply step in and make sure the code is quality... I'm doing more and better software engineering, that's why I'm happy, that's the bit that scratches my itch.

DevDesmond 3 days ago | parent | prev [-]

Perhaps consider that I still think coding by prompting is just another layer of abstraction on top of coding.

In my mind, writing the prompt that generates the code is somewhat analogous to writing the code that generates the assembly (albeit more stochastically, the way psychology research might be analogous to biochemistry research).

Different experts are still required at different layers of abstraction, though. I don't find it depressing when people show a preference for working at different levels of complexity or tooling, nor when they show excitement about the emergence of new tools that can enable their creativity to build, automate, and research. I think scorn in any direction is vapid.

layer8 3 days ago | parent [-]

One important reason people like to write code is that it has well-defined semantics, allowing one to reason about it and predict its outcome with high precision. Likewise for changes that one makes to code. LLM prompting is the diametrical opposite of that.

youoy 2 days ago | parent | next [-]

It completely depends on the way you prompt the model. Nothing prevents you from telling it exactly what you want, down to specifying the files and lines to focus on. In my experience, anything less than that is a recipe for failure in sufficiently complex projects.

layer8 2 days ago | parent [-]

Several comments can be made here: (1) You only control what the LLM generates to the extent that you specify precisely what it should generate. You cannot reason about what it will generate for what you don't specify. (2) Even for what you specify precisely, you don't actually have full control, because the LLM is not reliable in a way you can reason about. (3) The more you (have to) specify precisely what it should generate, the less benefit using the LLM has. After all, regular coding is just specifying everything precisely.

The upshot is, you have to review everything the LLM generates, because you can't predict the qualities or failures of its output. (You cannot reason in advance about what qualities and failures it definitely will or will not exhibit.) This is different from, say, using a compiler, whose output you generally don't have to review, and whose input-to-output relation you can reason about with precision.

Note: I'm not saying that using an LLM for coding is not workable. I'm saying that it lacks what people generally like about regular coding, namely the ability to reason with absolute precision about the relation between the input and the behavior of the output.

yunwal 3 days ago | parent | prev [-]

You’re still allowed to reason about the generated output. If it’s not what you want you can even reject it and write it yourself!

palmotea 2 days ago | parent [-]

>> One important reason people like to write code is that it has well-defined semantics, allowing to reason about it and predict its outcome with high precision. Likewise for changes that one makes to code. LLM prompting is the diametrical opposite of that.

> You’re still allowed to reason about the generated output. If it’s not what you want you can even reject it and write it yourself!

You missed the key point. You can't predict an LLM's "outcome with high precision."

Looking at the output and evaluating it after the fact (like you describe) is an entirely different thing.

yunwal 2 days ago | parent [-]

For many things you can, though. If I ask an LLM to create an alert in Terraform that triggers when 10% of requests fail over a 5-minute period and sends an email to some address, with the HTML in the email looking a certain way, it will do exactly the same thing as if I had looked at the documentation and figured out all of the fields one by one. That's just how it works when there's one obvious way to do things. I know software devs love to romanticize our jobs, but I don't know a single dev who writes 90% meaningful code. There's always boilerplate. There's always fussing with syntax you're not quite familiar with. And I'm happy to have an AI do it.
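
To make that concrete, here's roughly the same alert sketched with boto3 instead of Terraform (ignoring the HTML-email part; the metric, namespace, and SNS topic ARN are placeholders, and it assumes the error-rate metric already exists and the topic is subscribed to the email address):

    # Roughly the alert described above, sketched with boto3 instead of Terraform.
    # Assumes a pre-existing "ErrorRatePercent" custom metric and an SNS topic
    # already subscribed to the email address -- both placeholders.
    import boto3

    cloudwatch = boto3.client("cloudwatch")

    cloudwatch.put_metric_alarm(
        AlarmName="high-error-rate",
        Namespace="MyApp",                    # placeholder namespace
        MetricName="ErrorRatePercent",        # failed requests / total requests * 100
        Statistic="Average",
        Period=300,                           # 5-minute window
        EvaluationPeriods=1,
        Threshold=10.0,                       # fire when more than 10% of requests fail
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:alerts-email"],  # placeholder ARN
    )

Every one of those fields is exactly the kind of thing you'd otherwise look up one by one.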

palmotea 2 days ago | parent [-]

I think you're still missing the point. This cousin comment does a decent job of explaining it: https://news.ycombinator.com/item?id=46231510

yunwal 2 days ago | parent [-]

I don’t think I am. To me, it doesn’t have to be precise. The code is precise and I am precise. If it gets me what I want most of the time, I’m ok with having to catch it.