palmotea 2 days ago

>> One important reason people like to write code is that it has well-defined semantics, allowing to reason about it and predict its outcome with high precision. Likewise for changes that one makes to code. LLM prompting is the diametrical opposite of that.

> You’re still allowed to reason about the generated output. If it’s not what you want you can even reject it and write it yourself!

You missed the key point. You can't predict an LLM's "outcome with high precision."

Looking at the output and evaluating it after the fact (like you describe) is an entirely different thing.

yunwal 2 days ago

For many things you can, though. If I ask an LLM to create an alert in Terraform that triggers when 10% of requests fail over a 5-minute period and sends an email to some address, with the HTML in the email looking a certain way, it will do exactly what I'd have done by reading the documentation and figuring out all of the fields one by one. That's just how it works when there's one obvious way to do things. I know software devs love to romanticize our jobs, but I don't know a single dev whose code is 90% meaningful. There's always boilerplate. There's always fussing with syntax you're not quite familiar with. And I'm happy to have an AI do it.
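For context, the kind of boilerplate being described looks roughly like this. This is only a sketch, assuming AWS (CloudWatch metric math over ALB metrics, plus an SNS email subscription); the resource names, namespaces, and email address are placeholders, and the HTML formatting of the email body isn't handled by plain SNS email at all, so that part is omitted:

```hcl
# Sketch: alarm when >10% of requests fail over a 5-minute window,
# notifying an email address via SNS. All names are illustrative.

resource "aws_sns_topic" "alerts" {
  name = "request-failure-alerts"
}

resource "aws_sns_topic_subscription" "email" {
  topic_arn = aws_sns_topic.alerts.arn
  protocol  = "email"
  endpoint  = "oncall@example.com" # placeholder address
}

resource "aws_cloudwatch_metric_alarm" "failure_rate" {
  alarm_name          = "10pct-request-failures"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = 1
  threshold           = 10 # percent
  alarm_actions       = [aws_sns_topic.alerts.arn]

  # Metric math: failure rate = 100 * 5xx count / total requests
  metric_query {
    id          = "failure_rate"
    expression  = "100 * errors / requests"
    label       = "Failure rate (%)"
    return_data = true
  }

  metric_query {
    id = "errors"
    metric {
      namespace   = "AWS/ApplicationELB"
      metric_name = "HTTPCode_Target_5XX_Count"
      period      = 300 # 5 minutes
      stat        = "Sum"
    }
  }

  metric_query {
    id = "requests"
    metric {
      namespace   = "AWS/ApplicationELB"
      metric_name = "RequestCount"
      period      = 300
      stat        = "Sum"
    }
  }
}
```

The point stands either way: the shape of this config is fully determined by the provider's documented schema, so there's essentially one right answer to converge on.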

palmotea 2 days ago

I think you're still missing the point. This cousin comment does a decent job of explaining it: https://news.ycombinator.com/item?id=46231510

yunwal 2 days ago

I don’t think I am. To me, the process doesn’t have to be precise. The code is precise and I am precise. If it gets me what I want most of the time, I’m OK with having to catch the cases where it doesn’t.