windexh8er a day ago

> I am curious to know how you are coming to these conclusions.

What I have stated is what I have seen first-hand and continue to see. They aren't conclusions; they're observations.

> I have been a computer programmer for over 30 years, and I have pretty solid evidence that I am good at it.

OK.

> I have been using AI to write some very capable, well written, well tested, novel software projects

That's great; I'm sure that's all true, with the exception of "novel software projects". Any examples?

> Now, is it easy to use coding AIs to generate really bad code? Yes. Does that mean it is impossible to get them to generate good code? No, I don't think it is.

Sure. This is basically what I already said.

> Coding with AIs is just like any other type of coding, it takes skill and practice. Not everyone is able to create great code with AI, because you need to use it in the correct way.

There is no one correct way, because LLM inference is non-deterministic by design: for any given prompt, you can't know in advance how the model will respond.
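To make that concrete, here's a toy sketch (not a real LLM, just standard temperature sampling over made-up scores) of why the same prompt can produce different outputs on different runs:

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=None):
    """Sample an index from softmax(logits / temperature).

    Toy illustration: with temperature > 0 the choice is stochastic,
    so repeated calls with identical inputs can return different tokens.
    Greedy decoding (temperature -> 0) would always pick the max logit.
    """
    rng = rng or random.Random()
    scaled = [l / temperature for l in logits]
    m = max(scaled)                                # subtract max for
    exps = [math.exp(s - m) for s in scaled]       # numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    idx = rng.choices(range(len(logits)), weights=probs, k=1)[0]
    return idx, probs

# Hypothetical scores for three candidate next tokens.
logits = [2.0, 1.5, 0.5]
idx, probs = sample_token(logits, temperature=1.0)
# Even the most likely token here has only ~55% probability, so a
# nontrivial fraction of runs pick a different continuation.
```

The point isn't that sampling is mysterious, just that "the correct prompt" can't guarantee a particular completion when decoding is stochastic.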

> There are a lot of techniques that people have been discovering to get the AI to output better code. It is a very active field, and people are experimenting and coming up with frameworks and strategies to improve the quality. That work is paying dividends.

I never said LLMs have no value, but it's not "paying dividends" once you account for the true cost of LLMs. Frontier models are heavily subsidized at today's prices. Do you think Claude Code is worth $2k per month? $20k? Is driving up energy prices for people who don't care about software another one of these "dividends"? How do you quantify finite-resource consumption against the generation of AI images? I'm curious.

> You can write very bad code with any language or tool. AI doesn't (yet!) allow non-coders to create great code, but it certainly can create great code in the hands of experts.

OK. So you're saying this is a tool you need expertise in to use safely and effectively. That's basically what I've already stated.

> "...great code in the hands of experts".

Any expert with an Internet connection can already create great code. So your argument is that it saves experts time, and you agree that AI can create poor code and insecure systems when left to "non-experts". But the part you're leaving out is that the AI won't tell the "non-experts" anything of the sort. How... novel!