claytongulick a day ago

> (We sure as hell aren’t there yet, but that’s a possibility)

What makes you think so?

Most of the stuff I've read, my personal experience with the models, and my understanding of how these things work all point to the same conclusion:

AI is great at summarization and classification, but totally unreliable with generation.

That basic unreliability seems to be fundamental to LLMs. I haven't seen much improvement in the big models, and a lot of the researchers I've read are theorizing that we're pretty close to maxing out what scaling training and inference will do.

Are you seeing something else?

gfdvgfffv a day ago | parent | next

I have used Claude to write a lot of code. I am, however, already a programmer, one with ~25 years of experience. I’ve also led organizations of 2-200 people.

So while I don’t think the world I described exists today — one where non-programmers, with neither programming nor programmer-management experience, use these tools to build software — I don’t a priori disbelieve its possibility.

senordevnyc a day ago | parent | prev

This seems really vague. What does "totally unreliable" mean?

If you mean that a completely non-technical user can't vibe code a complex app and have it be performant, secure, defect-free, etc, then I agree with you. For now. Maybe for a long time, we'll see.

But right now, today, I'm a professional software engineer with two decades of experience and I use Cursor and Opus to reliably generate code that's on par with the quality of what I can write, at least 10x faster than I can write it. I use it to build new features, explore the codebase, refactor existing features, write documentation, help with server management and devops, debug tricky bugs, etc. It's not perfect, but it's better than most engineers I've worked with in my career. It's like pair programming with a savant who knows everything, some of which is a little out of date, who has intermediate level taste. With a tiny bit of steering, we're an incredibly productive duo.

conartist6 a day ago | parent | next

I know the tech is here to stay, and the best parts of it are where it provides accessibility and tears down barriers to entry.

My work is to make sure that you don't need to reach for AI just because human typing speed is limited.

I love to think in terms of instruments versus assistants: an assistant is unpredictable but easy to use. It tries to guess what you want. An instrument is predictable but relatively harder to use. It has a skill curve and perhaps a skill cap. The purpose of an instrument is to directly amplify the expressive power of its user or player through predictable, delicately calibrated responses.

claytongulick 8 hours ago | parent | prev

Maybe we're working on different things?

My experience has been much worse. Random functions with no purpose, awful architecture with no theory of mind, thousands of lines of comprehension debt, bugs that are bizarre and difficult to track down and reason about...

All of this comes coupled with the occasional time when it "gets it right".

Those moments make me feel like I saved time, but when I look at my productivity truly critically, I see a net decline overall, and I feel myself getting dumber and losing my ability to come up with creative solutions.