ken47 8 months ago

The hype levels are so overwhelming that AI coding could never hope to meet them. I've tried having a highly ranked AI coding app write unit tests for a relatively complex codebase; 80% of the generated test cases failed. But an experienced human such as myself could use them as a starting point, since the tool took care of some of the tedious boilerplate. It genuinely saved me some time and annoyance, but it could never hope to replace even the lowliest competent junior dev.

That's what AI is good for right now - boilerplate acceleration. But that's clearly not enough to drive the immense transfers of capital that this hype ecosystem demands.

bee_rider 8 months ago | parent | next [-]

I do somewhat worry that AI will harm language development.

Boilerplate… is bad, right? If our languages were more elegant, we’d need less of it. It is necessary because things don’t have sane defaults or sufficient abstraction.

In particular, if the AI is able to “guess” what boilerplate you wanted, that boilerplate ought to be knowable beforehand (because the AI is not actually guessing; it is a deterministic set of rules where you type things in and programs come out - it is a programming language, albeit a weirdly defined one).
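
To make that concrete with a made-up TypeScript example: if a model can reliably complete the second of these near-identical wrappers after seeing the first, the pattern was mechanical all along, and a sane abstraction could have removed it (the names and endpoints here are invented):

    interface User { id: string; name: string }
    interface Order { id: string; total: number }

    // Boilerplate an LLM can "guess" because it's fully determined by a pattern...
    async function getUser(id: string): Promise<User> {
      const res = await fetch(`/api/users/${id}`);
      if (!res.ok) throw new Error(`GET /api/users/${id}: ${res.status}`);
      return res.json();
    }

    async function getOrder(id: string): Promise<Order> {
      const res = await fetch(`/api/orders/${id}`);
      if (!res.ok) throw new Error(`GET /api/orders/${id}: ${res.status}`);
      return res.json();
    }

    // ...which means the language or library could have offered the abstraction itself.
    const getResource = <T>(kind: string) => async (id: string): Promise<T> => {
      const res = await fetch(`/api/${kind}/${id}`);
      if (!res.ok) throw new Error(`GET /api/${kind}/${id}: ${res.status}`);
      return res.json() as Promise<T>;
    };
    const getUser2 = getResource<User>("users");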

camdenreslink 8 months ago | parent | next [-]

I think there will be an ossification of current languages/frameworks/libraries. The technologies that already have a large amount of documentation, tutorials, example code, etc. will have a huge advantage over a brand new language, because all of that content will be in the pretraining data for every LLM. Would you rather use JavaScript/TypeScript and Python (which work relatively well with LLMs), or some brand new thing that will barely work at all with LLMs?

Tokumei-no-hito 8 months ago | parent [-]

tbf that applies pre-AI as well.

johnnyanmac 8 months ago | parent | prev [-]

>Boilerplate… is bad, right?

It's tough to say in a vacuum. Generally, the lower level you go, the more "boilerplate" you'll need. That cost in iteration speed buys programmers more control over the task.

So Python may sneer at such boilerplate, whereas Rust prefers to be more verbose.

ookblah 8 months ago | parent | prev | next [-]

I'm finding this as well. I can't trust it to go beyond boilerplate or PoCs, or I risk lagging behind in understanding and actually slowing down in the long term. The best workflow is to tackle siloed problems and iterate on them with the LLM.

I will say, in terms of fixing boilerplate or doing tedious work, it's a godsend. The ability to give it literally garbage-formatted content and get back something relatively structured and coherent is great. Some use cases I've found:

- Rapidly prototype/scaffold a new feature. I can do what would take me hours or days in orders of magnitude less time, while saving my brain energy lol. 90% of it is usually garbage, but I just need a PoC or a direction, and I can sniff out which approach doesn't work (this goes with the above point about not doing too much at a time).

- An app typically has a lot of similarly structured sections that can't be DRY for whatever reason. Fix the problem in one area, show the LLM how it was solved, and have it apply the fix to X other sections. Boom: it edits 20 files and saves me tedious hours of doing it manually and potentially screwing it up.

- Run logs and sections of code through it when a bug first comes in, to see if I'm missing something obvious up front. It helps me dive into the right areas.

- Transform garbage content into something structured when I can't be bothered to write a complicated regex or script for it (the sketch below shows the kind of one-off transform I mean).
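
For example, a hypothetical TypeScript sketch of that last case; the log format and field names are invented:

    // Pull structured records out of messy, inconsistently formatted log lines.
    interface LogEntry { timestamp: string; level: string; message: string }

    const LINE = /^\[?(\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2})\]?\s+(ERROR|WARN|INFO)\s*[-:]\s*(.+)$/;

    function parseLogs(raw: string): LogEntry[] {
      return raw
        .split("\n")
        .map((line) => LINE.exec(line.trim()))
        .filter((m): m is RegExpExecArray => m !== null)
        .map(([, timestamp, level, message]) => ({ timestamp, level, message }));
    }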

Workaccount2 8 months ago | parent | prev | next [-]

AI is to SWEs what self-driving cars (or driver-assistance features on new cars) are to taxi drivers: a neat thing that may make some basic tasks a little easier.

But that's the wrong perspective. Think instead about what a self-driving car is to someone who cannot drive. It doesn't matter if it sometimes disengages to have an operator step in, or can only handle basic local trips. It's a total game changer. AI is unlocking computers for people who have never written a line of code in their life.

ehnto 8 months ago | parent | next [-]

> AI is unlocking computers for people who have never written a line of code in their life.

I don't think I can dispute the claim, but it feels more like someone who can't build a house now being able to attempt one because they have YouTube tutorials. Unless the person was already quite smart and competent, the house will probably have significant structural issues.

Is that a bad thing? For housing, governments the world over seem to agree that it is. But coding has never had a real attempt at regulation. You're going to end up with "vibe coded" production code handling people's personal and financial information, and that is genuinely a bad idea. A novice will not spot security issues, and AI will happily produce them.

firesteelrain 8 months ago | parent | next [-]

I disagree that coding doesn't have regulation. If you have never developed code in a professionally regulated industry such as airworthiness, then you haven't yet been exposed to an area that requires rigorous process. There are areas where software is heavily regulated.

I have DIY'd an addition onto my house with professionally architected blueprints and an engineer's seal. At various stages, I would call the City, which would send code-inspection officials to incrementally sign off on my project's progress. Other than pouring the new concrete slab and the electrical work, I built it all myself, to code. I followed YouTube tutorials.

My point is that DIY isn’t the issue - lack of oversight is. With standards, expert input, and review processes, even non-experts can safely build. AI-assisted coding needs the same approach.

trog 8 months ago | parent [-]

All true, but tell the average programmer that you think their industry should be regulated and that they should potentially be held liable for their code, and watch the reaction.

This is not a popular opinion in software development circles - unless you're already in one of those regulated fields, like the ones where a software engineer (a literal accredited engineer) is required.

But it's been an increasingly common talking point from a lot of experts. Bruce Schneier writes about it a lot - he convinced me long ago that our industry is pretty pathetic when it comes to holding corporations liable for massive security failures, for example.

firesteelrain 8 months ago | parent [-]

We have to mature as an industry. Things like not staying up to date on third-party dependencies, not including security checks in the build pipeline, lacking static and dynamic analysis, and not encrypting secrets at rest are still routine.

It is already costing millions of dollars, and it's just accepted.
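
To pick one item off that list: encrypting secrets at rest is a handful of lines with Node's built-in crypto module. A minimal sketch (a real system would fetch the key from a KMS rather than generate it in process):

    import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

    const key = randomBytes(32); // placeholder: in production, load from a KMS

    function encryptSecret(plaintext: string): string {
      const iv = randomBytes(12); // 96-bit nonce, standard for GCM
      const cipher = createCipheriv("aes-256-gcm", key, iv);
      const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
      // The IV and auth tag are not secret; store them alongside the ciphertext.
      return Buffer.concat([iv, cipher.getAuthTag(), ciphertext]).toString("base64");
    }

    function decryptSecret(blob: string): string {
      const buf = Buffer.from(blob, "base64");
      const decipher = createDecipheriv("aes-256-gcm", key, buf.subarray(0, 12));
      decipher.setAuthTag(buf.subarray(12, 28));
      return Buffer.concat([decipher.update(buf.subarray(28)), decipher.final()]).toString("utf8");
    }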

schwartzworld 8 months ago | parent | prev [-]

I couldn't build a house, but I did learn a lot of smaller skills for which I would otherwise have called in a pro. It turns out changing a toilet is pretty much putting it in place and screwing on the pipe, and you can program keys for my Xterra by toggling certain controls in a certain order.

I wouldn’t expect a vibe coder to build a full featured app on vibes alone and produce a quality codebase, but for smaller tasks and apps it is just fine.

threeseed 8 months ago | parent | prev | next [-]

Your analogy makes no sense. There is no equivalent of an operator who will fix your code if it fails.

You're asking someone with zero coding experience to jump in and fix bugs so niche or complex that the LLM is incapable of doing it.

johnnyanmac 8 months ago | parent [-]

Sure. Who's gonna do it, though? This whole year's theme is, in fact, deregulation.

cjfd 8 months ago | parent | prev | next [-]

"Think instead what a self driving car is to someone who cannot drive."

It would be a very dangerous thing if said self-driving car crashed ten times every ride.

atmavatar 8 months ago | parent | prev | next [-]

> AI is unlocking computers for people who have never written a line of code in their life.

And this is why the holodeck tries to kill its occupants in half the episodes in which it is featured.

johnnyanmac 8 months ago | parent | prev [-]

>It doesn't matter if it sometimes disengages to have an operator step in

I'd say that breaks the entire concept of "self-driving" at that point. If people are fine with some of those "actually Indian" stories, where supposedly AI-powered tools turned out to have a significant human element, why not just go back to the taxi? You clearly don't care about the tech; you care about being driven to a destination.

closewith 8 months ago | parent | prev | next [-]

I'd like to know which app and model you were using, along with the prompts.

We have had a steep learning curve in prompt preparation (what we're doing is certainly not engineering), but Claude Code is now one-shotting viable PRs in our legacy codebases, and they are, well, good.

Saying LLMs are only good for boilerplate acceleration is so far from my experience that it sounds absurd.

dbbk 8 months ago | parent [-]

LLMs don't even understand fucking TypeScript, which you would expect a computer program to be able to understand. I can't get one to write valid Drizzle code to save my life; it will confidently hallucinate method imports that don't even exist.
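
For reference, valid Drizzle looks roughly like this in the versions I've used (the table, columns, and connection details here are invented); what the model gives me instead is imports shaped just like these that don't actually exist:

    import { drizzle } from "drizzle-orm/node-postgres";
    import { pgTable, serial, text } from "drizzle-orm/pg-core";
    import { eq } from "drizzle-orm";
    import { Pool } from "pg";

    // Invented example schema.
    const users = pgTable("users", {
      id: serial("id").primaryKey(),
      email: text("email").notNull(),
    });

    const db = drizzle(new Pool({ connectionString: process.env.DATABASE_URL }));

    // A real query-builder call; the hallucinated versions import plausible-looking
    // helpers from the wrong module paths, or operators that were never exported.
    const user = await db.select().from(users).where(eq(users.id, 1));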

gotimo 8 months ago | parent | next [-]

Fucking real. I've had Claude Code make obvious syntax errors in C# by declaring a tuple as `var (a, string b)`, which I thought we were past.

closewith 8 months ago | parent | prev [-]

The question is which LLM, invoked how?

Claude Code has refactored numerous projects into TS for us, often one-shotting it. Saying LLMs don't understand TS (which may be true, in the sense that LLMs only questionably understand anything) says more about your perception than about model and agent abilities.

camdenreslink 8 months ago | parent | next [-]

I have also had a really hard time getting Claude and Gemini to create valid TypeScript in somewhat complex legacy projects. Sometimes they will do the most kludgey things to sort of make it work (things a human developer would never consider acceptable).

dbbk 8 months ago | parent | prev [-]

Right. LLMs don't understand TS, because they're not integrated with it. When they come across something they don't know, they just start hallucinating, and they don't even verify whether it's actually valid (because they can't).

closewith 8 months ago | parent [-]

LLMs can't, but agents can: they can read documentation into context, verify code, compile it, use analysis tools, and run tests.
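
Concretely, the "verify" step can be as dumb as shelling out to the compiler and handing any errors back to the model. A sketch of one loop iteration (the agent harness around it is assumed):

    import { execSync } from "node:child_process";

    // Type-check the whole project without emitting output; return compiler
    // errors as text for the agent to feed back to the model, or null if clean.
    function typeCheck(): string | null {
      try {
        execSync("npx tsc --noEmit", { stdio: "pipe" });
        return null;
      } catch (err: any) {
        return err.stdout?.toString() ?? String(err); // tsc reports errors on stdout
      }
    }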

Hallucinations do occur, but they're becoming rarer (especially if you prompt to the strengths of the model and provide context), and tests catch them.

DonHopkins 8 months ago | parent | prev | next [-]

There's a difference between AI-assisted "vibe coding" and what I’ll call (for now) "craft coding."

This isn’t a movement or a manifesto. I’m not trying to coin a term or draw a line in the sand. I just want to articulate something I care about, now that AI-assisted coding is becoming so common.

The phrase “vibe coding” is already a "thing", even if not well defined. It’s been tossed around to describe a casual, improvisational approach to programming with AI tools. Sometimes it’s playful and exploratory - which is fine in personal, low-stakes settings. But when vibe coding crosses into production software or educational tools without critical thought and human review, it can become actively harmful.

I’ve seen definitions of vibe coding that outright celebrate not reading the generated code. Or that dismiss the need to understand what the code does — as long as it runs. That’s where I draw a hard line. If you’re unwilling to engage with what you’re building, you’re not just missing the point — you’re potentially creating brittle, inscrutable systems that no one can maintain, not even you.

It’s the kind of anti-intellectual aversion to thinking for yourself that leads you to Vibe Code a CyberTruck of an app: an aggressive, self-satisfied, inexplicably pointy prison piss pan.

It plows along mindlessly — overconfidently smashed, shot at, and shattered on stage, problems waved off with “a little room for improvement” and “we’ll fix it in post”; then it’s pushed straight into production, barreling under “Full Self-Delusion” mode through crowds of innocent bystanders, slamming at full Boring speed into a tunnel painted on the face of a cliff, bursting into flames, and finally demanding a monthly subscription to extinguish itself.

Vibe Coded CyberApps are insecure thin-skinned auto-crashing cold-rolled steel magnets for hackers, bots, vandals, and graffiti artists.

It’s the kind of city planning where you paint a tunnel directly onto the side of a cliff, floor it like Wile E. Musk, and trust that the laws of physics — or software engineering — will graciously suspend themselves for you.

It's where you slap a “FREE TOLL” sign on a washed-out bridge and call it a feature, not a bug.

By contrast, what I’m calling “craft coding” is about intentionality, comprehension, and coherence. It’s about treating code as something more than a means to an end — something worth shaping with care. That doesn’t mean it’s always elegant or beautiful. Sometimes it’s messy. But even when it’s messy, it’s explainable. You can reason about it. You can teach with it. You can read it six months later and still understand why it’s there.

Craft coding doesn’t require you to be an expert. It requires you to care. It’s a mindset that values:

- Understanding what your code does.

- Naming things clearly.

- Keeping code, comments, and documentation in sync.

- Being able to explain a design decision, even if that decision is "this is temporary and kind of gross, but here’s why".

- Leaving the campsite cleaner than you found it.

Craft coding has been around since long before AI-assisted coding. I've been fortunate to read the code of, and work with, some great Craft Coders. But in order to learn from great Craft Coders, you've got to be willing and eager to read other people's code and documentation, not repelled and appalled by the notion.

But vibe coding has also been around since before the time of LLMs. Tools like Snap! (and before that, Logo) encouraged exploratory, playful, improvisational approaches to programming.

Kids learn a lot by poking around and building things with little upfront planning. That’s not a bad thing — in fact, it’s a great on-ramp to coding. Snap! supports a kind of vibe coding that’s deeply aligned with constructionist education: learning by making, reflecting, iterating.

The same vibe exists in educational simulations like Factorio, SimCity/Micropolis, and The Sims — call it vibe space industry, vibe city planning, vibe architecture, or vibe parenting. You drop some zones, throw up some walls, dig out a swimming pool, tweak a slider, pop out some babies, see what happens. It’s empowering and often inspiring.

But you don’t want to live in a city you vibe-mayored in SimCity, with a vibe-tunneled Boring Company loop, or ride a vibe-planned HyperLoop, or board a vibe-engineered rocket operating under “Full Self-Delusion” mode, held together with PowerPoint and optimism, headed for a Rapid Unscheduled Disassembly.

The road from vibe coding to craft coding is a natural one. You start by exploring. Then you get curious. You want to understand more. You want your code to be shareable, readable, fixable. You want to build things you’re proud of, and that others can build on.

This is especially relevant now because AI-assisted coding is amplifying both the good and the bad. Tools like Cursor and Copilot can accelerate comprehension, or they can accelerate incoherence. They can help you learn faster, or help you skip learning entirely. It depends on how you use them.

But used with intention, LLMs can support craft coding, serving as a "coherence engine" rather than a "chaos reactor": helping you align code with documentation, keep multi-dialect implementations in sync, and build multi-resolution natural-language explanations of your design. They’re not just code generators. They’re context managers, language translators, spaghetti groomers, diff explainers, OCD style critics, grammar police, relentless lint pickers, code and documentation searchers and summarizers, and conversation partners. And if you treat them that way - as tools to reinforce clarity and coherence - they can be an extraordinary asset.

So this isn’t about gatekeeping. I’m not saying vibe coders aren’t welcome. I’m saying: if you’re interested in the practice of craft coding — in learning, building, and maintaining systems that make sense — then you’ll probably want something more than vibes.

And if you’ve got a better name than “craft coding,” I’m all ears. But what matters isn’t the name. It’s the practice.

paradite 8 months ago | parent | prev [-]

I think you should try more tools and use cases.

Yes, some of the current AI coding tools will fail at some use cases and tasks, but other tools might give you good results. For example, Devin is pretty bad at some trivial frontend tasks in my testing, but Cursor is way better.

I have had good success with web development, and JavaScript in particular, on several large codebases.

I also built my own AI coding and eval tools that suit my needs, though Cursor has largely replaced the AI coding tool that I built.