manoDev 6 days ago

“AI” is great for coding in the small; it’s like having a powerful semantic code editor, or pairing with a junior developer who can look up some info online quickly. The hardest part of the job was never typing or figuring out some API bullshit anyway.

But trying to use it like “please write this entire feature for me” (what vibe coding is supposed to mean) is the wrong way to handle the tool IMO. It turns into a specification problem.

dboreham 6 days ago | parent | next [-]

Yes, but in my experience actually no. At least not with the bleeding-edge models today. I've been able to get LLMs to write whole features, to the point that I'm quite surprised at the result. Perhaps I'm talking to it right (the new "holding it right"?). I tend to begin by asking for an empty application with the characteristics I want (CLI, has subcommands, ...), then I ask it to add a simple feature. Get that working, then ask it to enhance functionality progressively, testing as we go. Then when the functionality is working, I ask for a refactor (it often puts 1500 LOC in one file, for example), documentation, improved help text, and so on. Basically the same way you'd manage a human.
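
To make that concrete, here is a rough sketch (mine, not the commenter's actual output) of what that empty-but-structured starting point might look like in Python, before any real features are added:

    # Hypothetical first-prompt scaffold: a CLI with subcommands and no real
    # behaviour yet; later prompts would flesh out one feature at a time.
    import argparse

    def main() -> None:
        parser = argparse.ArgumentParser(prog="mytool")
        sub = parser.add_subparsers(dest="command", required=True)

        sync = sub.add_parser("sync", help="synchronise data")
        sync.add_argument("--dry-run", action="store_true")

        sub.add_parser("status", help="show current state")

        args = parser.parse_args()
        if args.command == "sync":
            mode = " (dry run)" if args.dry_run else ""
            print(f"sync: not implemented yet{mode}")
        elif args.command == "status":
            print("status: not implemented yet")

    if __name__ == "__main__":
        main()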

I've also been close to astonished at the capability LLMs have to draw conclusions from very large, complex codebases. For example, I wanted to understand the details of a distributed replication mechanism in a project that is enormous. Pre-LLM, I'd have spent a couple of days crawling through the code using grep and perhaps IDE tools, making notes on paper. I'd probably have had to run the code or instrument it with logging, then look at the results in a test deployment. But I've found I can ask the LLM to take a look at the p2p code and tell me how it works. Then ask it how the peer set is managed. I can ask it whether all reachable peers are known at all nodes. It's almost better than me at this, and it's what I've done for a living for 30 years. Certainly it's very good for very low cost and effort. While it's chugging, I can think about higher-order things.

I say all this as a massive AI skeptic dating back to the 1980s.

manoDev 6 days ago | parent | next [-]

> I tend to begin by asking for an empty application with the characteristics I want (CLI, has subcommands, ...), then I ask it to add a simple feature.

That makes sense, as you're breaking the task into smaller achievable tasks. But it takes an already experienced developer to think like this.

Instead, a lot of people on the hype train are pretending an AI can take an idea to production from a "CEO level" of detail – that probably ain't happening.

dingnuts 6 days ago | parent [-]

> you're breaking the task into smaller achievable tasks.

This is the part that I would describe as engineering in the first place. It's the part that separates a script kiddie, or someone who "knows" one language and can be somewhat dangerous with it, from someone who commands a $200k/year salary, and it is the important part.

And so far there is no indication that language models can do this part at all.

For someone who CAN break a problem down into smaller abstractions, though, some of these models can save you a little time, sometimes, in cases where it's less effort to type an explanation of the problem than it is to type the code directly.

Which is to say: sometimes.

svachalek 6 days ago | parent | prev [-]

All the hype is about asking an LLM to start with an empty project and loose requirements. Asking it to work on a million lines of legacy code (inadequately tested, as all legacy code is) with ancient and complex contracts is a completely different experience.

felipeerias 6 days ago | parent [-]

Very large projects are an area where AI tools can really empower developers without replacing them.

It is very useful to be able to ask basic questions about the code that I am working on, without having to read through dozens of other source files. It frees up a lot of time to actually get stuff done.

patrick451 5 days ago | parent [-]

AI makes so many mistakes that I cannot trust it to tell me the truth about how a large codebase works.

Gigachad 6 days ago | parent | prev | next [-]

I find this half-state kind of useless. If I have to know and understand the code being generated, it's easier to just write it myself. The AI tools can spit out function names and tools I don't know off the top of my head, but the only way to check they are correct is to go look up the documentation, and at that point I've just done the hard work I wanted to avoid.

Feels like a similar situation to self driving where companies want to insist that you should be fully aware and ready to take over in an instant when things go wrong. That's just not how your brain works. You either want to fully disengage, or be actively doing the work.

platevoltage 6 days ago | parent [-]

> The AI tools can spit out function names and tools I don't know off the top of my head, but the only way to check they are correct is to go look up the documentation, and at that point I've just done the hard work I wanted to avoid.

This is exactly my experience, but I guess generating code with deprecated methods is useful for some people.
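
As one concrete illustration (my example, not the parent's): Python's datetime.utcnow() is exactly the sort of call these tools still emit, even though it has been deprecated since Python 3.12:

    # A typical deprecated suggestion, and its current replacement.
    from datetime import datetime, timezone

    stamp = datetime.utcnow()           # deprecated since 3.12; naive datetime
    stamp = datetime.now(timezone.utc)  # preferred: timezone-aware UTC datetime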

Chris_Newton 6 days ago | parent | prev [-]

> It turns into a specification problem.

This, IMHO, is the critical point and why a lot of “deep” development work doesn’t benefit much from the current generation of AI tools.

Last week, I was dealing with some temporal data. I often find working in this area a little frustrating because you spend so much time dealing with the inherent traps and edge cases, so using an AI code generator is superficially attractive. However, the vast majority of my time wasn’t spent writing code; it was spent getting my head around what the various representations of certain time-based events in this system actually mean and what should happen when they interact. Next, I probably wrote about 100 test cases, each covering a distinct real-world scenario, and working out how to parameterise them so the coverage was exhaustive for certain tricky interactions also required a bit of thought. Finally, I wrote the implementation of the algorithm itself, which had a lot of essential complexity: code with lots of conditionals that needs to be crystal clear about why things are done in a certain order and decisions are made a certain way, so anyone reading it later has a fighting chance of understanding it. Which of those three stages would current AI tools really have helped with?
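
For readers unfamiliar with that second stage, parameterised temporal tests might look something like this sketch (the half-open-interval helper and the scenarios are illustrative, not the actual system under discussion):

    # Hypothetical parameterised tests for interval interactions.
    from datetime import datetime

    import pytest

    def overlaps(a_start, a_end, b_start, b_end):
        # Half-open intervals [start, end): touching endpoints don't overlap.
        return a_start < b_end and b_start < a_end

    @pytest.mark.parametrize(
        "a_start, a_end, b_start, b_end, expected",
        [
            # Disjoint events.
            (datetime(2024, 1, 1), datetime(2024, 1, 2),
             datetime(2024, 1, 3), datetime(2024, 1, 4), False),
            # Touching endpoints: no overlap under half-open semantics.
            (datetime(2024, 1, 1), datetime(2024, 1, 2),
             datetime(2024, 1, 2), datetime(2024, 1, 3), False),
            # Partial overlap.
            (datetime(2024, 1, 1), datetime(2024, 1, 3),
             datetime(2024, 1, 2), datetime(2024, 1, 4), True),
            # Full containment.
            (datetime(2024, 1, 1), datetime(2024, 1, 10),
             datetime(2024, 1, 2), datetime(2024, 1, 3), True),
        ],
    )
    def test_overlaps(a_start, a_end, b_start, b_end, expected):
        assert overlaps(a_start, a_end, b_start, b_end) == expected

The hard part the commenter describes is choosing those scenarios so coverage is exhaustive for the tricky interactions; the helper itself is the easy bit.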

I find AI code generators can be quite helpful for low-level boilerplate stuff, where the required behaviour is obvious and the details come from a specific database schema or remote API spec. No doubt some applications consist almost entirely of this kind of code, and I can easily believe that people working on those find AI coding tools much more effective than I typically do. But as 'manoDev says in the parent comment, deeper work is often a specification problem. The valuable part is often figuring out the what and the why rather than the how, and so far that isn’t something AI has been very good at.
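
For contrast, the "obvious boilerplate" category looks something like this (a hypothetical users table; the schema fully determines the code, which is why generation works well here):

    # Hypothetical record type mirroring a database table, plus a trivial
    # insert helper. Uses only the standard library (sqlite3, dataclasses).
    import sqlite3
    from dataclasses import dataclass

    @dataclass
    class User:
        id: int
        email: str
        created_at: str  # ISO 8601 timestamp

    def insert_user(conn: sqlite3.Connection, user: User) -> None:
        conn.execute(
            "INSERT INTO users (id, email, created_at) VALUES (?, ?, ?)",
            (user.id, user.email, user.created_at),
        )
        conn.commit()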
