rhdunn 3 days ago

I use coding LLMs as a mix of:

1. a better autocomplete -- here the LLM can make mistakes, but on balance I've found this useful, especially when constructing tests, writing output in a structured format, etc.;

2. a better search/query tool -- I've found answers by being able to describe what I'm trying to do, whereas with a traditional search I have to know the right keywords to try. I can then go to the documentation or search if I need additional help/information;

3. an assistant to bounce ideas off -- this can be useful when you are not familiar with the APIs or configuration. It still requires testing the code, seeing what works and what doesn't. Here, I treat it the same way as reading a blog post on a topic -- the post may be outdated, may contain issues, or may not be quite what I want. However, it can have enough information for me to get the answer I need -- e.g. a particular method, which I can then check against the docs (such as documentation comments on the APIs). Or it lets me know what to search for on Google, etc.

In other words, I use LLMs as part of the process, the same way I'd use a search engine, Stack Overflow, etc.

Sohcahtoa82 2 days ago | parent

> a better autocomplete

This is 100% what I use GitHub Copilot for.

I type a function name and the AI already knows what I'm going to pass it. Sometimes I just type "somevar =" and it instantly correctly guesses the function, too, and even what I'm going to do with the data afterwards.

I've had instances where I just type a comment with a sentence of what the code is about to do, and it'll put up 10 lines of code to do it, almost exactly matching what I was going to type.
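To illustrate the comment-driven workflow described above (the function name and comment here are hypothetical, not from the thread): the developer types only the comment and signature, and a Copilot-style completion proposes the body.

```python
# Hypothetical example: the developer writes the comment and the
# signature; an autocomplete model suggests everything below them.

def average_word_length(text: str) -> float:
    # Average length of whitespace-separated words; 0.0 for empty input.
    words = text.split()
    if not words:
        return 0.0
    return sum(len(w) for w in words) / len(words)

print(average_word_length("the quick brown fox"))  # prints 4.0
```

As the parent comment notes, a suggestion like this still has to be read and checked; the value is that it often matches what you were about to type anyway.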

The vibe coders give AI code generation a bad name. Is it perfect? Of course not. It gets it wrong at least half the time. But I'm skilled enough to recognize when it's wrong almost instantly.