qudat 4 days ago

Wrong. I will spend 30 minutes having the LLM explain every line of code and why it's important, with context-specific follow-up questions. An LLM is one of the best ways to learn ...

Akronymus 3 days ago | parent | next [-]

So far, each and every time I used an LLM to help me with something, it hallucinated non-existent functions or was incorrect in an important but non-obvious way.

Though, I guess I do treat LLMs as a last-resort longshot for when other documentation is failing me.

naasking 3 days ago | parent | next [-]

Knowing how to use LLMs is a skill. Just winging it without any practice or exploration of how the tool fails can produce poor results.

b112 3 days ago | parent [-]

"You're holding it wrong"

99% of an LLM's usefulness vanishes if it behaves like an addled old man.

"What's that sonny? But you said you wanted that!"

"Wait, we did that last week? Sorry let me look at this again"

"What? What do you mean, we already did this part?!"

naasking 3 days ago | parent [-]

Wrong mental model. Addled old men can't write code 1000x faster than any human.

b112 3 days ago | parent [-]

I'd prefer 1x "wrong stuff" to wrong stuff blasted 1000x. How is that helpful?

Further, they can't write code that fast, because you have to spend 1000x the time explaining it to them.

naasking 2 days ago | parent [-]

Except it's not 1000x wrong stuff, that's the point. But don't worry, the Amish are welcoming of new luddites!

oblio 3 days ago | parent | prev [-]

Which LLMs have you tried? Claude Code seems to be decent at not hallucinating, Gemini CLI is more eager.

I don't think current LLMs take you all the way, but a powerful code generator is a useful thing; just assemble guardrails and keep an eye on it.

Akronymus 3 days ago | parent [-]

Mostly ChatGPT, because I see 0 value in paying for any LLM, nor do I wish to give up my data to any LLM provider.

Anamon 3 days ago | parent | next [-]

Speaking as someone who doesn't really like or do LLM-assisted coding either: at least try Gemini. ChatGPT is the absolute worst you could use. I was quite shocked when I compared the two on the same tasks. Gemini gets decent initial results you can build on. ChatGPT generates 99% absolutely unusable rubbish. The difference is so extreme, it's not even a competition anymore.

I now understand why Altman announced "Code Red" at OpenAI. If their tools don't catch up drastically, and fast, they'll be one for the history books soon. Wouldn't be the first time the big, central early mover in a new market suddenly disappears, steamrolled by the later entrants.

oblio 3 days ago | parent | prev [-]

They work better with project context and access to tools, so yeah, the web interface is not their best foot forward.

That doesn't mean the agents are amazing, but they can be useful.

Akronymus 3 days ago | parent [-]

A simple "how do I access x in y framework in the intended way" shouldn't require any more context.

Instead of telling me about z option, it keeps hallucinating something that doesn't exist and even says it's in the docs when it isn't.

Literally just wasting my time.

oblio 3 days ago | parent [-]

I was in the same camp until a few months ago. I now think they're valid tools, like compilers. Not in the sense in which everyone makes that comparison (compilers made asm development a minuscule niche of development).

But in the sense that even today many people don't use compilers or static analysis tools, and that world is slowly shrinking.

Same for LLMs: the non-LLM world will probably shrink.

You might be able to have a long and successful career without touching them for code development. Personally I'd rather check them out since tools are just tools.

_ikke_ 4 days ago | parent | prev [-]

As long as what it says is reliable and not made up.

qudat 3 days ago | parent | next [-]

That's true for internet searching. How many times have you gone to SO, seen a confident answer, tried it, and it failed to do what you needed?

Anamon 3 days ago | parent [-]

Then you write a comment, maybe even figure out the correct solution and fix the answer. If you're lucky, somebody already did. Everybody wins.

That's what LLMs take away. Nothing is given back to the community, nothing is added to shared knowledge, no differing opinions are exchanged. It just steals other people's work from a time when work was still shared and discussed, removes any indication of its source, claims it's a new thing, and gives you no way to contribute back, or even to discuss it and maybe get confronted with different opinions or discover a better way.

Let's not forget that one of the main reasons why LLMs are useful for coding in the first place is that they scraped SO from the time when people still used it.

anakaine 4 days ago | parent | prev [-]

I feel like we are just covering whataboutism tropes now.

You can absolutely learn from an LLM. Sometimes documentation sucks and the LLM has learned how to put stuff together from examples found in unusual places, and it works, and shows what the documentation failed to demonstrate.

And with the people above, I agree - sometimes the fun is in the end process, and sometimes it is just filling in the complexity we do not have time or capacity to grab. I for one just cannot keep up with front-end development. It's an insurmountable nightmare of epic proportions. I'm pretty skilled at my back-end deep-dive data and connecting APIs, however. So - AI to help put together a coherent interface over my connectors, and off we go for my side project. It doesn't need to be SOC 2 compliant and OWASP-proof, nor does it need ISO 27001 compliance testing, because after all this is just for fun, for me.