yourapostasy 2 days ago

Reminds me of a post I read a few days ago of someone crowing about an LLM writing an email format validator for them. They did not have the LLM code up an accompanying send-a-validation-email loop, and the LLM blithely kept them uninformed of the scar tissue the industry has built up around how curiously deep a rabbit hole email validation becomes.
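
For context, the kind of "validator" at issue is usually a format check like the sketch below. The regex here is an illustrative assumption, deliberately loose, not the RFC 5322 grammar; the full grammar (quoted local parts, comments, internationalized domains) is the rabbit hole in question.

```python
import re

# Deliberately loose format check -- an assumption for illustration,
# not RFC 5322. It only rejects obvious garbage.
LOOSE_EMAIL = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def looks_like_email(address: str) -> bool:
    """Cheap sanity check only; it says nothing about deliverability."""
    return bool(LOOSE_EMAIL.match(address))

# The confirmation loop is the real validator: mail the address a token
# and only accept it once the token comes back. A regex can't do that.
```

The point the parent makes is that without the confirmation loop, even a "correct" regex validates syntax, not an inbox.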

If you’ve been around the block and are judicious in how you use them, LLMs are a really amazing productivity boost. For those without that judgement and taste, I’m seeing footguns proliferate, and the LLMs are not warning them when someone steps on the pressure plate that’s about to blow off their foot. I’m hopeful that this year we will create better context-window-based or recursive guardrails for coding agents to solve for this.

sanderjd 2 days ago | parent | next [-]

Yeah I love working with Claude Code, I agree that the new models are amazing, but I spend a decent amount of time saying "wait, why are we writing that from scratch, haven't we written a library for that, or don't we have examples of using a third party library for it?".

There is probably some effective way to put this direction into the claude.md, but so far it still seems to do unnecessary reimplementation quite a lot.
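
One illustrative way to phrase that direction in a CLAUDE.md — the wording below is an assumption, not a tested recipe, and whether the model follows it reliably is exactly the open question:

```markdown
## Reuse before reimplementing
- Before writing a new utility, search this repo for an existing helper
  and prefer extending it over duplicating it.
- Prefer an established third-party library over hand-rolled code for
  solved problems (parsing, retries, validation, date handling).
- If you reimplement anyway, state why the existing option was unsuitable.
```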

Eisenstein 2 days ago | parent | prev [-]

This is a typical problem you see in autodidacts. They will recreate solutions to solved problems, trip over issues that could have been avoided, and generally do all of the things you would expect of someone working with skill but no experience.

LLMs accelerate this and make it more visible, but they are not the cause. It is almost always a person trying to solve a problem and just not knowing what they don't know because they are learning as they go.

filoeleven 2 days ago | parent | next [-]

> [The cause] is almost always a person trying to solve a problem and just not knowing what they don't know because they are learning as they go.

Isn't that what "using an LLM" is supposed to solve in the first place?

kaydub a day ago | parent | next [-]

With the right prompt the LLM will solve it in the first place. But this is an issue of not knowing what you don't know, so it makes it difficult to write the right prompt. One way around this is to spawn more agents with specific tasks, or to have an agent that is ONLY focused on finding patterns/code where you're reinventing the wheel.

I often have one agent/prompt where I build things, but then I have another agent/prompt whose only job is to find code smells, bad patterns, and outdated libraries, and to file issues or fix these problems.

Eisenstein 2 days ago | parent | prev | next [-]

1. LLMs can't watch over someone and warn them when they are about to make a mistake

2. LLMs are obsequious

3. Even if LLMs have access to a lot of knowledge they are very bad at contextualizing it and applying it practically

I'm sure you can think of many other reasons as well.

People who are driven to learn new things and to do things are going to use whatever is available to them in order to do it. They are going to get into trouble doing that more often than not, but they aren't going to stop. No one is helping the situation by sneering at them -- they are used to it, anyway.

yourapostasy 2 days ago | parent | prev | next [-]

I am hopeful autodidacts will leverage an LLM world like they did an Internet-search world, which grew out of a library world, which grew out of a printed-word world. Each stage in that progression compressed the time it took them to encompass a new body of understanding before applying it in practice, expanded the scope of what they applied the new understanding to, and deepened their adoption of best practices instead of reinventing the wheel.

In this regard, I see LLMs as a way for us to far more efficiently encode, compress, convey, and put into operational practice our combined learned experiences. What will be really exciting is watching what happens as LLMs simultaneously draw from and contribute to those learned experiences as we do; we don't need full AGI to realize massive benefits from rapidly, recursively enabling a new, highly dynamic form of our knowledge sphere that drastically shortens the distance from knowledge to deeply nuanced praxis.

lomase 2 days ago | parent | prev [-]

My impression is that LLM users are the kind of people that HATED that their questions on StackOverflow got closed because it was duplicated.

abstractcontrol 2 days ago | parent | next [-]

> My impression is that LLM users are the kind of people that HATED that their questions on StackOverflow got closed because it was duplicated.

Lol, who doesn't hate that?

lomase 2 days ago | parent [-]

I don't know, in 40 years coding I never had to ask a question there.

sanderjd 2 days ago | parent | prev [-]

So literally everyone in the world? Yeah, seems right!

lomase 2 days ago | parent [-]

I would love to see your closed SO questions.

But don't worry, those days are over; the LLM is never going to push back on your ideas.

sanderjd 2 days ago | parent [-]

lol, I probably don't have any, actually. If I recall, I would just write comments when my question differed slightly from one already there.

But it's definitely the case that being able to go back and forth quickly with an LLM digging into my exact context, rather than dealing with the kind of judgy, humorless attitude that was dominant on SO, is hugely refreshing and way more productive!