martin-t 15 hours ago

> programmer who actually do like the actual typing

It's not about the typing, it's about the understanding.

LLM coding is like reading a math textbook without trying to solve any of the problems. You get an overview, you get a sense of what it's about and most importantly you get a false sense of understanding.

But if you try to actually solve the problems, you engage completely different parts of your brain. It's about self-improvement.

embedding-shape 15 hours ago | parent | next [-]

> It's not about the typing, it's about the understanding.

Well, it's both, for different people, seemingly :)

I also like the understanding, and solving something difficult rewards a really strong part of my brain. But I don't always like to spend 5 hours doing so, especially when I'm only doing it because of some other problem I want to solve. Then, ideally, I just want it solved.

But then on other days I engage in problems precisely because they are hard, and because I want to spend 5 hours thinking about them, designing the perfect solution, and so on.

Different moments call for different methods, and people seem to favor widely different methods too, which makes sense.

ben_w 14 hours ago | parent | prev | next [-]

> LLM coding is like reading a math textbook without trying to solve any of the problems. You get an overview, you get a sense of what it's about and most importantly you get a false sense of understanding.

Can be, but… well, the analogy can go wrong both ways.

This is what Brilliant.org and Duolingo sell themselves on: solve problems to learn.

Before I moved to Berlin in 2018, I had turned the whole Duolingo German tree gold more than once; when I arrived, I was still essentially at tourist level.

Brilliant.org, I did as much as I could before the questions got too hard (the latter half of group theory, relativity, vector calculus, that kind of thing). I've looked at it again since then, and I get the impression that the new questions they added are the same kind of thing that ultimately turned me off Duolingo: easier questions that teach little, padding out a progression system that can only be worked through fast enough to learn anything if you pay a lot.

Code… even before LLMs, I've seen and worked with confident people who had a false sense of understanding of the code they wrote. (Unfortunately for me, one of my weaknesses is the politics of navigating such people.)

habinero 11 hours ago | parent [-]

Yeah, there's a big difference between edutainment like Brilliant and Duolingo and actually studying a topic.

I'm not trying to be snobbish here; it's completely fine to enjoy those sorts of products (I consume a lot of pop science, which I put in the same category), but you gotta actually get your hands dirty and do the work.

It's also fine to not want to do that -- I love to doodle and have a reasonable eye for drawing, but to get really good at it, I'd have to practice a lot and develop better technique and skills and make a lot of shitty art and ehhhh. I don't want it badly enough.

williamcotton 15 hours ago | parent | prev | next [-]

Lately I've been writing DSLs with the help of these LLM assistants. It is definitely not vibe coding, as I'm paying a lot of attention to the overall architecture. But most importantly, my focus is on the expressiveness and usefulness of the DSLs themselves. I am indeed solving problems and I am very engaged, but it is a very different focus. "How can the LSP help orient the developer?" "Do we want to encourage a functional-looking pipeline in this context?" "How should the step debugger operate under these conditions?" etc.

  GET /svg/weather
    |> jq: weatherData
    |> jq: `
      .hourly as $h |
      [$h.time, $h.temperature_2m] | transpose | map({time: .[0], temp: .[1]})
    `
    |> gg({ "type": "svg", "width": 800, "height": 400 }): `
      aes(x: time, y: temp) 
        | line() 
        | point()
    `
I've even started embedding my DSLs inside my other DSLs!
svara 15 hours ago | parent | prev | next [-]

We've been hearing this a lot, but I don't really get it. A lot of code, probably most of it, isn't even close to being as challenging as a maths textbook.

It obviously depends a lot on what exactly you're building, but in many projects programming entails a lot of low intellectual effort, repetitive work.

It's the same things over and over with slight variations and little intellectual challenge once you've learnt the basic concepts.

Many projects do have a kernel of non-obvious innovation, some have a lot of it, and by all means, do think deeply about these parts. That's your job.

But if an LLM can do the clerical work for you? What's not to celebrate about that?

To make it concrete with an example: the other day I had Claude make a TUI for a data processing library I made. It's a bunch of rather tedious boilerplate.

I really have no intellectual interest in TUI coding and I would consider doing that myself a terrible use of my time considering all the other things I could be doing.

The alternative wasn't to have a much better TUI, but to not have any.

zahlman 15 hours ago | parent | next [-]

> It obviously depends a lot on what exactly you're building, but in many projects programming entails a lot of low intellectual effort, repetitive work.

I think I can reasonably describe myself as one of the people telling you the thing you don't really get.

And from my perspective: we hate those projects and only do them if/because they pay well.

> the other day I had Claude make a TUI for a data processing library I made. It's a bunch of rather tedious boilerplate. I really have no intellectual interest in TUI coding...

From my perspective, the core concepts in a TUI event loop are cool, and making one only involves boilerplate insofar as the support libraries you use expect it. And when I encounter that, I naturally add "design a better API for this" to my project list.
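
To make that shape concrete, here's a minimal sketch of the event-loop idea in Python. The handler names and the stubbed input list are made up for illustration; a real TUI would read keys from the terminal and redraw after each event:

```python
# Minimal event-loop shape: read an event, dispatch to a handler, repeat.
# Input is stubbed with a list of keys so the sketch runs without a terminal.

def run_loop(events, handlers):
    """Dispatch each event to its handler; stop on 'q'."""
    state = {"log": []}
    for key in events:
        if key == "q":
            break
        handler = handlers.get(key, lambda s: s["log"].append(f"unbound: {key}"))
        handler(state)
        # A real TUI would redraw the screen here.
    return state

handlers = {
    "j": lambda s: s["log"].append("cursor down"),
    "k": lambda s: s["log"].append("cursor up"),
}

state = run_loop(["j", "j", "k", "q", "j"], handlers)
```

The boilerplate in real libraries mostly lives around this core: terminal setup/teardown, input decoding, and damage-tracked redraws.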

Historically, a large part of avoiding the tedium has been making a clearer separation between the expressive code-like things and the repetitive data-like things, to the point where the data-like things can be purely automated or outsourced. AI feels weird because it blurs the line of what can or cannot be automated, at the expense of determinism.

nkrisc 15 hours ago | parent | prev | next [-]

And so in the future if you want to add a feature, either the LLM can do it correctly or the feature doesn’t get added? How long will that work as the TUI code base grows?

simonw 14 hours ago | parent [-]

At that point you change your attitude to the project and start treating it like something you care about, take control of the architecture, rewrite bits that don't make sense, etc.

Plus the size of project that an LLM can help maintain keeps growing. I actually think there may no longer be any realistic limit at all: the tricks Claude Code uses today with grep and sub-agents mean there's no practical upper bound on how much code it can help manage, even with Opus's relatively small (by today's standards) 200,000-token context window.

zahlman 10 hours ago | parent [-]

The problem I'm anticipating isn't so much "the codebase grows beyond the agent-system's comprehension" as "the agent-system doesn't care about good architecture" (at least unless it's explicitly directed to). So the codebase grows beyond its natural size as things are redundantly rewritten and stuffed into inappropriate places, or ill-fitting architectural patterns are aped.

svara 9 hours ago | parent [-]

Don't "vibe code". If you don't know what architecture the LLM is producing, you will produce slop.

martin-t 15 hours ago | parent | prev [-]

I've been hearing variations of your comment a lot too, and correct me if I'm wrong, but I think they always implicitly assume that LLMs are more useful for the low-intellectual stuff than for solving the high-intellectual core of the problem.

The thing is:

1) A lot of the low-intellectual stuff is not necessarily repetitive; it involves some business logic which is the culmination of knowing the process behind what the user needs. When you write a prompt, the model makes assumptions which are not necessarily correct for the particular situation. Writing the code yourself forces you to notice the decision points and make more informed choices.

I understand your TUI example, and it's better than having none now, but as a result anybody who wants to write "a much better TUI" now faces a higher barrier to entry, since a) it's harder to justify an incremental improvement which takes a lot of work, b) users will already have processes built around the current system, and c) anybody who wrote a similar library with a better TUI is now competing with you, and quality is a much smaller factor than hype/awareness/advertisement.

We'll basically have more, but lower-quality, software, and I am not sure that's an improvement long term.

2) A lot of the high-intellectual stuff can ironically be solved by LLMs, because a similar problem is already in the training data: maybe in another language, maybe with slight differences the LLM can pattern-match away. It's laundering other people's work, and you don't even get to focus on the interesting parts.

svara 14 hours ago | parent [-]

> but I think they always implicitly assume that LLMs are more useful for the low-intellectual stuff than solving the high-intellectual core of the problem.

Yes, this follows from the point the GP was making.

The LLM can produce code for complex problems, but that doesn't save you as much time, because in those cases typing it out isn't the bottleneck, understanding it in detail is.

jebarker 12 hours ago | parent | prev [-]

> LLM coding is like reading a math textbook without trying to solve any of the problems.

Most math textbooks provide the solutions too. So you could choose to just read those and move on and you’d have achieved much less. The same is true with coding. Just because LLMs are available doesn’t mean you have to use them for all coding, especially when the goal is to learn foundational knowledge. I still believe there’s a need for humans to learn much of the same foundational knowledge as before LLMs otherwise we’ll end up with a world of technology that is totally inscrutable. Those who choose to just vibe code everything will make themselves irrelevant quickly.

dehsge 11 hours ago | parent | next [-]

Most math books do not provide solutions. Outside of calculus, solutions in advanced mathematics are left as an exercise for the reader.

jebarker 11 hours ago | parent [-]

The ones I used for the first couple of years of my math PhD had solutions. That's a sufficient level of "advanced" to be applicable in this analogy. It doesn't really matter though - the point still stands that _if_ solutions are available you don't have to use them and doing so will hurt your learning of foundational knowledge.

gosub100 11 hours ago | parent | prev [-]

I haven't used AI yet, but I'd definitely love a tool that could do the drudgery for me for designs I already understand. For instance, if I want to store my own structures in an RDBMS, I want to lay the groundwork and say "Hey Jeeves, give me the C++ syntax to commit this structure to a MySQL table using commit/rollback". I believe once I know what I want, futzing over the exact syntax for how to do it is a waste of time. I've heard C++ isn't well supported, but eventually I'll give it a try.
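
For what it's worth, the commit/rollback groundwork looks much the same across client libraries. Here's a rough sketch of the pattern using Python's built-in sqlite3 in place of a C++/MySQL connector (the `points` table and `save_point` helper are made up for illustration; the transaction shape carries over):

```python
import sqlite3

# Sketch of the commit/rollback pattern for persisting a structure:
# attempt the write, commit on success, roll back on any error.

def save_point(conn, x, y):
    """Insert a point; commit on success, roll back on failure."""
    try:
        conn.execute("INSERT INTO points (x, y) VALUES (?, ?)", (x, y))
        conn.commit()
        return True
    except sqlite3.Error:
        conn.rollback()
        return False

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE points (x REAL NOT NULL, y REAL NOT NULL)")

ok = save_point(conn, 1.0, 2.0)
bad = save_point(conn, None, 3.0)  # violates NOT NULL, so it rolls back
count = conn.execute("SELECT COUNT(*) FROM points").fetchone()[0]
```

With MySQL Connector/C++ the equivalents are `setAutoCommit(false)`, `commit()`, and `rollback()` on the connection object, with a prepared statement for the insert.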