kaptainscarlet 4 days ago

I've also had a similar experience. I have become too lazy since I started vibe-coding. My coding has transitioned from coder to code reviewer/fixer very quickly. Overall I feel like it's a good thing, because the last few years of my life have been a repetition of frontend components and API endpoints, which to me has become too monotonous, so I am happy to have AI take over that grunt work while I supervise.

latexr 3 days ago | parent | next [-]

> My coding has transitioned from coder to code reviewer/fixer very quickly. Overall I feel like it's a good thing

Until you lose access to the LLM and find your ability has atrophied to the point you have to look up the simplest of keywords.

> the last few years of my life has been a repetition of frontend components and api endpoints, which to me has become too monotonous

It’s a surprise that so many people have this problem/complaint. Why don’t you use a snippet manager?! It’s lightweight, simple, fast, predictable, offline, and includes the best version of what you learned. We’ve had the technology for many many years.
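For anyone unfamiliar, here's roughly what a snippet manager buys you, sketched in a few lines of Python (the snippet key and template text are made up; real tools like TextExpander or editor snippets do the same thing with nicer UIs):

```python
from string import Template

# A "snippet manager" at its core: stored templates plus placeholder
# substitution. Offline, predictable, instant.
SNIPPETS = {
    "fc": Template(
        "export function $Name(props: ${Name}Props) {\n"
        "  return <div>$Name</div>;\n"
        "}"
    ),
}

def expand(key: str, **fields: str) -> str:
    """Fill in a stored snippet with the given field values."""
    return SNIPPETS[key].substitute(**fields)

print(expand("fc", Name="UserCard"))
```

The output is the same boilerplate React component shell every time, which is exactly the point: no tokens, no latency, no surprises.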

the_real_cher 3 days ago | parent | next [-]

> Until you lose access to the LLM and find your ability has atrophied to the point you have to look up the simplest of keywords.

I never remembered those keywords to begin with.

Checkmate!

onion2k 3 days ago | parent | prev | next [-]

> Until you lose access to the LLM and find your ability has atrophied to the point you have to look up the simplest of keywords.

Devs shouldn't be blindly accepting the output of an LLM. They should always be reviewing it, and only committing code they're happy to be accountable for. Consequently, your coding and syntax knowledge can't really atrophy like that.

Algorithms and data structures on the other hand...

latexr 21 hours ago | parent [-]

> Devs shouldn't be blindly accepting the output of an LLM.

I agree, they shouldn’t. Yet they are. Not all, of course, but a large enough portion to be a problem. And it’s not just a problem for them, but everyone who has to use what they built.

TuringTest 2 days ago | parent | prev | next [-]

> Until you lose access to the LLM and find your ability has atrophied to the point you have to look up the simplest of keywords.

You can run pretty decent coding models locally, such as Qwen3 Coder on an RTX 4090 GPU, through LM Studio or Ollama with Cline.

It's a good idea even if they give slightly worse results on average, as you can avoid spending expensive tokens on trivial grunt work and use them only for the really hard questions where Claude or ChatGPT 5 will excel.

latexr 21 hours ago | parent [-]

Or you could use your brain, which will actually learn and improve.

realharo 3 days ago | parent | prev [-]

> Until you lose access to the LLM and find your ability has atrophied to the point you have to look up the simplest of keywords.

Realistically, that's probably never going to happen. Expecting it is just like the prepper mindset.

DaSHacka 3 days ago | parent [-]

I imagine this is what everyone says about all SaaS services in their "burn investor money to acquire more users" phase, before they hit the "enshittify and charge more for the service to become profitable" phase.

stavros 4 days ago | parent | prev | next [-]

Yeah, exactly the same for me. It's tiring writing the same CRUD endpoints a thousand times, but that's how useful products are made.

foolserrandboy 4 days ago | parent [-]

I wonder why it’s not the norm to use code generation or some other form of meta programming to handle this boring repetitive work?

stavros 4 days ago | parent | next [-]

Because, just as a carpenter doesn't always make the same table but can still be tired of always making tables, I don't always write the exact same CRUD endpoints, but I am tired of always writing CRUD endpoints.

js8 3 days ago | parent [-]

I think your analogy shows why LLMs are useful, despite being kinda bad. We need some programming tool to which we can say, "like this CRUD endpoint, but different in this and that". Our other metaprogramming tools cannot do that, but LLMs kinda can.

Now that we have identified this problem (programmers need more abstract metaprogramming tools) and a practical engineering solution of sorts (train LLMs on code), it's time for researchers (in the nascent field of metaprogramming, aka applied logic) to recognize this and create some useful theories that will help guide it.

In my opinion, it should lead to adoption of richer (more modal and more fuzzy) logics in metaprogramming (aside from just typed lambda calculus on which our current programming languages are based). That way, we will be able to express and handle uncertainty (e.g. have a model of what constitutes a CRUD endpoint in an application) in a controlled and consistent way.

This is similar to how programming is evolving from imperative with crude types into something more declarative with richer types. (Roughly, the types are the specification and the code is the solution.) With a good set of fuzzy type primitives, it would be possible to define a type of "CRUD endpoint", and then answer the question of whether a given program has that type.
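Fuzzy type primitives like that don't exist yet, but structural typing gives a crude, fully deterministic approximation of the idea today. Here's a sketch in Python (all class and method names are hypothetical), where "has the CRUD endpoint type" is reduced to "exposes the right method names":

```python
from typing import Any, Protocol, runtime_checkable

@runtime_checkable
class CrudEndpoint(Protocol):
    """Structural stand-in for a 'CRUD endpoint' type."""
    def create(self, data: dict) -> Any: ...
    def read(self, item_id: int) -> Any: ...
    def update(self, item_id: int, data: dict) -> Any: ...
    def delete(self, item_id: int) -> None: ...

class UserEndpoint:
    """An in-memory endpoint that happens to fit the shape."""
    def __init__(self) -> None:
        self._db: dict[int, dict] = {}
        self._next_id = 1

    def create(self, data: dict) -> dict:
        item = {"id": self._next_id, **data}
        self._db[self._next_id] = item
        self._next_id += 1
        return item

    def read(self, item_id: int) -> dict:
        return self._db[item_id]

    def update(self, item_id: int, data: dict) -> dict:
        self._db[item_id].update(data)
        return self._db[item_id]

    def delete(self, item_id: int) -> None:
        del self._db[item_id]

# isinstance() here only checks structure (method names exist), not
# behaviour or signatures -- a pale shadow of the fuzzy "is this really
# a CRUD endpoint?" judgement the parent comment is asking for.
print(isinstance(UserEndpoint(), CrudEndpoint))  # True
```

The gap between this check and the intended semantics is exactly the space the richer, fuzzier logics would have to fill.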

Cthulhu_ 3 days ago | parent | prev | next [-]

Because in practice the API endpoint isn't what takes up the time or LOC, but what's underneath. In fact, there are plenty of solutions to e.g. expose your database/data storage through an API directly. But that's rarely what you really want.

iterateoften 3 days ago | parent | prev [-]

Leaky abstractions. Lots of metaprogramming frameworks have tried to do this over the years (remove as much CRUD as possible), but it always turns out there's some edge case your unique program needs that isn't handled, and then it's a mess to hack the metaprogramming layer to add what you need. Think of the hundreds of frameworks that try to add an automatic REST API on top of a database table: then you need permissions, domain-specific logic, special views, etc., and it ends up just easier to write it yourself.
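A toy sketch of how this plays out, in Python (the mini-framework is entirely made up): the auto-generated CRUD layer is tidy right up until the first requirement, permissions, that doesn't fit its shape.

```python
# Hypothetical mini-framework: auto-generate CRUD handlers from a table.
def make_crud(table: dict[int, dict]) -> dict:
    return {
        "list": lambda: list(table.values()),
        "get": lambda i: table.get(i),
        "delete": lambda i: table.pop(i, None),
    }

users = {
    1: {"id": 1, "name": "alice", "role": "admin"},
    2: {"id": 2, "name": "bob", "role": "viewer"},
}
api = make_crud(users)

# ...and then the edge cases arrive. Permissions don't fit the generated
# shape, so you wrap the handlers -- the first of many bolted-on hacks.
def with_permission(handler, allowed_roles: set, current_role: str):
    def wrapped(*args):
        if current_role not in allowed_roles:
            raise PermissionError("forbidden")
        return handler(*args)
    return wrapped

api["delete"] = with_permission(api["delete"], {"admin"}, current_role="viewer")
```

Each new requirement (domain logic, special views) needs another wrapper or escape hatch, and eventually the generated layer is more scaffolding than savings.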

If you can imagine an evolutionary function oscillating over time between no abstraction and total abstraction, the current batch of frameworks like Django is roughly the local maximum that was settled on. Enough to do what you need, but doesn't do too much, so it's easy to customize to your use case.

therein 4 days ago | parent | prev [-]

The lazy reluctance you feel is atrophy in the making. LLMs induce that.

kaptainscarlet 3 days ago | parent [-]

That's my biggest worry, atrophy. But I will cross that bridge when I get to it.

latexr 3 days ago | parent [-]

With atrophy, by the time you get to the bridge you’ll realise it’s too deteriorated to cross and will have to spend a lot of time rebuilding and reinforcing it before you can get to the other side.

Cthulhu_ 3 days ago | parent [-]

That's it: every line of code is an implicit requirement, based on explicit requirements. When you have a codebase that needs to be maintained or replaced, as a developer it's your job to determine which of the implicit requirements in the code are explicit requirements for what the application does.

I do think that in a few years' time, next-generation coding LLMs will read current-generation LLM-generated code to improve on it. The question is whether they're smart enough to ignore the implicit requirements in the code if they aren't necessary for the explicit ones.

(this comment makes sense in my head)

Most if not all of my professional projects have been replacing existing software. In theory, they're like-for-like, feature-for-feature rewrites. In practice, there's an MVP of must-have features, which is usually only a fraction of the features (implicit or explicit) of the application it replaced, with the rewrite being used as an opportunity to re-assess what is actually needed and what is bloat accumulated over time, and of course to redesign and re-architect the application.

That is, rewriting software was an exercise in extracting explicit features from an application.