Rooster61 7 hours ago

I can't relate that much to this. Every time I use AI to write code, I'm constantly fighting a feeling on the back of my neck that I need to look over everything it has done and supplement/alter it with my own code. That ick feeling counteracts the dopamine hit of having a working app after a few minutes of vibe coding, and I don't think that's going anywhere anytime soon.

That said, I have experience. I could absolutely see myself falling into this as a junior or even mid-level dev. I doubt I'd feel that prickle on my neck if it hadn't been scarred by code review lashings from knowledgeable mentors early in my career.

ryandrake 7 hours ago | parent | next [-]

In my experience, Claude only knows how to spew code. Every problem you want it to solve, it translates into "more code" rather than "less code". You have to very closely code review everything it does, otherwise your codebase is going to just grow and grow, and asymptotically approach 100% debt.

I code review everything that Claude produces, and I'd estimate that 90-95% of the time my reaction is: wow, it works, but that's too much code, dude; let's take 3 hours to handhold you through simplifying it until nothing more can be removed.

godelski 2 minutes ago | parent | next [-]

  > let's take 3 hours to handhold you through simplifying it until nothing more can be removed.
This is why I'm unconvinced that AI code makes me faster. Sure, I could produce a million lines an hour but are we running a sprint or a marathon? I don't know about you but I can't sprint a marathon.

I think much of the world of software has become incredibly myopic. I get it: it's a lot harder to win a war than to win a battle, but taking the easy way out usually just defers the costs to your future self. Problem is, those costs accrue interest... Personally? I'm lazy and a cheapskate.

When did programmers stop being lazy in the good way (automating away toil) and start being lazy in the bad way (deferring the work to their future selves)? More importantly, why?

notarobot123 6 hours ago | parent | prev | next [-]

At this point, it's worth asking whether lots of relatively straightforward verbose code is actually significantly worse than the least code necessary for the problem. Obviously, architecture matters. What might matter less is verbosity.

The reason we aimed for minimal "accidental complexity" up to now was directly related to the cost/pain of changing and maintaining that code. Hasn't the economics of maintenance and change shifted so much that accidental complexity isn't actually all that expensive/painful?

I think a bit of refactoring, renaming and restructuring has been helpful for maintainability but recently I've been a little less inclined to worry about the easy readability of function bodies and fine implementation details. It still feels wrong but I can't justify the effort anymore.

torben-friis 4 hours ago | parent | next [-]

>Hasn't the economics of maintenance and change shifted so much that accidental complexity isn't actually all that expensive/painful?

Not while filling the context window causes quality decay and larger bills.

The AI's max cognitive load C is larger than a human's, but if codebase size grows unbounded the minimum context needed for a change will eventually surpass C.

It is also a bad idea to let your codebase become only readable by a machine when we are still in the dark about the role machines and people will take in the future. What if you have to go back to manual dev in a now gargantuan codebase?

bartread 28 minutes ago | parent | prev | next [-]

A problem I’ve found is that when you’re adding functionality or refactoring it often leaves unused methods or types behind, at least with multiple devs working on the same codebase.

This unused code gets further modified as time goes on: new functionality is wired in, or it gets further refactored. Usually it’ll still have tests that cover it. It gives the impression of being live code, but it’s not: it’s zombified.

So you get situations where it does get wired up to something, that something doesn't work, and after some digging you discover it's because it was wired into a path that is never executed.

The fog of relatively recent changes sometimes makes it hard to figure out if the code should be unused or if someone just forgot to hook it in as part of a bigger piece of work. Then you find nobody else is really sure either.

So that extra complexity comes at a cost. It can slow you down or trip you up; catch you by surprise.

binary0010 22 minutes ago | parent | prev | next [-]

I don't think people are talking about the least code possible, just code that isn't incredibly verbose and inefficient like what you get by default from LLMs.

For example, I have a game I've been working on for a few years. I'll ask for something like "implement this simple pseudo-physics system to make the bot follow the character like so... etc."

After some planning and back-and-forth, it returns mostly working code, a little odd on some edge cases.

But since I've hand-coded this thing for years, I could look at it and laugh my ass off: multiple classes, around 1k lines of code, all kinds of crazy non-performant crap.

The exact thing I needed, I reprogrammed in around 5 lines of very simple code that did exactly what I needed, with no edge-case weirdness.
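The kind of 5-line replacement being described might look something like this (a hypothetical sketch, not the commenter's actual game code; the function name, fixed-speed movement, and stop radius are my assumptions):

```python
import math

def follow(bot_x, bot_y, target_x, target_y, speed, dt, stop_radius=1.0):
    """Move the bot toward the target at a fixed speed for one frame.

    Stops inside stop_radius to avoid jittering on top of the target.
    Returns the bot's new (x, y) position.
    """
    dx, dy = target_x - bot_x, target_y - bot_y
    dist = math.hypot(dx, dy)
    if dist <= stop_radius:
        return bot_x, bot_y  # close enough, don't move
    step = min(speed * dt, dist - stop_radius)  # never overshoot
    return bot_x + dx / dist * step, bot_y + dy / dist * step
```

No classes, no allocation per frame, and the edge case at the target is handled explicitly by the stop radius.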

Now, the vibe coders actually ship that shit. I like to read vibe-coded games now and again, and there is no possible way those guys are ever shipping a real game: every single decision is verbose, and the worst performance choices are repeated over and over, everywhere.

Sure it can get you some cute little toy projects, but it will absolutely fall apart if you are trying to make real games.

Don't know about saas apps or whatever. Maybe that stuff doesn't matter at all.

davebren 3 hours ago | parent | prev | next [-]

I've been in a community that makes a lot of cognitive training software. There are some core open source projects that were created without LLMs, but new projects are now mostly created by young people vibe-coding from scratch or forking and modifying the existing projects with an LLM.

The answer to your question is really obvious. The high-effort, manually coded projects stick around; the low-effort vibe-coded projects are quickly forgotten. In the end, LLM-driven programming always brings you to a dead end. For certain things I can predict they're going to fail, because they involve kinds of complexity the models can't, and won't ever be able to, deal with. The code gets so bad that even if an expert programmer wanted to make changes, it either wouldn't be possible or wouldn't be worth it. A lot of the time the vibecoders are so high off the low-effort sense of empowerment that they don't even realize what they made is completely broken.

Well written software has staying power because it can be understood and built upon. Understanding a problem deeply enough to devise an elegant solution even leads to new possibilities and ideas that will never be conceived with a more superficial understanding.

Trasmatta 5 hours ago | parent | prev | next [-]

> Hasn't the economics of maintenance and change shifted so much that accidental complexity isn't actually all that expensive/painful?

I sincerely believe that extensive accidental complexity will ALSO be bad for AI agents. Their quality will diminish as their context windows get filled up with endless amounts of spaghetti and accidental complexity. I feel like we won't fully start feeling those effects for another year or so.

panflute 4 hours ago | parent [-]

True, yet they have Moore's-Law-like growth going for properties like their context windows. I think the larger problem with letting them be verbose is Occam's razor: the more verbose they are, the more variant behavior they will have, and any variation that is not strictly necessary is likely to include incorrect behavior.

joebates 6 hours ago | parent | prev | next [-]

Same. Luckily I enjoy the process of refactoring and deleting code is nearly arousing, so I get the initial dopamine rush of wow this works, followed by the dopamine rush of "wow now this is cleaner and works so much better". Keeps me in touch with the codebase too.

pixelready 6 hours ago | parent | next [-]

Pruning code is to software engineers what cancelling plans is to introverts :)

I think I need to work up a Claude skill named marie-kondo, so that when it breathlessly presents its triumphant solution, I can go “yes, but does it spark joy?” And have it go into an aggressive refactor loop with me.

fragmede 3 hours ago | parent [-]

Sounded like fun so had Claude do one up here: https://github.com/fragmede/marie-kondo-ai-skill

suzzer99 6 hours ago | parent | prev [-]

I question any dev who doesn't get aroused by deleting code.

I just removed an entire graphql endpoint - 500 lines of front and back-end code. I may need to be hosed down.

scubbo 2 hours ago | parent | next [-]

`$JOB` recently introduced the `#red-diffs` Slack channel. I just submitted ` +4 / -28,742`. Pretty proud.

devinprater 4 hours ago | parent | prev | next [-]

Oh my. I may not want to know what selecting all and then pressing Delete would do to you.

fragmede 4 hours ago | parent | prev [-]

Get a room!

denkmoon 25 minutes ago | parent | prev | next [-]

>asymptotically approach 100% debt

What do you do if you're just one dev in an org of 50, who are all pushing more and more code every PR? I'm gonna have to leave, aren't I :(

runeb 5 hours ago | parent | prev | next [-]

A particularly pronounced version of this can often be seen by letting 2 agents review and code in a loop. One agent will find some problems with the code, the other agent will address the review by adding more code.

A good human developer might see that the better way to address the review is to backtrack and pick a different approach. The ai agents seem more prone to getting stuck down bad branches of the decision tree.

layoric 2 hours ago | parent | prev | next [-]

This was a large part of my problem with Claude Code: it is far too eager to get to the code writing. I've found Matt Pocock's skills and Codex to work together quite well. You still have to ensure the design/architecture is being followed, and obviously review carefully, but Codex by default looks for a minimal-change approach a lot more than Claude does or ever did.

borski an hour ago | parent | prev | next [-]

You can also tell it to specifically focus on removing unnecessary code as a pass, and it does that pretty well.

joquarky 29 minutes ago | parent [-]

I do one of these occasionally and it usually finds redundancies and/or inconsistencies to clean up. It's very effective and should be part of any process involving agentic coding.

ddesotto 5 hours ago | parent | prev | next [-]

I think this is more a byproduct of the way these models are architected: "one more token" is usually much more likely than a "STOP". Knowing when to stop, and doing more with less, is also very hard for human developers.

For me, what throws me off most of the time is the structure at the mid level. It usually makes sense at the line level and maybe the project level, but at the file and folder level it loses track of what it already has and what it doesn't need to be so verbose about.

ok_dad 3 hours ago | parent | prev | next [-]

Hey that’s my exact experience. I started coding the interfaces by hand which helps with the architecture but you still have to say, “don’t add a bunch of helpers and stuff, stick to filling in the stubs.”

Then I only have to spend one hour handholding the clanker to get it perfect. I usually do a lot of manual refactoring as well during that time.
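The stub-first workflow described here can be sketched roughly like this (all names are hypothetical; the interface is what you'd write by hand, and the concrete class is the kind of fill-in you'd review from the agent):

```python
from abc import ABC, abstractmethod

# Hand-written interface: the human owns the architecture, and the agent
# is told only to fill in the stubbed methods, without adding helpers.
class HighScoreStore(ABC):
    @abstractmethod
    def record(self, player: str, score: int) -> None: ...

    @abstractmethod
    def top(self, n: int) -> list[tuple[str, int]]: ...

# What a filled-in stub might look like after review:
class InMemoryHighScoreStore(HighScoreStore):
    def __init__(self) -> None:
        self._scores: dict[str, int] = {}

    def record(self, player: str, score: int) -> None:
        # Keep only each player's best score.
        self._scores[player] = max(score, self._scores.get(player, 0))

    def top(self, n: int) -> list[tuple[str, int]]:
        return sorted(self._scores.items(), key=lambda kv: -kv[1])[:n]
```

Because the interface is fixed up front, "don't add a bunch of helpers, stick to filling in the stubs" becomes mechanically checkable in review.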

tailscaler2026 6 hours ago | parent | prev | next [-]

Of course it writes a lot of code. It gets paid per token. Every additional line of technical debt is guaranteed future income.

HoldOnAMinute 6 hours ago | parent | next [-]

Periodically you can also ask it to review the recent changes and see if there is a risk-free way to streamline them.

You can also tell it to periodically summarize the "lessons learned" from the recent session(s)

embedding-shape 6 hours ago | parent | prev | next [-]

Then local models shouldn't suffer from the same problems, but they do. I'd say they just aren't trained in the direction of "less code == better long-term maintainability", rather than there being some grand "increase token usage" conspiracy.

You can certainly steer them a bit to reduce the issue parent talks about, but they still go into that direction whenever they can, adding stuff on top of stuff, piling hacks/shim on top of other hacks/shims, just like many human developers :)

bonesss 6 hours ago | parent [-]

Training data is the masses of code from everyone.

Restrict that data to just the best of the best, the tersest of the tersest, and we’d see better output. I don’t think people are sharing that kinda stuff (Jane Street’s gems stay locked up), and even if they did my presumption is that it’d be too narrow and demanding for general audiences.

Big hopes for the long future, damned to some degree of mediocrity in the near term mass product.

layer8 5 hours ago | parent | prev | next [-]

At some point they’ll introduce “deletion” tokens that cost ten times the regular token price. ;)

enraged_camel 5 hours ago | parent | prev [-]

>> Of course it writes a lot of code. It gets paid per token.

I don't buy it. I think a much more likely reason it leans towards adding code is because deleting code carries inherent risk: it can break things in major ways or minor ways or very visibly or invisibly. Adding new code, on the other hand, is a lot safer: the only parts that can break are those the AI touched inside its own working context. So it doesn't have to go down rabbit holes and potentially create bigger and bigger messes.

fhub an hour ago | parent | prev | next [-]

I'm curious how much you have tuned your CLAUDE.md file. You can get very specific and direct about what your expectations/desires are. You can also have another agent do a critical review with your expectations/desires and feed that back.

joquarky 25 minutes ago | parent [-]

Just be careful to not put too much in there or it won't have enough attention left over for the tasks.

Look at the doc hub pattern if your {agent}.md file is getting more than ~100 lines.

HoldOnAMinute 6 hours ago | parent | prev | next [-]

Here's what I do

Tell it "Do not change any files yet, just listen." Then we discuss the problem. Then I have it write its understanding of the change to a file.

I review that carefully. Then I let it implement. I approve each change after manually looking at it. I already know what it should be doing.

Make smaller changes and check each one carefully before and after.

dvfjsdhgfv 6 hours ago | parent [-]

This is a reasonable approach but has nothing to do with what is being pushed on us from all sides.

wccrawford 6 hours ago | parent | prev | next [-]

I haven't used Claude, just Sweep, Copilot and whatever Jetbrains has. But they've definitely deleted code, not just added it. I know, because they have deleted code that I definitely still needed, and I had to reject those changes and start over on the prompt.

joquarky 39 minutes ago | parent | prev | next [-]

It really does want to make everything overcomplicated.

I end most of my pre-plan prompts with "KISS - Keep it simple" to keep it mostly under control.

I also keep each file under 1000 lines and do a full scan of code and docs for cruft every 20-30 task cycles.

Been working on the same project for six months and glad to say there is minimal bloat.
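A rule like the 1000-line cap is easy to enforce mechanically. A minimal sketch (my own, not the commenter's actual process) that could run in CI or a pre-commit hook:

```python
import pathlib

# Hypothetical enforcement of a per-file line cap; the 1000-line limit
# comes from the comment above, the script itself is an assumption.
LIMIT = 1000

def oversized(root: str, pattern: str = "*.py") -> list[str]:
    """Return paths under root whose line count exceeds LIMIT."""
    bad = []
    for p in sorted(pathlib.Path(root).rglob(pattern)):
        with p.open(encoding="utf-8", errors="ignore") as f:
            if sum(1 for _ in f) > LIMIT:
                bad.append(str(p))
    return bad
```

Failing the build when `oversized(".")` is non-empty keeps the cap from drifting, instead of relying on remembering to ask the agent.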

operatingthetan 6 hours ago | parent | prev | next [-]

A lot of people seem to think if you give the agent a framework and clear plans that it spews "good" code. I doubt it though.

fragmede 4 hours ago | parent | prev [-]

Try codex.

embedding-shape 7 hours ago | parent | prev | next [-]

> after a few minutes of vibe coding

Don't vibe-code. It started as a joke someone coined in the moment, the industry somehow decided it shouldn't be a joke, and now some people think it's a feasible way of developing software. It's not.

Find a better way of working together with the agent, where a human reviews what's important to review and you "outsource" the rest, and you'll end up with code and a design that works the way you'd have programmed it yourself; you just get there faster. I probably end up reviewing maybe 90% of the code the agent writes, but it's still a hell of a lot more pleasant to write/dictate a few prompts than to type tens of thousands of characters while constantly moving between files. Maybe I'm just tired of typing...

Xmd5a 6 hours ago | parent | next [-]

I've been thinking of using Kiczales's Systematic Program Design [0]: write the skeleton, let the AI fill in the blanks.

[0] https://news.ycombinator.com/item?id=16563160
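For readers unfamiliar with it, the HtDP-style recipe behind that course is roughly: signature and purpose first, examples next, body last. A toy sketch in Python (my example, not from the course materials), where everything above the body is the hand-written skeleton and the body is the blank to fill in:

```python
# HtDP-style design recipe, sketched in Python: the signature, purpose
# statement, and examples are written by hand first; the body is filled
# in last, by you or by the AI, and checked against the examples.

def count_vowels(s: str) -> int:
    """Return the number of vowels (aeiou, either case) in s.

    Examples, written before the body existed:
      count_vowels("")       == 0
      count_vowels("banana") == 3
    """
    return sum(1 for ch in s.lower() if ch in "aeiou")
```

The examples double as acceptance tests for whatever the AI puts in the blank.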

redmaple892 4 hours ago | parent | next [-]

I'm curious about how people link ideas and remember them. If you don't mind sharing, what was your process to save and remember this particular post from 2018?

wrout 5 hours ago | parent | prev [-]

Great point. I've found the HtDP/recursion-scheme approach works quite well, even when using some smaller local models. I loved taking Prof. Kiczales's course; the materials are still publicly available:

- https://cs110.students.cs.ubc.ca

- https://cs110.students.cs.ubc.ca/admin/links.html

ActorNightly 10 minutes ago | parent | prev | next [-]

Vibe coding is fine. It's just the next step.

Python for example is vibe coding compared to C. You pip install some library and just use it. Wanna modify a class instance variable and not use the proper accessor function? Sure, go right ahead.

The big thing about vibe coding is, as ironic as it sounds, prompt engineering. You can have tons of slop, but if it works, it works. The key defining factor is what counts as working: namely, defining input/output contracts and automatic checks.

wahnfrieden 7 hours ago | parent | prev [-]

There are tasks where it is appropriate to vibe code

embedding-shape 6 hours ago | parent [-]

Agreed, whenever you're 99% sure you'll throw away the code afterwards.

da_chicken 5 hours ago | parent | next [-]

Yeah, the problem is that "code you're sure to throw away" includes school coursework.

That's always been one of the problems, though. Writing code for class is much less stressful than writing code that other people will rely on.

legulere 5 hours ago | parent | prev | next [-]

I let Claude translate a horribly written VB program that writes some XML data into a PDF form. I didn't even read most of the code until much later; I just checked the end result. The code won't be touched again, and if it is, it will simply be replaced. Some code is foundational and deserves a lot of effort; a lot of code isn't.

Other than that, agentic coding has not really been working that well for me on our main codebase.

wahnfrieden 6 hours ago | parent | prev [-]

Internal/personal tooling, marketing automations, etc. tend to afford it without needing to throw it out after. These are also cases where you can simply rewrite later without having to address a mountain of debt.

If you do this work for a wage and are nearly fully alienated from the value of your labor, I understand the distaste for applying it in any circumstance. You'll care more for your personal experience of the work: how informed you appear when reporting on it to your colleagues, how your boss/colleagues will judge you when an issue arises, how much you feel you are learning from the work, how frustrating it feels to return to items at the behest of others, etc. Vibe coding in these circumstances is unpleasant.

embedding-shape 6 hours ago | parent [-]

I care about building programs that work, do their thing well and are easy to change today and in the future :) I'm not sure where you're extrapolating the rest from, vibe-coding simply isn't for long-lasting software, you need to actually be involved then. Don't get me wrong, most of the code I "produce" today is written by LLMs/agents, but almost none of it is "vibe-coded".

Personal tooling especially: since you want to be able to make small changes over long periods of time, it's important that it makes sense when you come back to it, even if you've forgotten all about it since your last change.

stolen_biscuit 7 hours ago | parent | prev | next [-]

Fully agree. I supplement my game development with AI. Anything novel or interesting I want to do, I need to write the code for myself, otherwise I'm in for a world of hurt. But for the drudgery work that is necessary to invest a lot of time in but boring to actually write, I design a clear architecture and ask AI to do the implementation leg-work. And still you have to go back over and make sure it didn't decide to just create something outlandish. A good recent example is Codex trying to recreate from scratch the behaviour already provided by Area2D in a game I'm making with Godot.

If you try and get AI to do anything meaningful, it will be riddled with footguns and bizarre choices. Maybe if you have hundreds of dollars worth of tokens that might not be the case - but for someone who spends $10 a month, it's just not worth the headache.

Besides, for me these are hobby projects and writing code is still fun, I just make AI write the boring parts (good examples: saving and loading, parsing of data files and settings menu functionality) - but I keep it away from anything that needs a humans judgement to create.

steezeburger 7 hours ago | parent | prev | next [-]

Experience is so so valuable right now. We can guide these agents super well, but I do fear for the juniors as you said. I would like to think I'd use the agents to dive deeper and learn faster. It was pretty rough piecing together solutions from Stack Overflow, various irc channels, Reddit, etc. But also, I cheated on my homework in college and didn't really review the answers, so not sure. Though I pursued programming out of interest and not just to complete a degree. Maybe it would have been different. In any case, I'm glad I came into the LLM era with a lot of experience and failures already.

usefulcat 3 hours ago | parent | next [-]

> Experience is so so valuable right now.

I think traditional coding experience will be a lot more valuable in 5-10 years, given the apparent inverse relationship between that and LLM usage, and the number of people who seem to already be heavily reliant on LLMs today.

The next killer app on the scale of today's LLMs could be an LLM (or call it whatever) that can un-spaghettify the reams of code that are currently being generated by LLMs.

sarreph 6 hours ago | parent | prev | next [-]

I think this is one of the key takes right now. I too have similar experience.

Which way is it going to go?

i) “Seniors” also get superseded by even more capable models that can do all of the things which currently require experience.

ii) Linguistics become the new higher order abstraction (English is the new high-level programming language) _but_ there are different / orthogonal ways of approaching software development than the way we do things now — which “juniors” become more adept at more quickly.

bigstrat2003 6 hours ago | parent [-]

There's also iii) people realize that if the LLM needs that much babysitting, it doesn't actually add value. So they don't use it very much because it is too limited as a tool.

shigawire 6 hours ago | parent | prev | next [-]

I don't think "cheating" is the right way to frame it.

A junior has managers pushing them to do more, faster. You review the code but do you really understand it the same as if you struggled through it? Do you ever build the muscle memory of what works and what doesn't?

It is the thought process that builds skills. I've seen some projects trying to be deliberate about having you learn from the agent as it writes the code, but I'm not sure there is a substitute for struggling and learning by doing.

svachalek 6 hours ago | parent | prev | next [-]

When the chainsaw fails the juniors, they're going to be adding wood chippers and stump grinders. The seniors are going to be out there chipping artisanal wood blocks with a hatchet. You don't need a lot of history to see who you really need to be worried about.

whattheheckheck 6 hours ago | parent [-]

It's not the internet that needs convincing, it's the ones writing the checks.

nomel 6 hours ago | parent | prev | next [-]

> Experience is so so valuable right now.

And probably the least valued it has ever been.

hparadiz 7 hours ago | parent | prev [-]

Metrics, profilers, architecture! Use AI to get back to basics! Wanna prove a technique is better? Use AI to make a benchmark! Learn by experimentation! That is my advice to juniors. At the end of the day AI is writing code and there may be 10 different ways to run something. Only one is the fastest in any given use case.

steezeburger 7 hours ago | parent | next [-]

Yeah I totally agree! I also think people should still be reading as much code as they can. That's always been true imo. It is just hard to keep up with it now because of how much code an LLM can generate for $20/month. I do think we'll move to higher abstractions of course. We won't have to understand code as much as how the systems and components are architected. It would also be nice to use our new efficiency to return to producing truly optimized and fast software.

chowells 7 hours ago | parent | prev [-]

Fastest is usually the wrong metric. But you'd need experience to know that, I suppose...

steezeburger 7 hours ago | parent | next [-]

I think it's just the wrong metric to optimize for _first_. It's not generally a bad metric to keep tabs on though. Make it work, make it right, make it fast? Or something like that.

mikepurvis 7 hours ago | parent | prev [-]

But the point is that LLMs giving you 10x the potential code output doesn't have to mean 10x the code committed. It can also be "let's look at all three possible implementations in more detail and decide which is really the best fit for our situation, and commit that one."

That's still 2-3x the velocity, but you get a better result because you went deeper on the paths-not-taken when designing.

svachalek 6 hours ago | parent | prev | next [-]

I'm a very senior dev (32 years exp) but I've got the process nailed down tight enough with .md documents, skills, review agents, etc, that I don't typically have that feeling or any need to do anything extra.

I don't think this makes me dumb though, I've just moved up stack. Rather than caring about assembly language or source code, I'm focused on requirements, architectural decisions, engineering process, and ever more automation.

skydhash 21 minutes ago | parent | next [-]

Are you the one coding a fix if there's a bug in production? If not, then you do not have the process nailed down.

guelo 6 hours ago | parent | prev [-]

Every engineer turned manager has the same thought, but after a few years they can barely code.

cedws 6 hours ago | parent | prev | next [-]

The code that LLMs produce is just average IMO. I wouldn’t call myself an authority on clean code but I can tell when code is well structured. I prefer my hand written code over Claude or GPT’s every time. I once did an experiment where I generated a spec from a project I’d already written, then had an LLM blindly reimplement it from the spec, and compared code. The LLM’s version looked like vomit.

therealdrag0 5 hours ago | parent [-]

Agree, though in some cases average code is good enough, especially when refactoring it just takes a little attention and more tokens.

98codes 3 hours ago | parent | prev | next [-]

As someone now firmly in the "used to be a developer" part of my career, AI use seems like good old atrophying of the memory muscle of coding. Whether it's because you just don't code anymore at all, or offload that thinking to AI, the effect on you is the same. You start to forget.

Don't have AI do anything you want to stay sharp on.

movpasd 6 hours ago | parent | prev | next [-]

I feel the same way, and yet I would still say I feel AI usage atrophying my thinking skills. I'm less tempted to use it to shortcut whole files, but even just using it to speed up looking up and carefully reading docs, tinkering with a library to understand it when the docs are inadequate, working out the tradeoffs for design decisions... These sound less objectionable, more like simple speedups, but when I _do_ need to do them myself (because the agent refuses to do it properly) I feel the friction so much more keenly. Whether that's just me losing the habit of those specific tasks, or a generalized loss of g-factor, I don't know.

2snakes 3 hours ago | parent [-]

Maybe like a certain blurriness at the edges of those tasks' schemas? Like becoming a manager, or reviewing an intern's work?

dclowd9901 6 hours ago | parent | prev | next [-]

I've been using it mostly to bat away yak shaving rabbit holes one can get into when working on a large and complex project. I work mostly on platform work, which is generally nebulous in its feedback loop and testing. Relegating AI to refactoring and building tools to help me research keeps me focused on solving the actual main problem I'm trying to solve, reduces context switching. I really don't understand people who use it to bat out their main focus. I simply don't trust it at that level.

davnicwil 6 hours ago | parent | prev | next [-]

I think this instinct is intrinsic, and comes from really caring about detail and wanting to fully understand it and own it.

That's what drives it, and I don't really think the extrinsic things about the way you learned (while helpful) have that much bearing on it. It comes from you and you should take credit for it.

I think if you were learning today you'd probably have the same feeling, and do just fine because of it.

zackify 7 hours ago | parent | prev | next [-]

I agree with your sentiment. I've been trying to get from plan -> complete with AI and it's been working very well in a sandbox.

I am trying super hard to give the AI the tools to validate everything.

I finish by opening a draft PR and then I go through doing a deep review myself.

If I didn't already have 10+ years experience, it would be hard to learn and not atrophy with easy shortcuts being so available.

You still need people who know stuff in detail and can own the code... for now

gchamonlive 7 hours ago | parent | prev | next [-]

> I'm constantly fighting a feeling on the back of my neck that I need to look over everything it has done and supplement/alter it with my own code

Can relate, but one thing I do differently: I teach the AI how to clean up after herself in follow-up prompts, sessions, and by refining AGENTS.md. Static code-quality analysis tools are also really good at keeping the agent on its toes.

SarikayaKomzin 7 hours ago | parent | prev | next [-]

I have the same feeling on the back of my neck. I think it’s born from my crippling imposter syndrome, which is maybe a super power now.

randusername 6 hours ago | parent | prev | next [-]

I really enjoy having the AI write the spec then I write the code.

Reviewing code is pain, reviewing requirements and giving feedback feels more productive. I have to confront the full shape of the problem and I usually discover a few cans of worms that make me rethink my approach.

dualvariable 6 hours ago | parent [-]

Yeah, I'll talk out design with AI in a brainstorming session.

Then I'll usually go and implement at least one piece of that. If I get stuck, I'll ask for some help. Then, once I'm happy with it, I'll ask the AI to review what I came up with. Then typically ask it to stamp the pattern around the codebase. And often to just iterate through writing out unit tests.

So I just did this for getting dense output from interpolants for an ODE integrator that I maintain. I did the work to make Tsit5 work by hand. I asked AI to stamp out the same pattern for DP5 and BS3, because it was just gene splicing those changes into a very similar RK integrator. I can review the diffs and see that it faithfully stamped out the same pattern with two prompts and a couple of minutes.
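(For readers unfamiliar with the term: "dense output" means evaluating the solution anywhere inside an accepted step without re-integrating. The sketch below is hypothetical, not the poster's actual code or the method-specific Tsit5/DP5/BS3 interpolants; it shows the simplest third-order version of the pattern, a cubic Hermite interpolant over one step.)

```python
def hermite_dense_output(t0, u0, f0, t1, u1, f1, t):
    """Cubic Hermite dense output over one accepted integrator step.

    Matches the solution value and derivative at both endpoints:
      u(t0) = u0, u'(t0) = f0, u(t1) = u1, u'(t1) = f1.
    This is the standard 3rd-order fallback interpolant for RK methods;
    real Tsit5/DP5 interpolants use extra stage values and tuned
    polynomial coefficients, but are "stamped out" in the same shape.
    """
    h = t1 - t0
    theta = (t - t0) / h  # normalized position in [0, 1] within the step
    return ((1 - theta) * u0 + theta * u1
            + theta * (theta - 1) * ((1 - 2 * theta) * (u1 - u0)
                                     + (theta - 1) * h * f0
                                     + theta * h * f1))
```

Because the interpolant is a cubic matching value and slope at both ends, it reproduces any cubic solution exactly, which makes it easy to spot-check when reviewing AI-stamped copies of the pattern.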

I'm still maintaining pretty strong contact with the codebase by doing a lot of my own programming, and fighting with the design while I'm writing that first piece of it, but then I use the AI to stamp out the mindlessly repetitive stuff.

That just seemed like the obvious way to me to go about programming with AI rather than pure-vibecoding and never touching anything other than prompts.

Also, you probably run out of tokens a lot faster if you're pure-vibecoding.

Plus you should spend some time debugging your own code. Even if AI could find and fix a bug in a minute or three that would take you 20 minutes, it is generally going to be better for you to burn that 20 minutes on trying to fix it before asking for help.

Of course, unlike another poster in this comment thread, I never cheated in college and spent a lot of time on "academic" side projects that weren't part of any course I was taking.

Once the vibecoders and cheats are done spamming a billion lines of AI generated code into industry, there's probably going to be positions for people who can (with AI assistance) sort out the mess and get production stable again.

aerodexis 6 hours ago | parent | prev | next [-]

Reading and writing are related, but separate activities. One's capability to write code can degrade independently of one's ability to review it.

collingreen 6 hours ago | parent | prev | next [-]

Learning from code review lashings is amazing in its effectiveness and minimal blast radius! I'm glad you were able to take that in the easy way.

Scar tissue from production going down and staying down is probably powering those code reviews and I think will be teaching this wave of vibe projects a few hard lessons. I've had to learn a few things the hard way like this and it's as effective as it is painful.

I'm very pro ai-generated-software in the right context. I think being able to vibe out software as needed is awesome and could finally unlock the potential of our computer and data dominated world. I also think we haven't yet learned as a culture where this new thing is different from traditional software and misunderstanding that is where a lot of the pain will be felt.

epolanski 6 hours ago | parent | prev | next [-]

The thing is that you seem to have that luxury to be able to dig more into the problem and scratch that itch.

But the industry is changing around you fast.

If MIT-bred devs were already building crap in faang before, the trend has been getting nothing short of worse across the industry.

Expectations are rising; the field is becoming a rat race over which engineer can output the most mediocre/acceptable/good-enough features in the least time possible.

Let me make this clear: you're in an increasingly rarer bubble where you have a luxury that is disappearing in this industry, plain and simple.

I have the fortune of having stellar devs around me, people that contributed to projects and software you use every day.

They are also outputting orders of magnitude more than they ever did, and none of them is genuinely getting better at the craft, but it is what it is.

cyanydeez 6 hours ago | parent | prev | next [-]

I'm using a local model. The code gen is never fast beyond the first few exchanges of context. As the context grows, it slows down; it's basically its own self-limiting process. When it starts churning, the threshold of lethargy drops and triggers me to 'do it myself'. In particular, I've developed a sense of where it starts doing stupid things, and that's valuable.

There must be an epistemic problem with just how fast these SOTA models run. I don't think it's just that my local model is dumber; I think the speed of token gen trains my brain with different expectations. There's no way it'll just generate hundreds of files by itself. When it can, via an opencode loop with thought files, letting it run for a day is the only way you get that.

onlyrealcuzzo 6 hours ago | parent | prev [-]

> I'm constantly fighting a feeling on the back of my neck that I need to look over everything it has done and supplement/alter it with my own code.

On the flip side, I'm working on stuff FAR more challenging than I would ever be able to do on my own.

My brain is melting because I can barely keep up with learning how to figure out if I'm even doing what I'm trying to do.

AI might be making me a worse coder, but I don't care. If it hasn't "solved" coding now, I'm pretty confident it will long before my career is over. I don't have a job because I can write code - that's a small part of it. I have a job because I can get things to work. Anyone can code things that don't work (especially AI).

AI is certainly making me a far better overall engineer. Instead of spending my time trying to make the compiler happy (or fixing dynamic type errors at runtime), I can spend my time trying to solve substantially harder problems that I would never even dare try without an entire team to back me up (i.e. never).

Coding - imo - is VERY low on the totem pole of engineering skills.

I don't care if the function is pretty. I care if the system is upholding invariants and performing as expected, and there's adequate testing in place to PROVE to me that it ACTUALLY works.

High performance concurrent code has always blurred the line between sorcery and arcana... Go didn't really solve that. Rust/Tokio didn't. Zig didn't. C certainly hasn't.

It might be easier to prove to yourself, if you're the one doing all the writing, but at the end of the day, code is rarely just for you...

You probably should have the same level of proof whether you wrote it yourself and "just trust yourself bro", or whether a Chinese Room wrote it for you.

I feel like I'm living in a Brave New World, and - at least for the time being - I'm enjoying it, even if it feels like I'm sprinting as fast as I can and still unable to keep up.

ferngodfather 6 hours ago | parent [-]

> My brain is melting because I can barely keep up with learning how to figure out if I'm even doing what I'm trying to do.

This is not a good thing. You should understand what your code does. Writing code nobody can understand is not a flex.

onlyrealcuzzo 6 hours ago | parent [-]

> You should understand what your code does.

It is not hard to understand what a line of code does...

It is hard to keep up with solving the problem I'm trying to solve...