g947o 4 hours ago

None of those wild experiments are running on a "real", existing codebase that is more than 6 months old. The thing they don't talk about is that nobody outside these AI companies wants to vibe code with a 10 year old codebase with 2000 enterprise customers.

As soon as you start to work with a codebase that you care about and need to seriously maintain, you'll see what a mess these agents make.

GoatInGrey 34 minutes ago | parent | next [-]

Even on codebases within the half-year age group, these LLMs often produce nasty (read: ungodly verbose) implementations that become a maintainability nightmare, even for the LLMs that wrote it all in the first place. I know this because we've had a steady trickle of clients and prospects expressing "challenges around maintainability and scalability" as they move toward "production readiness". They then, of course, ask if we can implement "better performing coding agents", as if improved harnessing or similar guardrails could solve what is, in my view, a deeper problem.

The practical and opportunistic response is to tell them "tough cookies" and watch the problems steadily compound into more lucrative revenue opportunities for us. I really have no remorse for these people, because half of them were explicitly warned against this approach upfront but were psychologically incapable of adjusting expectations or delaying LLM deployment until the technology proved itself. If you've ever had your professional opinion dismissed by the same people who regard you as the SME, you understand my pain.

I suppose I'm just venting now. While we are now extracting money from the dumbassery, the client entitlement and the management of their emotions that often come with putting out these fires never make for a good time.

krastanov 4 hours ago | parent | prev | next [-]

I maintain serious code bases and I use LLM agents (and agent teams) plenty -- I just happen to review the code they write, I demand they write the code in a reviewable way, and I use them mostly for menial tasks that would otherwise be unpleasant timesinks I have to handle myself. There are many people like me who just quietly use these tools to automate the boring chores of dealing with mature production code bases. We are quiet because this is boring day-to-day work.

E.g. I use these tools to clean up or reorganize old tests (with coverage and diff viewers catching things I might miss), update documentation with cross links (with documentation linters catching errors I miss), convert tests into benchmarks running as part of CI, make log file visualizers, and more.
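
For instance, the test-to-benchmark conversions look roughly like this (a minimal sketch with made-up names, assuming pytest plus the pytest-benchmark plugin):

  # Stand-in for real project code.
  def parse_log_line(line):
      level, _, msg = line.partition(" ")
      return level, msg

  # Before: a plain correctness test.
  def test_parse_log_line():
      assert parse_log_line("WARN boot") == ("WARN", "boot")

  # After: the same call, timed via pytest-benchmark's `benchmark`
  # fixture so it runs in CI next to the tests.
  def test_parse_log_line_perf(benchmark):
      result = benchmark(parse_log_line, "WARN boot")
      assert result == ("WARN", "boot")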

These tools are amazing for dealing with the long tail of boring issues that you never get to, and when used in this fashion they actually abruptly increase the quality of the codebase.

g947o 3 hours ago | parent | next [-]

It's not called vibe coding then.

jmalicki 2 hours ago | parent [-]

Oh you made vibe coding work? Well then it's not vibe coding.

But any time someone mentions using AI without proof of success? Vibe coding sucks.

GoatInGrey 28 minutes ago | parent | next [-]

No, what the other commenter described is narrowly scoped delegation to LLMs paired with manual review (which sounds dreadfully soul-sucking to me), not wholesale "write feature X, write the unit tests, and review the implementation for me". The latter is vibe-coding.

krastanov a few seconds ago | parent | next [-]

Reviewing a quick translation of a test to a benchmark (or other menial coding tasks) is way less soul-sucking than doing the menial coding yourself. Boring soul-sucking tasks are an important, thankless part of OSS maintenance.

I concur it is different from what you call vibecoding.

unshavedyak 3 minutes ago | parent | prev [-]

Sidenote, i do that frequently. I also do varying levels of review, i.e. more/less vibe [1]. It is soul sucking to me.

Despite being soul sucking, I do it because A: It lets me achieve goals despite lacking energy/time for projects that don't require the level of commitment or care that i provide professionally. B: it reduces how much RSI i experience. Typing is a serious concern for me these days.

To mitigate the soul sucking i've been side projecting better review tools, which frankly i could use for work anyway, as reviewing PRs from humans could be better too. Also, in line with review tools, i think a lot of the soul sucking is having to provide specificity, so i hope to integrate LLMs into the review tool and speak to it more naturally. E.g. i believe some IDEs (vscode? no idea) can let Claude/etc see the cursor, so you can say "this code looks incorrect" without needing to be extremely specific. A suite of tooling that improves this code sharing to Claude/etc would also reduce the inane specificity that seems to be required to make LLMs even remotely reliable for me.
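
As a rough sketch of the cursor idea (all names hypothetical, just the shape of the tooling i have in mind):

  # Turn a vague remark plus the editor's cursor position into a
  # specific prompt, so i don't have to type the specificity myself.
  def build_review_prompt(source: str, cursor_line: int, remark: str,
                          context: int = 10) -> str:
      lines = source.splitlines()
      lo = max(0, cursor_line - context)
      hi = min(len(lines), cursor_line + context)
      snippet = "\n".join(f"{n + 1}: {text}"
                          for n, text in enumerate(lines[lo:hi], start=lo))
      return (f"Reviewer remark at line {cursor_line + 1}: {remark}\n"
              f"Surrounding code:\n{snippet}\n"
              "Explain what, if anything, is wrong here.")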

[1]: though we don't seem to have a term for varying amounts of vibe. Some people consider vibe to be 100% complete ignorance of the architecture/code being built. In which case imo nothing i do is vibe, which is absurd to me but i digress.

lukeschlather 36 minutes ago | parent | prev | next [-]

It's not vibe coding if you personally review all the diffs for correctness.

EnPissant an hour ago | parent | prev | next [-]

> According to Karpathy, vibe coding typically involves accepting AI-generated code without closely reviewing its internal structure, instead relying on results and follow-up prompts to guide changes.

What you are doing is by definition not vibe coding.

dingnuts an hour ago | parent | prev [-]

[dead]

peyton 4 hours ago | parent | prev [-]

Yeah esp. the latest iterations are great for stuff like “find and fix all the battery drainers.” Tests pass, everyone’s happy.

hp197 2 hours ago | parent [-]

(rhetorical question) You work at Apple? :p

JPKab 4 hours ago | parent | prev | next [-]

I work at a company with approximately $1 million in revenue per engineer and multiple 10+ year old codebases.

We use agents very aggressively, combined with beads, tons of tests, etc.

You treat them like any developer, and review the code in PRs, provide feedback, have the agents act, and merge when it's good.

We have gained tremendous velocity and have been able to tackle far more out of the backlog that we'd been forced to keep in the icebox before.

This idea of setting the bar at "agents work without code reviews" is nuts.

groundzeros2015 3 hours ago | parent [-]

Why are you speaking with experience and authoritative framing about a technology we've been using for less than 6 months?

kasey_junk 3 hours ago | parent | next [-]

The person they are responding to asserted an authoritative framing that isn't true.

I know people have emotional responses to this, but if you think people aren’t effectively using agents to ship code in lots of domains, including existing legacy code bases, you are incorrect.

Do we know exactly how to do that well? Of course not; we still fruitlessly argue about how humans should write software. But there is a growing body of techniques for agent-first development, and a lot of those techniques are naturally converging because they work.

groundzeros2015 3 hours ago | parent [-]

I think programming effectiveness is inherently tied to the useful life of software, and we will need to see that play out.

This is not to suggest that AI tools have no value, but that "I just have agents writing code and it works great!" has yet to hit its test.

garciasn 2 hours ago | parent [-]

The views I see often shared here are typical of those in the trenches of the tech industry: conservative.

I get it; I do. It's rapidly challenging the paradigm we've set up over the years in a way that's incredibly jarring, but this is going to be our new reality, and if you don't adapt you're going to be left behind in MOST industries; highly regulated industries are a different beast.

So, instead of just dismissing this out of hand, figure out the best ways to integrate agents into your and your teams'/companies' workstreams. It will accelerate the work and change your role from what it is today to something different; something that takes time and experience to work with.

benterix 2 hours ago | parent | next [-]

> I get it; I do. It's rapidly challenging the paradigm we've set up over the years in a way that's incredibly jarring,

But that's not the argument. The argument is that these tools produce lower-quality output, and checking this output often takes more time than doing the work oneself. It's not that "we're conservative and afraid of change"; heck, you're talking to a crowd that used to celebrate a new JS framework every week!

There is a push to accept lower quality and to treat it as a new normal, and people who appreciate high-quality architecture and code express their concern.

thesz 2 hours ago | parent | prev | next [-]

  > It will accelerate the work and change your role from what it is today to something different;
We have yet to see if different is good.

My short experience with LLM reviewing my code is that LLM's output is overly explanatory and it slows me down.

  > something that takes time and experience to work with.
So you invite us to participate in the sunk cost fallacy.

groundzeros2015 2 hours ago | parent | prev [-]

I don’t doubt that companies are willing to try low quality things. They play with these processes all the time. Maybe the whole industry will try it.

I’m available for consulting when you need something done correctly.

JPKab 2 hours ago | parent | prev | next [-]

6 months?

I've been using LLMs to augment development since early December 2023, and I've expanded the scope and complexity of the changes I make as the models have improved. Before beads existed, I used a folder of markdown files for externalized memory.

Just because you were late to the party doesn't mean all of us were.

2 hours ago | parent | next [-]
[deleted]
2 hours ago | parent | prev [-]
[deleted]
dboreham 2 hours ago | parent | prev [-]

If you hired a person six months ago and in that time they'd produced a ton of useful code for your product, wouldn't you say with authoritative framing that their hiring was a good decision?

groundzeros2015 2 hours ago | parent [-]

It would, but I haven’t seen that. What I’ve seen is a lot of people setting up cool agent workflows which feel very productive, but aren’t producing coherent work.

This may be a result of me using the tools poorly, or more likely of me evaluating merits that matter less than I think. But I don't think we can tell yet; people only just invented these agent workflows and we haven't seen them play out.

Note that the situation was not that different before LLMs. I've seen PMs with all the tickets set up, engineers making PRs with reviews, etc., and no progress being made on the product. The process can be emulated without substantive work.

rco8786 4 hours ago | parent | prev | next [-]

That is also my experience. It doesn't even have to be a 10 year old codebase; even a 1 year old one. Any codebase that is a serious product deployed in production with customers who rely on it.

Not to say that there's no value in AI written code in these codebases, because there is plenty. But this whole thing where 6 agents run overnight and "tada" in the morning with production ready code is...not real.

zerkten 4 hours ago | parent [-]

I don't believe that devs are the audience. They are pushing this to decision makers, whom they want to think that the state of the art is further ahead than it is. These folks then think about how helpful it'd be to have even 20% of that capability. When there is so much noise in the market, and everyone seems to be overtaking everyone else, this kind of approach is the only one that gets attention.

Similarly, a lot of the AGI-hype comments exist to expand the scope of the space. It's not real, but it helps to position products and win arguments based on hypotheticals.

pjc50 3 hours ago | parent | prev | next [-]

Also, anything that doesn't look like a SaaS app does very badly. We ran an internal trial on embedded firmware and concluded the results were unsalvageably bad. It doesn't help that the embedded environment is also very unfriendly to standard testing techniques.

JeremyNT 2 hours ago | parent | prev | next [-]

I feel like you could have correctly stated this a few months ago, but the way this is "solved" is by multiple agents that babysit each other and review each other's output - it's unreasonably effective.

You can get extremely good results assuming your spec is actually correct (and you're willing to chew through massive quantities of tokens / wait long enough).
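
The babysitting loop is roughly this shape (a minimal sketch; call_model is a placeholder for whatever provider API you use, and the APPROVED convention is made up):

  def call_model(prompt: str) -> str:
      """Placeholder: wire this to your LLM provider of choice."""
      raise NotImplementedError

  # One agent writes code, a second pass reviews it against the
  # spec, and the writer revises until the reviewer approves or
  # we run out of rounds.
  def generate_with_review(spec: str, max_rounds: int = 3) -> str:
      code = call_model(f"Implement this spec:\n{spec}")
      for _ in range(max_rounds):
          review = call_model(
              "Review this code against the spec. Reply APPROVED "
              f"if it is correct.\n\nSpec:\n{spec}\n\nCode:\n{code}")
          if "APPROVED" in review:
              break
          code = call_model(
              "Revise the code to address this review.\n\n"
              f"Review:\n{review}\n\nCode:\n{code}")
      return code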

ldng 2 hours ago | parent [-]

And unreasonably expensive unless you are Big Corp. Die startups, die. Welcome to our Cyberpunk overlords.

whateveracct 36 minutes ago | parent [-]

Companies will just shift money from salaries to their Anthropic bill - what's the problem?

3 hours ago | parent | prev [-]
[deleted]