rglover 3 hours ago

A significant number of developers and businesses are going to have an absolutely brutal rude awakening in the not too distant future.

You can build things this way, and they may work for a time, but you don't know what you don't know (and experience teaches you that you only find most stuff by building/struggling; not sipping a soda while the AI blurts out potentially secure/stable code).

The hubris around AI is going to be hard to watch unwind. What the moment is I can't predict (nor do I care to), but there will be a shift when all of these vibe code only folks get cooked in a way that's closer to existential than benign.

Good time to be in business if you can see through the bs and understand how these systems actually function (hint: you won't have much competition soon as most people won't care until it's too late and will "price themselves out of the market").

mark242 2 hours ago | parent | next [-]

I would argue that it's going to be the opposite. At re:Invent, one of the popular sessions was about creating a trio of SRE agents: one that did nothing but read logs and report errors, one that analyzed and triaged the errors and proposed fixes, and one that did the work and submitted PRs to your repo.

Then, as part of the session, you would artificially introduce a bug into the system, then run into the bug in your browser. You'd see the failure happen in browser, and looking at Cloudwatch logs you'd see the error get logged.

Two minutes later, the SRE agents had the bug fixed and ready to be merged.
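For concreteness, here is a stub of how such a trio might be wired together. Everything in it is hypothetical: the real session used LLM-backed agents on AWS tooling, while this sketch replaces each agent's "brain" with a trivial heuristic just to show the pipeline shape.

```python
# Hypothetical sketch of the three-agent SRE loop described above.
# In the real session each function body would be an LLM call; here
# they are stubbed heuristics, and all names are made up.

def log_watcher(log_lines):
    """Agent 1: read logs and report anything that looks like an error."""
    return [line for line in log_lines if "ERROR" in line]

def triage(errors):
    """Agent 2: analyze each error and propose a fix (stubbed)."""
    return [
        {"error": e,
         "proposed_fix": f"guard against: {e.split('ERROR:')[-1].strip()}"}
        for e in errors
    ]

def submit_prs(proposals):
    """Agent 3: turn each proposal into a PR payload for human review."""
    return [{"title": f"fix: {p['proposed_fix']}", "body": p["error"]}
            for p in proposals]

logs = [
    "INFO: request handled in 12ms",
    "ERROR: NoneType has no attribute 'user_id'",
]
prs = submit_prs(triage(log_watcher(logs)))
print(prs[0]["title"])
```

The interesting property is that each stage only consumes the previous stage's structured output, so swapping a stub for a real model call doesn't change the plumbing.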

"understand how these systems actually function" isn't incompatible with "I didn't write most of this code". Unless you are only ever a single engineer, your career is filled with "I need to debug code I didn't write". What we have seen over the past few months is a gigantic leap in output quality, such that re-prompting happens less and less. Additionally, "after you've written this, document the logic within this markdown file" is extremely useful for your own reference and for future LLM sessions.

AWS is making a huge, huge bet on this being the future of software engineering, and even though they have their weird AWS-ish lock-in for some of the LLM-adjacent practices, it is an extremely compelling vision, and as these nondeterministic tools get more deterministic supporting functions to help their work, the quality is going to approach and probably exceed human coding quality.

dasil003 an hour ago | parent | next [-]

I agree with both you and the GP. Yes, coding is being totally revolutionized by AI, and we don't really know where the ceiling will be (though I'm skeptical we'll reach true AGI any time soon), but I believe there is still an essential element of understanding how computer systems work that is required to leverage AI in a sustainable way.

There is some combination of curiosity about inner workings and precision of thought that has always been essential to becoming a successful engineer. In my very first CS 101 class I remember the professor alluding to two hurdles (pointers and recursion) that a significant portion of the class would not be able to surpass, and they would change majors. Throughout the subsequent decades I saw this pattern again and again with junior engineers, bootcamp grads, etc. There are some people who, no matter how hard they work, can't grok abstraction and unlock a general understanding of what computing makes possible.

With AI you don't need to know syntax anymore, but to write the right prompts to maintain a system and (crucially) the integrity of its data over time, you still need this understanding. I'm not sure how the AI-native generation of software engineers will develop it without writing code hands-on, but I am confident they will figure it out, because I believe it to be an innate, often pedantic, thirst for understanding that some people have and some don't. This is the essential quality for succeeding in software, both in the past and in the future. Although vibe coding lowers the barrier to entry dramatically, there is a brick wall looming just beyond the toy app/prototype phase for anyone without a technical mindset.

athrowaway3z 17 minutes ago | parent [-]

I can see why people are skeptical devs can be 10x as productive.

But something I'd bet money on is that devs are 10x more productive at using these tools.

pragmatic 2 hours ago | parent | prev | next [-]

Now run that loop 1000 times.

What does the code/system look like?

It is going to be more like evolution (fit to environment) than engineering (fit to purpose).

It will be fascinating to watch nonetheless.

skybrian 2 hours ago | parent | next [-]

Sure, if all you ask it to do is fix bugs. You can also ask it to work on code health things like better organization, better testing, finding interesting invariants and enforcing them, and so on.

It's up to you what you want to prioritize.

xtracto 4 minutes ago | parent | next [-]

I agree, but want to interject that "code organization" won't matter for long.

Programming languages were made for people. I'm old enough to have programmed in Z80 and 8086 assembler, and I've been through plenty of programming languages over my career.

But once building systems becomes prompting an agent to build a flow that reads these two types of Excel files, cleans them, filters them, merges them, and outputs the result for the web (oh, and make it interactive and highly available)...

Code won't matter. You'll have other agents that check that the system is built right, you'll have agents that test the functionality and agents that ask and propose functionality and ideas.
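The prompted flow described above, sketched in plain Python. The data, column names, and threshold are all made up, and in-memory dicts stand in for the two Excel files; the point is only the clean → filter → merge → render shape an agent would be asked to produce.

```python
# Stand-in for the prompted flow: two "spreadsheets" as lists of dicts.
orders = [
    {"customer_id": "1", "amount": " 40 "},
    {"customer_id": "2", "amount": "15"},
    {"customer_id": "", "amount": "99"},   # dirty row, dropped below
]
customers = [
    {"customer_id": "1", "name": "Ada"},
    {"customer_id": "2", "name": "Grace"},
]

# Clean: strip whitespace, coerce types, drop rows missing the join key.
orders = [
    {"customer_id": o["customer_id"].strip(), "amount": int(o["amount"].strip())}
    for o in orders if o["customer_id"].strip()
]

# Filter: keep orders above an (arbitrary) threshold.
orders = [o for o in orders if o["amount"] >= 20]

# Merge on customer_id.
by_id = {c["customer_id"]: c["name"] for c in customers}
merged = [{"name": by_id[o["customer_id"]], "amount": o["amount"]}
          for o in orders]

# Output for the web: a minimal HTML table.
rows = "".join(f"<tr><td>{m['name']}</td><td>{m['amount']}</td></tr>"
               for m in merged)
html = f"<table>{rows}</table>"
print(html)
```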

Most likely the Programming language will become similar to the old Telegraph texts (telegrams) which were heavily optimized for word/token count. They will be optimized to be LLM grokable instead of human grokable.

It's going to be amazing.

smashed an hour ago | parent | prev | next [-]

I have some healthy skepticism on this claim though. Maybe, but there will be a point of diminishing returns where these refactors introduce more problems than they solve and just cause more AI spending.

Code is always a liability. More code just means more problems. There has never been a code generating tool that was any good. If you can have a tool generate the code, it means you can write something on a higher level of abstraction that would not need that code to begin with.

AI can be used to write this better quality / higher level code. That's the interesting part to me. Not churning out massive amounts of code, that's a mistake.

pragmatic an hour ago | parent | prev [-]

You're assuming that scrum/agile/management won't take this over?

What stakeholder is prioritizing any of those things and paying for it out of their budget?

Code improvement projects are the White Whale of software engineering: obsessed over, but rarely worth it from a business point of view.

finebalance 2 hours ago | parent | prev [-]

"evolution (fit to environment) than engineering (fit to purpose)."

Oh, I absolutely love this lens.

seba_dos1 an hour ago | parent | prev | next [-]

> Unless you are only ever a single engineer, your career is filled with "I need to debug code I didn't write".

That's the vast majority of my job and I've yet to find a way to have LLMs not be almost but not entirely useless at helping me with it.

(also, it's filled with that even when you are a single engineer)

pphysch 2 hours ago | parent | prev [-]

Automatically solving software application bugs is one thing, recovering stateful business process disasters and data corruption is entirely another thing.

Customer A is in a totally unknown database state due to a vibe-coded bug. Great, the bug is fixed now, but you're still f-ed.

kaydub a minute ago | parent | prev | next [-]

The hubris is with the devs that think like you actually.

geophile 2 hours ago | parent | prev | next [-]

The article gets at this briefly and moves on: "I can do all of this with the experience on my back of having laid the bricks, spread the mortar, cut and sewn for twenty years. If I don’t like something, I can go in, understand it and fix it as I please, instructing once and for all my setup to do what I want next time."

I think this dynamic applies to any use of AI, or indeed any form of outsourcing. You can outsource a task effectively if you understand the complete task and its implementation very deeply. But if you don't, then you don't know whether what you are getting back is correct, maintainable, or scalable.

SoftTalker 2 hours ago | parent | next [-]

> instructing once and for all my setup to do what I want next time.

This works up to a point, but eventually your "setup" gets complicated, some of your demands conflict, or have different priorities, and you're relying on the AI to sort it out the way you expect.

eqvinox 2 hours ago | parent | prev [-]

> any use of AI, or indeed, any form of outsourcing

Oh that's a good analogy/categorization, I hadn't thought about it in those terms yet. AI is just the next cheaper thing down from the current southeast asian sweatshop labor.

(And you generally get what you pay for.)

FeteCommuniste 2 hours ago | parent | prev | next [-]

I don't think there's going to be any catastrophic collapse but I predict de-slopping will grow to occupy more and more developer time.

Who knows, maybe soon enough we'll have specially trained de-slopper bots, too.

HighGoldstein 2 hours ago | parent [-]

> Who knows, maybe soon enough we'll have specially trained de-slopper bots, too.

Fire, meet oil.

woeirua 2 hours ago | parent [-]

The naysayers said we'd never even get to this point. It's far more plausible to me that AI will advance enough to de-slopify our code than that there will be some karmic reckoning in which the graybeards emerge on top again.

omnicognate an hour ago | parent [-]

What point have we reached? All I see is HN drowning in insufferable, identical-sounding posts about how everything has changed forever. Meanwhile at work, in a high stakes environment where software not working as intended has actual consequences, there are... a few new tools some people like using and think they may be a bit more productive with. And the jury's still out even on that.

The initial excitement of LLMs has significantly cooled off, the model releases show rapidly diminishing returns if not outright equilibrium and the only vibe-coded software project I've seen get any actual public use is Claude Code, which is riddled with embarrassing bugs its own developers have publicly given up on fixing. The only thing I see approaching any kind of singularity is the hype.

I think I'm done with HN at this point. It's turned into something resembling moltbook. I'll try back in a couple of years when maybe things will have changed a bit around here.

beoberha 22 minutes ago | parent | next [-]

> The initial excitement of LLMs has significantly cooled off, the model releases show rapidly diminishing returns if not outright equilibrium and the only vibe-coded software project I've seen get any actual public use is Claude Code, which is riddled with embarrassing bugs its own developers have publicly given up on fixing. The only thing I see approaching any kind of singularity is the hype.

I am absolutely baffled by this take. I work in an objectively high stakes environment (Big 3 cloud database provider) and we are finally (post Opus 4.5) seeing the models and tools become good enough to drive the vast majority of our coding work. Devops and livesite is a harder problem, but even there we see very promising results.

I was a skeptic too. I was decently vocal about AI working for single devs but could never scale to large, critical enterprise codebases and systems. I was very wrong.

sph 25 minutes ago | parent | prev | next [-]

> I think I'm done with HN at this point.

On the bright side, this forum is gonna be great fun to read in 2 or 3 years, whether the AI dream takes off, or crashes to the ground.

kuboble an hour ago | parent | prev | next [-]

I am not in a high-stakes environment and work on one-person-sized projects.

But for months I have almost stopped writing actual lines of code myself.

The frequency and quality of my releases have improved. I've gotten very good feedback on those releases from my customer base, and the number of bugs reported is no larger than in code written by me personally.

The only downside is that I don't know the code inside out anymore; even if I read it all, it feels like code written by a co-worker.

pengaru an hour ago | parent | prev [-]

It's no coincidence HN is hosted by a VC. VC-backed tech is all about boom-bust hype cycles analogous to the lever pull of a giant slot machine.

divbzero 11 minutes ago | parent | prev | next [-]

An HN post earlier this week declared that “AI is killing B2B SaaS”:

https://news.ycombinator.com/item?id=46888441

Developers and businesses with that attitude could experience a similarly rude awakening.

harrisi an hour ago | parent | prev | next [-]

The aspect of "potentially secure/stable code" is very interesting to me. There's an enormous amount of code that isn't secure or stable already (I'd argue virtually all of the code in existence).

This has already been a problem, and there are no real ramifications for it. Even something like Cloudflare stopping a significant amount of Internet traffic for any amount of time is not (as far as I know) independently investigated. Nobody potentially faces charges. In other civil engineering endeavors, however, there absolutely is accountability: regular checks, government agencies auditing systems, penalties for causing harm, etc. are expected in those areas.

LLM-generated code is the continuation of the bastardization of software "engineering." Now the situation is not only that nobody is accountable, but a black box cluster of computers is not even reasonably accountable. If someone makes a tragic mistake today, it can be understood who caused it. If "Cloudflare2" comes about which is all (or significantly) generated, whoever is in charge can just throw their hands up and say "hey, I don't know why it did this, and the people that made the system that made this mistake don't know why it did this." It has been and will continue to be very concerning.

feastingonslop 43 minutes ago | parent [-]

Nobody is saying to skip testing the software. Testing is still important. What the code itself looks like, isn’t.

fennecbutt 13 minutes ago | parent | prev | next [-]

Business has been operating on a management/executive culture for many decades now.

These people get paid millions a year to fly around and shake hands with people aka shit fuck all.

At times in the past I have worked on projects that were rushed out and didn't do a single thing that they were intended to do.

And you know what management's response was? They loved that shit. Ooooh it looks so good, that's so cool, well done. Management circle-jerking each other, as if using everyone else's shafts as handles to climb the rungs of the ladder.

It's just... it kills me that this thing I love, technology/engineering/programming, the things responsible for many of the best parts of our modern lives, has been twisted to create some of the worst things in our modern lives in the pursuit of profit. And the people in charge? They don't even care if it works or not; they just want that undeserved promotion for a job that a Simpsons-esque fucking drinking bird is capable of.

I just want to go back to the mid 2000s. ;~;

giancarlostoro 2 hours ago | parent | prev | next [-]

I find that instructing AI to use frameworks yields better results and sets you up for a better outcome.

I use Claude Code with both Django and React, which it's surprisingly good with. I'd rather use software that's tried and tested. The only time I let it write its own is when I want ultra-minimal CSS.

kenjackson 2 hours ago | parent [-]

This. For areas where you can use tried-and-tested libraries (or tools in general), LLMs will generate better code when they use them.

In fact, LLMs will be better than humans at learning new frameworks. It could end up being the opposite: frameworks and libraries become more important with LLMs.

nottorp an hour ago | parent | next [-]

> In fact, LLMs will be better than humans in learning new frameworks.

LLMs don't learn? The neural networks are trained just once before release and it's a -ing expensive process.

Have you tried using one on your existing code base, which is basically a framework for whatever business problem you're solving? Did it figure it out automagically?

They know react.js and nest.js and next.js and whatever.js because they had humans correct them and billions of lines of public code to train on.

giancarlostoro 28 minutes ago | parent [-]

If it's on GitHub, it will eventually cycle into the training data. I have also seen Claude pull down code from GitHub to look at.

fauigerzigerk 4 minutes ago | parent [-]

Wouldn't there be a chicken and egg problem once humans stop writing new code directly? Who would write the code using this new framework? Are the examples written by the creators of the framework enough to train an AI?

eqvinox 2 hours ago | parent | prev | next [-]

> LLMs will be better than humans in learning new frameworks.

I don't see a basis for that assumption. They're good at things like Django because there is a metric fuckton of existing open-source code out there that they can be trained on. They're already not great at less popular or even fringe frameworks and programming languages. What makes you think they'll be good at a new thing that there are almost no open resources for yet?

catlifeonmars an hour ago | parent | prev | next [-]

LLMs famously aren’t that good at using new frameworks/languages. Sure they can get by with the right context, but most people are pointing them at standard frameworks in common languages to maximize the quality of their output.

tappio 23 minutes ago | parent [-]

This is not my experience any longer. With a properly set up feedback loop and the framework's documentation, it does not seem to matter much whether they are working with completely novel stuff or not. Of course, when that is not available they hallucinate, but who even does that anymore? Anyone can see that LLMs are just glorified auto-complete machines, so you really have to put a lot of work into the environment they operate in and into quick feedback loops. (Just like with 90% of developers made of flesh...)

lenkite 2 hours ago | parent | prev [-]

How will LLMs become better than humans at learning new frameworks when automated/vibe coders never manually write code against those new frameworks?

tenthirtyam an hour ago | parent | prev | next [-]

My expectation is that there'll never be a single bust-up moment, no line-in-the-sand beyond which we'll be able to say "it doesn't work anymore."

Instead, agent-written code will get more and more complex, requiring more and more tokens (and NPU/GPU/RAM) to create/review/debug/modify, and will rapidly pass beyond any hope of human understanding, even for relatively simple projects (e.g. a banking app on your phone).

I wonder, however, whether the complexity will grow slower or faster than Moore's law and our collective ability to feed the AIs.

layer8 43 minutes ago | parent [-]

Maybe software systems will become more like biological organisms. Huge complexity with parts bordering on chaos, but still working reasonably well most of the time, until entropy takes its course.

MrDarcy 2 hours ago | parent | prev | next [-]

This comment ignores the key insight of the article. Design is what matters most now. Design is the difference between vibe coding and software engineering.

Given a good design, software engineers today are 100x more productive. What they produce is high quality due to the design. Production is fast and cheap due to the agents.

You are correct: there will be a reckoning for large-scale systems that are vibe coded. The author is also correct: well-designed systems no longer need frameworks or vendors, and they are unlikely to fail, because they were well designed from the start.

goostavos 2 hours ago | parent [-]

>software engineers today are 100x more productive

Somebody needs to explain to my lying eyes where these 100xers are hiding. They seem to live in comments on the internet, but I'm not seeing the teams around me increase their output by two orders of magnitude.

MrDarcy an hour ago | parent [-]

They are the people who have the design sense of someone like Rob Pike but lack his coding skill. These people are now 100x more capable than they were previously.

devsda 32 minutes ago | parent | next [-]

This is how you get managers saying

"we have taken latest AI subscription. We expect you to be able to increase productivity and complete 5/10/100 stories per sprint from now on instead of one per sprint that we planned previously".

seabrookmx 8 minutes ago | parent | prev | next [-]

Citation needed. For both the existence of said people (how do you develop said design sense without a ton of coding experience?) and that they are 100x more productive.

vips7L an hour ago | parent | prev [-]

No they’re not.

drcode 2 hours ago | parent | prev | next [-]

I'm no fan of AI in terms of its long-term consequences, but being able to "just do things" with the aid of AI tools, diving head first into the most difficult programming projects, is going to improve human programming skills worldwide to levels never before imaginable.

pragmatic 2 hours ago | parent [-]

How would it improve skills?

Does driving a car improve your running speed?

drcode an hour ago | parent | next [-]

I have to stretch your analogy in weird ways to make it function within this discussion:

Imagine two people who have only sat in a chair their whole lives. Then, you have one of them learn how to drive a car, whereas the other one never leaves the chair.

The one who learned how to drive a car would then find it easier to learn how to run, compared to the person who had to continue sitting in the chair the whole time.

FeteCommuniste an hour ago | parent | prev | next [-]

I've found AI handy as a sort of tutor sometimes, like "I want to do X in Y programming language, what are some tools / libraries I could use for that?" And it will give multiple suggestions, often along with examples, that are pretty close to what I need.

literalAardvark an hour ago | parent | prev [-]

No, but it does improve your ability to get to classes after work

straydusk an hour ago | parent | prev | next [-]

Have you considered that betting against the models and ecosystem improving might be a bad bet, and you might be the one who is in for a rude awakening?

squidbeak an hour ago | parent [-]

I agree. We've been assured by these skeptics that models are stochastic parrots, that progress in developing them was stalling, and that skills parity with senior developers was impossible - as well as having to listen to a type of self-indulgent daydreaming relish about the eventual catastrophes companies adopting them would face. And perhaps eventually these skeptics will turn out to be right. Who knows at this stage. But at this stage, what we're seeing is just the opposite: significant progress in model development last year, patterns for use being explored by almost every development team without widespread calamity and the first well-functioning automated workflows appearing for replacing entire teams. At this stage, I'd bet on the skeptics being the camp to eventually be forced to make the hard adjustments.

bdcravens 3 hours ago | parent | prev | next [-]

You still "find most stuff by building/struggling". You just move up stack.

> there will be a shift when all of these vibe code only folks get cooked in a way that's closer to existential than benign

For those who are "vibe code only", perhaps. But it's no different than the "coding bootcamp only" developers who never really learned to think holistically. Or the folks who learned the bare minimum to get those sweet dotcom-boom dollars back in the day, and then had to return to selling cars when it all came crashing down.

The winners have been, and will always be, those who can think bigger. The ones today who already know how to build from scratch but then find the superpower is in architecture, not syntax, and suddenly find themselves 10x more productive.

bthornbury an hour ago | parent | prev | next [-]

Why does there seem to be such a divide in opinions on AI in coding? Meanwhile those who "get it" have been improving their productivity for literally years now.

paulhebert 33 minutes ago | parent [-]

I think there are a number of elements:

- What you are working on. AI is better at solving already solved problems with lots of examples.

- How fast/skilled you were before. If you were slow before, you get a bigger speedup. If AI can solve problems you can't, you unlock new abilities.

- How much quality is prioritized. You can write quality, bug free code with AI but it takes longer and you get less of a boost.

- How much time you spend coding. If a lot of your job is design/architecture/planning/research then speeding up code generation matters less

- How much you like coding. If you like coding then using AI is less fun. If you didn’t like coding then you get to skip a chore

- How much you care about deeply understanding systems

- How much you care about externalities: power usage, data theft, job loss, etc.

- How much boilerplate you were writing before

I’m sure that’s not a complete list but they are a few things I’ve seen as dividers

paulhebert 22 minutes ago | parent [-]

A few more:

- How much do you prioritize speed?

- Do you have a big backlog of dev tasks ready to go?

- What are the risks if your software doesn’t work?

- Are you working on a green field or legacy project? Prototypes or MVPs?

paulhebert 11 minutes ago | parent [-]

- Do you prefer working as a manager or an individual contributor? Are you used to owning the code or managing others who write code?

markus_zhang 2 hours ago | parent | prev | next [-]

But by then many of us will already have starved. That's why I always said that engineers should NOT integrate AI with internal data.

wouldbecouldbe an hour ago | parent | prev | next [-]

Yeah I completely disagree with the author actually, but also with you.

The frameworks are what make the AI write easily understandable code. I let it run Next.js with an ORM, and it almost always creates very well-defined API routes, classes, and data models. Often better than I would do myself.

I also ask it to be way more rigorous about validation and error handling than I would ever be. It makes mistakes; I shout at it and it corrects them quickly.

So the projects I've been "vibe coding" have a much better codebase than I used to have on my solo projects.

rugPool an hour ago | parent | prev | next [-]

Back in the 00s people like you were saying "no one will put their private data in the cloud!"

"I am sick of articles about the cloud!"

"Anyone know of message boards where discussing cloud compute is banned?"

"Businesses will not trust the cloud!"

Aside from logistics of food and medicine, most economic activity is ephemeral wank.

It's memes. It's a myth. Allegory.

These systems are electrical state in machines and they can be optimized at the hardware layer.

Your Python or Ruby or whatever you ship, 9,000 layers of state and abstraction above the OS running in the data center, has little influence on how these systems actually function.

To borrow from poker; software engineers were being handed their hat years ago. It's already too late.

redleggedfrog 2 hours ago | parent | prev | next [-]

The future is already here. Been working a few years at a subsidiary of a large corporation where the entire hierarchy of companies is pushing AI hard, at different levels of complexity, from office work up through software development. Regular company meetings across companies and divisions to discuss methods and progress. Overall not a bad strategy and it's paying dividends.

An experiment was tried on a large and very intractable codebase of C++, Visual Basic, classic ASP, and SQL Server, with three different reporting systems attached to it. The reporting systems were crazy, controlled by giant XML files with complex namespaces and no-nos like the order of the nodes mattering. It had been maintained by offshore developers for maybe 10 years or more. The application was originally created over 25 years ago. They wanted to replace it with modern technology, but they estimated it'd take 7 years(!). So they just threw a team at it and said, "Just use prompts to AI, hand-code minimally, and see how far you get."

And they did wonderfully (and this is before the latest Claude improvements and agents) and they managed to create a minimal replacement in just two months (two or maybe three developers full time I think was the level of effort). This was touted at a meeting and given the approval for further development. At the meeting I specifically asked, "You only maintain this with prompts?" "Yes," they said, "we just iterate through repeated prompts to refine the code."

It was all mostly abandoned a few months later. Parts of it are being reused in an attempt at a kind of "work in from the edges" approach to replacing pieces of the system, but mostly it's dead.

We have yet to have a postmortem on this whole thing, but I've talked to the developers, and they essentially ran into a different intractable problem: repeated prompting broke existing features when applying fixes or adding new ones, and broke them in really subtle and hard-to-discern ways. The AI-created unit tests often didn't find these bugs, either. They really tried a lot of angles to sort it out: complex .md files, breaking up the monolith so the AI had less context to track, gross simplification of existing features, and so on. These are smarty-pants developers, too, people who know their stuff, with degrees beyond a BS, and they themselves were at first surprised at their success, then not so surprised at the eventual result.

There was also a cost angle that became intractable: coding like that was expensive. There was a lot of hand-wringing from managers over how much it was costing in "tokens" and whatever else. I pointed out that if it costs less than 7 years of development, you're ahead of the game, to which they pointed out it would have been a cost spread over 7 years, not incurred in 1 year. I'm not an accountant, but apparently that makes a difference.

I don't necessarily consider it a failed experiment, because we all learned a lot about how to better do our software development with AI. They swung for the fences but just got a double.

Of course this will all get better, but I wonder if it'll ever get there like we envision, with the Star Trek "Computer, make me a sandwich" method of software development. The takeaway from all this is that you still have to "know your code" for things that are non-trivial (and really, for a few steps above non-trivial). You can go a long way without looking too closely at the LLM output, but there is a point at which it starts to be friction.

As a side note, not really related to the OP: the UI cooked up by the LLMs was an interesting "card"-looking kind of thing, actually pretty nice to look at and use. Then, when searching for a wiki for the Ball x Pit game, I noticed that some of the wikis very closely resembled the UI of the application. Now I see variations of it all over the internet. I wonder if LLMs "converge" on a particular UI if not given specific instructions?

pragmatic an hour ago | parent | next [-]

These are the blog posts we need.

This is the siren song of LLMs: "Look how much progress we made!"

Effort increases as you near completion: the last 10% of the project takes 90% of the effort as you try to finish up, deploy, integrate, and find the gaps.

LLMs are woefully incapable of that, as that knowledge doesn't exist in a markdown file. It's in people's heads, and you have to pry it out with a crowbar; or, as happens to so many projects, they get released and no one uses them.

See Google et al.: "We failed to find market fit on the 15th iteration of our chat app; we'll do better next time."

nottorp an hour ago | parent | prev | next [-]

I've noticed this in my small-scale tests. Basically, the larger the prompt gets (and it includes all the previously generated code, because that's what you want to add features to), the more likely it is that the LLM will go off the rails. Or forget the beginning of the context. Or go into a loop.

Now if you're using a lot of separate prompts where you draw from whatever the network was trained on and not from code that's in the prompt, you can get usable stuff out of it. But that won't build you the whole application.

sonofhans an hour ago | parent | prev [-]

In a veritable ocean of opinions it is excellent to see a detailed, first-hand report. Many thanks!

cookiengineer an hour ago | parent | prev | next [-]

Come to the red team / purple team side. We're having fun times right now. "Every software has bugs" is now on a whole new level, because people don't even care about SQL injection anymore. It's built right into every vibe-coded codebase.

Authentication and authorization are as simple as POST /api/create/admin with zero checks. Pretty much every slop-coded API looks like this. And if it doesn't, the agent will forget about the security checks two prompts later and reverse the previously working ones.
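For anyone who hasn't seen the failure mode first-hand, it takes only a few lines to demonstrate. A sketch using Python's sqlite3 with a made-up table, contrasting an interpolated query (the slop-coded pattern) with a parameterized one:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, role TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 'user')")

payload = "nobody' OR '1'='1"  # classic injection payload

# Vulnerable: string interpolation lets the payload rewrite the query,
# turning it into ... WHERE name = 'nobody' OR '1'='1'
leaked = db.execute(
    f"SELECT * FROM users WHERE name = '{payload}'"
).fetchall()

# Safe: a parameterized query treats the payload as data, not SQL.
safe = db.execute(
    "SELECT * FROM users WHERE name = ?", (payload,)
).fetchall()

print(len(leaked), len(safe))  # the interpolated query leaks every row
```

The fix is one character of API discipline (the `?` placeholder), which is exactly why it's so galling when generated code skips it.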

simianuuords 2 hours ago | parent | prev | next [-]

[dead]

nojito 3 hours ago | parent | prev [-]

>A significant number of developers and businesses are going to have an absolutely brutal rude awakening in the not too distant future.

Correct. Those who wave away coding agents and refuse to engrain them into their workflows are going to be left behind in the dust.

HighGoldstein 2 hours ago | parent | next [-]

> Correct. Those who wave away AI and refuse to engrain it into their workflows are going to be left behind in the dust.

Similar to those who waved away crypto and are now left behind in the dust, yes?

literalAardvark an hour ago | parent | next [-]

Might not be the best counter example since everyone who has bought BTC before Jan 2024 is now in massive profit.

superze 2 hours ago | parent | prev | next [-]

You forgot NFTs

FeteCommuniste an hour ago | parent [-]

Remember when the geniuses at Andreessen Horowitz were dumping hundreds of millions into the "metaverse?"

pawelduda 2 hours ago | parent | prev [-]

I think Bitcoin and major cryptos outperformed a lot of assets over the last decade, so you could say it left some people behind in the dust, yes

LunaSea an hour ago | parent [-]

Like being ratioed with a 50% price crash?

literalAardvark an hour ago | parent | next [-]

You mean just like META, NFLX, AMZN, TSLA, NVDA, CSCO, MSFT, GE, BAC ?

pawelduda 38 minutes ago | parent | prev [-]

I can tell you what a decade is but I'll have to leave the reading comprehension to you

htuibxtuhidb 2 hours ago | parent | prev | next [-]

[dead]

otabdeveloper4 2 hours ago | parent | prev [-]

Doubt on that. AI usually only wastes time and produces bugs.

> bbut you're holding it wrong, just two more prompts and three more agents and it will be a real boy

So, you invented an IDE, except more opaque and expensive? Welcome to the club.

verdverm 2 hours ago | parent [-]

You are both likely incorrect; the answer lies in the middle rather than at the extremes.

redleggedfrog 2 hours ago | parent [-]

This is not just software development wisdom, it's life wisdom.