zmmmmm 3 hours ago

I think AI rescue consulting is going to become a significant mode of high-value consulting, similar to the specialists who come in to deal with a security breach or do data recovery.

Purely AI written systems will scale to a point of complexity that no human can ever understand and the defect close rate will taper down and the token burn per defect rate scale up and eventually AI changes will cause on average more defects than they close and the whole system will be unstable. It will take a special kind of process to clean-room such a mess and rebuild it fresh (probably still with AI) after distilling out the core design principles needed to avoid catastrophic breakdown.

Somewhere in the future, the new software engineering will be primarily about principles to avoid this in the first place, but it will take us 20 years to learn them, just like the original software engineering took a lot longer than expected to reach a stable set of design principles (and people still argue about them!).

leoc 3 hours ago | parent | next [-]

> Purely AI written systems will scale to a point of complexity that no human can ever understand and the defect close rate will taper down and the token burn per defect rate scale up and eventually AI changes will cause on average more defects than they close and the whole system will be unstable.

Wow, it’s true, AI really is set to match human performance on large, complex software systems! ;)

jimbokun 2 hours ago | parent | next [-]

Humans who have been writing systems like that for many years know how to maintain and modify them successfully. It’s just that our industry has a bias towards youth who don’t think they have anything to learn from those who came before them.

monkpit 26 minutes ago | parent | next [-]

> Humans who have been writing systems like that for many years know how to maintain and modify them successfully.

Do they??

jplusequalt 12 minutes ago | parent [-]

I believe this type of person exists.

My team lead has worked on the same software for 30 years. He has the ability to hear me discuss a bug I noticed, and then pinpoint not only the likely culprit, but the exact function that's causing it.

DougN7 5 minutes ago | parent [-]

I do the same thing in a project I’ve worked on for 25 years. I’ve had mediocre at best results with AI. It’s useful to discuss concepts with, but the code never handles the nuances of the edge cases.

ttoinou 2 hours ago | parent | prev | next [-]

How do you explain to a junior that this pile of messy code isn't crap but is actually years of integrated knowledge? That the most common principles discussed in computer science (OOP, SOLID, DRY, etc.) are actually just little guides that aren't to be taken to extremes?

rented_mule an hour ago | parent | next [-]

Here's a 26-year-old post on the exact topic of messiness you raise:

https://www.joelonsoftware.com/2000/04/06/things-you-should-...

A decade ago, I was sitting in on a meeting about a rewrite and, before I could say anything, someone in the first year of her career asked why anyone thought a rewrite would be any cleaner once all the edge cases were handled. Afterwards, I asked her where she learned this. She said "I don't know, it just seems kind of obvious." She went on to be a great engineer and is now a great manager.

tudelo 13 minutes ago | parent [-]

The bolded quote "It’s harder to read code than to write it." is hilarious given today's context... it has only become more true :)

Yokohiii an hour ago | parent | prev | next [-]

It's a dice roll to keep the junior around until he unlearns the wrong bits.

e9 an hour ago | parent | prev | next [-]

An expert knows when to break the rules.

ethbr1 44 minutes ago | parent [-]

Experts take the time to learn why the fence was there in the first place.

josephg 18 minutes ago | parent | next [-]

Experts are people who have made all the mistakes there are to make in their chosen field.

Including all of the above.

TedDoesntTalk 21 minutes ago | parent | prev [-]

Experts have beginner’s mind.

micromacrofoot an hour ago | parent | prev [-]

tell them they need to turn a profit as quickly as possible

ttoinou an hour ago | parent [-]

Wait if they can do that they’re not juniors anymore :P

kiba 21 minutes ago | parent | prev [-]

Executive leadership skews older, not younger, no?

whateveracct 18 minutes ago | parent | prev | next [-]

it's been 10y and i still haven't seen a human system that bad

maybe some that people said were that bad. but they just needed some elbow grease. remember, it takes guts to be amazing!

detritus 2 hours ago | parent | prev [-]

The origin of 'dark DNA' begins to make more sense through this sort of lens, except the system somehow maintained a level of compensation to fix all its flaws.

elictronic 2 hours ago | parent [-]

We do as well, it's called bankruptcy. Not every company survives but in the end the ones that do are more resilient.

ramoz 3 hours ago | parent | prev | next [-]

A non-technical friend of mine has just won some hospital contracts after vibecoding an inventory management solution for them with Claude. They gave him access to IT dept servers, and he called me, extremely lost on how to deploy (can't connect Claude to them) and also frustrated because the app has some sort of interesting data/state issues.

_HMCB_ 2 minutes ago | parent | next [-]

Heaven help us.

jeremyjh 3 hours ago | parent | prev | next [-]

What concerns me about this is that as these stories multiply and circulate people will just completely stop buying software/SAAS from startups, because 90% or more will be this same thing. It will completely kill the market.

pjc50 3 hours ago | parent | next [-]

Oracle have routinely had multimillion-pound contract failures and people keep buying from them. Big vendors are too big to fail.

jeremyjh 3 hours ago | parent | next [-]

Those are custom software or heavily customized implementations of ERP and similar systems for very large organizations. I’m talking more about the SMB market where today it’s possible for a small team to carve out a niche and make a nice living or even bootstrap a venture that competes with a large player that has poor UX or antiquated feature designs.

The reason Oracle can continue failing at those massive projects is simple: everyone fails at them routinely, and often it’s the customer's fault.

tosti 3 hours ago | parent | prev [-]

Same with Deloitte

the13 3 hours ago | parent [-]

no one's getting fired for hiring either one.

billywhizz 3 hours ago | parent | prev | next [-]

> It will completely kill the market.

it will kill all the people in that hospital too

rcoveson 2 hours ago | parent [-]

What is this, Humanitarian News?

salawat an hour ago | parent | next [-]

The real Hackers were the ones actually trying to minimize suffering all along. Not reproduce it at scale.

ryandrake 42 minutes ago | parent [-]

"But the Torment Nexus is such an interesting technical challenge!" and "I don’t personally torment people: I just move protobufs around!" - Software Engineer #1 and #2's excuses

jatora an hour ago | parent | prev [-]

thank you

jameshart 2 hours ago | parent | prev | next [-]

I mean, the stories about how stuff was getting built in the late 90s/early 2000s aren’t much worse.

slopinthebag 3 hours ago | parent | prev [-]

Or you end up with a certification process, which will of course introduce its own problems, but startups doing things the right way and not just "moving fast and breaking things" can thrive.

ofjcihen an hour ago | parent | prev | next [-]

As a cybersecurity IR professional, as much as I hate to see this happen to a hospital, this kind of thing is responsible for essentially tripling my income over the last 3 years.

linkregister 3 hours ago | parent | prev | next [-]

This hospital will learn some hard lessons. I hope their backup strategy is good. I'm surprised they can field software from an entity that isn't SOC2 & HIPAA certified.

GolfPopper an hour ago | parent | next [-]

No worries! At worst, the contractor can just tell Claude to make sure the hospital knows they're appropriately certified. And the hospital can use Claude to make sure the certs are valid. Everybody wins, except the ones who end up dead. Or with their health destroyed.

ethbr1 40 minutes ago | parent | prev [-]

> from an entity that isn't SOC2 & HIPAA certified

What do you think the fake Delve attestation scandal was about? https://news.ycombinator.com/item?id=47444319

AlexCoventry 2 hours ago | parent | prev | next [-]

Have you tried to talk him out of it, and have you considered blowing the whistle on him? He could kill people!

3form 2 hours ago | parent | prev | next [-]

Wow. This is like every other gold rush. Millions will walk into the ice and snow, somehow not questioning that their ability to dig is not unique.

mikestorrent an hour ago | parent [-]

Well, selling shovels has always been a good way to deal with that problem

TheGrassyKnoll 25 minutes ago | parent [-]

The shovel sellers are ringing the cash register.

EasyMark 2 hours ago | parent | prev | next [-]

This is going to happen all over. The company I'm currently contracting with has gone AI-everything (aka technical debt hell), and they're gonna suffer for it. I'm glad my consulting contract ends in 2 months. I don't want to be around for the crash.

yumraj 2 hours ago | parent | prev | next [-]

Don't help him. Let him figure it out by himself, or else they (he and the hospital) will never learn.

technion 2 hours ago | parent | next [-]

A hospital could not learn a bigger lesson from this person than from its existing big players.

(Screams in "deployed in 2026 a new product that only works in Internet Explorer" in healthcare.)

ramoz 2 hours ago | parent | prev | next [-]

I don't have time for that. I just told him he needs to hire somebody

tacostakohashi 2 hours ago | parent | prev [-]

Or, "help" by asking questions, or otherwise by sharing an AI review/analysis/suggestions, since they're into that kind of thing.

Cleaning up other people's AI mess for them for free is definitely not a good use of time.

jimbokun 2 hours ago | parent | prev | next [-]

I hope you have quoted him a very very high hourly rate.

paulryanrogers an hour ago | parent | prev | next [-]

Did he lie about HIPAA compliance?

jcgrillo 3 hours ago | parent | prev [-]

jfc lmao

blipvert 3 hours ago | parent | prev | next [-]

Reminds me of the quote in the original Westworld movie:

“These are highly complicated pieces of equipment… almost as complicated as living organisms.

In some cases, they’ve been designed by other computers.

We don’t know exactly how they work.”

Now how did that work out? ;-)

singlow 3 hours ago | parent [-]

However Michael Crichton imagined it would.

blipvert 3 hours ago | parent [-]

I guess that “well” wouldn’t have sold many books.

delichon 3 hours ago | parent | next [-]

Shelve it with the Jurassic Park version where John Hammond builds a safe, profitable theme park, and The Andromeda Strain that gives people the sniffles.

thaumasiotes 3 hours ago | parent | prev [-]

That depends. If this equipment is part of the plot, you're right. If it's part of the premise of the world, "well" would be the expectation.

abhiyerra 3 hours ago | parent | prev | next [-]

Heh. Got a customer recently around this. Entire infrastructure and CI/CD vibecoded. They had half-implemented Kubernetes in GitHub Actions workflows that were several thousand lines long and impossible to understand.

I think the problem will get worse. I dislike the marketing around AI, but I do think it is a useful tool to help those who have experience move faster. If you are not an expert, AI seems to create a complex solution to whatever it is you were trying to do.

ethbr1 28 minutes ago | parent [-]

> If you are not an expert, AI seems to create a complex solution to whatever it is you were trying to do.

I've been watching non-developers vibe code stuff, and the general failure mode seems to be ignorance of pick-two-of-three tradeoffs.

They'll spam "make it more reliable" or some such, and the AI will best-effort add more intermediary Redis caches or similar patterns.

But because the vibe coders don't actually know what a Redis cache is or how it works, they'll never make the architectural trade-offs needed to truly fix things.

danbolt 8 minutes ago | parent [-]

I’ve noticed something similar with vibecoded game rendering logic submitted by peers. Sometimes it will be peppered with extraneous checks for nullptr, or early returns on textures that have zero size.

I often wonder if it’s the statistical nature of the LLM mixed with a request in the prompt.
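
The pattern described above can be made concrete. Here is a minimal Python sketch of the "extraneous checks" style (the draw routine and all names are invented for illustration; the originals were C++/nullptr):

```python
from dataclasses import dataclass

@dataclass
class Texture:
    width: int
    height: int

def draw_sprite(texture, x, y):
    """Blit a sprite at (x, y). The guards below are the 'extraneous
    checks' pattern: in this sketch the caller already guarantees a
    valid, non-empty texture, but the generated code defends anyway."""
    if texture is None:  # redundant null check
        return False
    if texture.width == 0 or texture.height == 0:  # redundant early return
        return False
    # ... the actual blit would happen here ...
    return True

print(draw_sprite(Texture(16, 16), 0, 0))  # True
print(draw_sprite(None, 0, 0))             # False
```

Each check is harmless on its own; scattered across a codebase, they bury the real invariants.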

fooker 3 hours ago | parent | prev | next [-]

This might not pan out to be the glorious victory of human craft as you’re imagining it to be.

Here’s a slightly different future - these AI rescue consultants are bots too, just trained for this purpose.

Plausible?

I have already seen Claude 4.7 handle pretty complex refactors without issues. Scale and correctness aren’t even 1% of the issue they were last year. You just have to get the high-level design right, or explicitly ask it to critique your design before building it.

malfist 3 hours ago | parent | next [-]

> You just have to get the high level design right, or explicitly ask it critique your design before building it.

Do you think people are not giving their agents specs and asking for input?

djeastm 21 minutes ago | parent | next [-]

Maybe the professional devs, but not the vibecoders

literalAardvark an hour ago | parent | prev | next [-]

The ones who end up with messes, no

fooker 3 hours ago | parent | prev [-]

Very often, no.

dullcrisp 2 hours ago | parent | prev | next [-]

And the bots training the bots are just bots that were trained to train bots?

fooker 2 hours ago | parent [-]

Nothing that sexy, just thirty-odd years of software engineering data from humans.

Commits, design reviews, whitepapers, code reviews, test suites. And, more concerning: chat logs and even keystrokes from employees nowadays.

The way we train specialized bots now is incredibly inefficient, but that part is rapidly improving.

mattmanser 3 hours ago | parent | prev | next [-]

One AI can't vibe code out of the mess, so you'd make another AI trained on getting out of vibe coded messes?

That's serious levels of circular thinking right there.

fooker 2 hours ago | parent [-]

This is literally how training humans has worked for thousands of years.

We train humans to do things untrained humans can not do.

kilroy123 3 hours ago | parent | prev [-]

I think that will happen. I think several things can be true at the same time:

- AI Hype

- AI Psychosis

- AI keeps getting better and better until it can work around big AI slop code bases

bluefirebrand 2 hours ago | parent [-]

> AI keeps getting better and better until it can work around big AI slop code bases

The belief in this is a form of AI psychosis, I think.

Maybe in the future but certainly no evidence of this anytime soon

fooker 2 hours ago | parent | next [-]

> Maybe in the future but certainly no evidence of this anytime soon

Here's some anecdotal evidence from me - I cleaned up multiple GPT 4.x-era vibecoded projects recently with the latest Claude model and integrated one of those into a fairly large open source codebase.

This is something AI completely failed at last year.

Maybe you should try something like this, or listen to success stories, before claiming 'certainly no evidence' in the future?

whimsicalism 2 hours ago | parent | prev | next [-]

No evidence? ChatGPT came out 3 years ago. You basically just need to lay a ruler on the curve.

asveikau an hour ago | parent [-]

I'm no expert, but the skeptic's opinion I've heard would be to ask:

What evidence is there that we're not at or close to a plateau of what LLMs are capable of? How do you know the growth rate from 2023 to the present will continue into 2029? E.g., is it more training data? More GPUs? What if we're already reaching the limits of those things?

whimsicalism an hour ago | parent | next [-]

Ultimately, you are describing a fundamental problem with induction -- Hume's problem of induction, to be specific. How can we know that anything that has been shown empirically in the past will continue to be true? We can't. Best to investigate mechanistically:

I don't see why we would assume that we are at a plateau for RL. In many other settings, Go for instance, RL continues to scale until you reach compute limits. Some things are more easily RL'd than others, but ultimately this largely unlocks data. We are not yet compute/energy/physical world constrained. I think you would start observing clear changes in the world around you before that becomes a true bottleneck. Regardless, currently the vast majority of compute is used for inference not training so the compute overhang is large.

Assuming that we plateau at {insert current moment} seems wishful and I've already had this conversation any number of times on this exact forum at every level of capability [3.5, 4, o1, o3, 4.6/5.5, mythos] from Nov 2022 onwards.

literalAardvark an hour ago | parent | prev [-]

Since we're not experts, we treat it as a black box. What are the results? Is the quality of the results improving? Is the improvement accelerating or decelerating?

And the answer appears to be that the improvement is accelerating. So how could it be stopping?

https://metr.org/time-horizons/

ashdksnndck 2 hours ago | parent | prev | next [-]

I have personally had success telling Claude that some AI-written system is too complicated and asking it to rewrite it in a more logical way. This sometimes results in thousands of lines of code being deleted. I give an instruction like that if I see certain red flags, e.g.:

1) same business logic implemented in two different places, with extra code to sync between them

2) fixing apparently simple bugs results in lots of new code being written

It’s a sign I need to at least temporarily dedicate more effort to overseeing work in that area.

I somewhat agree with the AI psychosis framing of the OP. It takes some taste and discipline to avoid letting things dissolve into complete slop.
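
Red flag (1) can be sketched in a few lines. A hypothetical Python example (the discount rule and all names are invented for illustration):

```python
def checkout_discount(subtotal: float) -> float:
    # Copy #1 of the business rule, used by the checkout page.
    return subtotal * 0.9 if subtotal > 100 else subtotal

def invoice_discount(subtotal: float) -> float:
    # Copy #2, used by the invoicing job -- it has drifted to >=.
    return subtotal * 0.9 if subtotal >= 100 else subtotal

def reconcile(subtotal: float) -> float:
    # The "extra code to sync between them": papering over the drift
    # instead of deleting one of the copies.
    return min(checkout_discount(subtotal), invoice_discount(subtotal))

print(reconcile(100.0))  # 90.0 -- the two copies disagree at the boundary
print(reconcile(150.0))  # 135.0
```

The durable fix is deleting one copy and the sync layer outright, not adding a third layer on top.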

asveikau an hour ago | parent | prev [-]

It's amusing to me that:

* A belief that AI will keep getting better, presented without evidence, does not yield a lot of skepticism around these parts.

* Your comment saying it is wrong to believe AI will keep getting better, also presented without evidence, is downvoted.

m463 an hour ago | parent | prev | next [-]

> Purely AI written systems will scale to a point of complexity that no human can ever understand

I think it will be needless verbose complexity.

I kind of imagine someone having an unlimited budget of free Amazon stuff shipped to their house.

In theory, they are living a prosperous life of plenty.

In reality, they will be drowning in something that isn't prosperity.

gerdesj 2 hours ago | parent | prev | next [-]

"Purely AI written systems will scale to a point of complexity"

You have not seen the spreadsheets that accounts run the firm on.

Bloody kids!

badtuple an hour ago | parent | prev | next [-]

I've already done a handful of these gigs for early vibecoded products that had collapsed in on themselves. The scope of work was to stabilize the product and only make existing features work.

The issues have all been structural, not local. It's easier to treat it like a rewrite, using the original as a super-detailed product spec. Working on the existing codebase works, but you have to aggressively modularize everything anyway to untangle it, rather than attack it from the top down.

All of these projects have gone well, but I haven't run into a case where a feature they thought was implemented isn't possible. That will happen eventually.

It's honestly good, quick work as a contractor. But I do hope they invest in building expertise from that point rather than treating it like a stable base to continue vibecoding on.

hughw 2 hours ago | parent | prev | next [-]

But it's so easy now to redo it all ground up, and if models improve, do it better next time.

I exaggerate only a little.

Jagerbizzle 2 hours ago | parent | next [-]

I'm with you on this one, having "vibe coded" some smaller internal tools on GPT 5, and then re-vibed them on Opus 4.6 and 5.5 -- they basically just fixed all of the problems without me doing much of anything other than prompting the model to look at the existing code and make it "better".

jimbokun 2 hours ago | parent | prev [-]

How much is your budget for tokens?

Aperocky 2 hours ago | parent | prev | next [-]

> reach a stable set of design principles

Are you sure about this? Yes, there is a stable set, but those principles are used in all the wrong places, particularly in places where they don't belong, because juniors and now AIs can recite them and want to use them everywhere. That's not even discussing whether the stable set itself is correct, which is dubious at this point.

spamizbad an hour ago | parent | prev | next [-]

What you're describing really isn't a new problem for organizations. Historically it's been a team of humans not using AI who get over their skis and have to have other, more capable humans (also not using AI) bail them out.

jimbokun 2 hours ago | parent | prev | next [-]

Those design principles it will take us 20 years to learn are just the principles for writing good, maintainable, debuggable, understandable code today. It will just take 20 years to figure out that they still apply when AI writes the code, too.

therealdrag0 an hour ago | parent | next [-]

Why would it take 20 years to learn? People all around me, in an AI-pilled company, have been saying this the whole time.

digitaltrees an hour ago | parent | prev [-]

No, you can use AI to code this way. I’ve successfully steered AI toward good architecture by moving slowly and constantly course-correcting.

orev 3 hours ago | parent | prev | next [-]

As the models keep improving, wouldn’t you be able to task a newer AI to “clean up this mess”?

jcalx 23 minutes ago | parent | next [-]

Someone responded to a previous comment of mine [0] positing a Peter principle [1] of slopcoding — it will always be easier to tack on a new feature than to understand a whole system and clean it up. The equilibrium will remain at the point of near, but not total, codebase incomprehensibility.

[0] https://news.ycombinator.com/item?id=48037128#48038639

[1] https://en.wikipedia.org/wiki/Peter_principle

fg137 2 hours ago | parent | prev | next [-]

How is a newer AI going to "clean up" dropped databases, compromised computers or leaked personal data?

(None of above is theoretical)

jeremyjh 3 hours ago | parent | prev | next [-]

Frankly, this is what everyone is counting on, whether they know it or not. The question, though, is not “will the models get good enough?” The question is whether the repo even contains enough accurate information to determine what the system is supposed to be doing.

malfist 3 hours ago | parent | prev | next [-]

Are they improving? I thought they were just getting more expensive

maplethorpe 3 hours ago | parent [-]

Mythos apparently wrote a poem so beautiful it made Dario cry.

stavros 2 hours ago | parent | prev | next [-]

Roses are red

Violets are blue

AI is great

And so are you

jcgrillo 2 hours ago | parent | prev [-]

Crocodile tears, just like the fake "fear" of its capabilities. Anything to raise another round of dumb oil money.

SpicyLemonZest 2 hours ago | parent | prev | next [-]

People are often skeptical when I say this, but there's simply no guarantee that it's possible in principle to clean up a bad architecture. If your system is "overfitted" to 10,000 requirements from 1,000 customers, it may be impossible to satisfy requirements 10,001 through 10,100 without starting over from scratch.

literalAardvark an hour ago | parent [-]

It may be difficult, but impossible is such a big word to use here


aaron_m04 3 hours ago | parent | prev | next [-]

How could anyone answer that with any level of certainty?

hennell 3 hours ago | parent | prev [-]

AI runs `rm -rf`

dalmo3 3 hours ago | parent | next [-]

Beyond the Singularity, we reach the Nullarity.

AussieWog93 2 hours ago | parent | prev [-]

https://youtu.be/m0b_D2JgZgY

onlyrealcuzzo an hour ago | parent | prev | next [-]

> Purely AI written systems will scale to a point of complexity that no human can ever understand

In their current form, that's unlikely for a product that actually needs to work.

Nothing is getting that complex and still working with current LLMs.

jatora an hour ago | parent | prev | next [-]

Interesting perspective. Fundamentally at odds with the data, the science, and 20+ years of trends in AI coding systems, to the point of dogmatism. But interesting from a sociological point of view.

alhazrod 3 hours ago | parent | prev | next [-]

The complexity you would come to the rescue to solve: would that come from the AI, or from the style of programming you let the AI use? I mean, you have very different problems if you use a functional style vs. an object-oriented one. It is up to the programmer to realize they want a functional style and request that from the AI, as much as possible. Even AI cannot imagine every state transition, unless it is so smart that it should be the one telling you what to do.

whimsicalism 2 hours ago | parent | prev | next [-]

I'm sure AI capabilities will plateau any moment now..

m101 an hour ago | parent | prev | next [-]

Is this true because training companies have not been training AI for both performance and brevity (or some other metric like that)? If this becomes a much more serious issue, surely they would adjust the training process.

jiggawatts 3 hours ago | parent | prev | next [-]

> I think AI rescue consulting is going to be come a significant mode of high value consulting

I thought the same when I saw development outsourced to Indians that struggled to write a for loop.

I was wrong.

It turns out that customers will keep doubling down on mistakes until they’re out of funds, and then they’ll hire the cheapest consultants they can find to fix the mess with whatever spare change they can find under the couch cushions.

Source: being called in with a one week time budget to fix a mess built up over years and millions of dollars.

jimbokun 2 hours ago | parent [-]

What happened after development was outsourced to Indians: developer salaries continued to rise much faster than general wages.

bombcar an hour ago | parent [-]

If you work as though you're outsourcing to the worst consultancy firms, your use of AI will be ... pretty productive, actually.

altairprime 3 hours ago | parent | prev | next [-]

Financial auditing with pre-AI technical chops will be uniquely niche-valuable, too :)

luxuryballs 41 minutes ago | parent | prev | next [-]

This is definitely true, but I also wonder if AI models, context sizes, and capabilities will scale to keep up and eventually be able to untangle the mess.

hgs6 2 hours ago | parent | prev | next [-]

Have you watched Jurassic Park? That story is not about Dinos.

uuyy 3 hours ago | parent | prev | next [-]

AI janitors

kibwen 3 hours ago | parent | next [-]

Not janitors. Hazmat cleanup crews.

jcgrillo 3 hours ago | parent [-]

Like this: https://en.wikipedia.org/wiki/Times_Beach%2C_Missouri

Scrape off all the soil, put it in casks, and bury it in a concrete bunker for 10000 years. Then relocate everyone and attempt to rebuild.

Brian_K_White 3 hours ago | parent | prev [-]

It's kind of like producing code is becoming more like farming.

We didn't create the DNA we rely on to produce food and lumber; we just set up the conditions and hope the process produces something we want instead of deleting all the bananas.

Farming is a fine, honorable, and valuable function for society, but I have no interest in being a farmer. I build things; I don't plant seeds and pray to the gods and hope they grow into something I want.

nradov 3 hours ago | parent [-]

Prayers are for weather. Pretty much all farmed plant, animal, and fungus species have been selectively bred or genetically modified. Farmers know what's going to grow.

Brian_K_White 3 hours ago | parent [-]

Farming involves a lot of study and input into the process, but very little actual control and no determinism at all. We know how to improve the chances, is all. The fact that we breed and "engineer" is like a drop in the bucket.

nradov 9 minutes ago | parent | next [-]

Tell me you've never done any farming without telling me you've never done any farming. There is certainly risk in the business due to market fluctuations, weather, natural disasters, disease, and pests. But the final product is highly deterministic. Almost all genetic variability has been expunged from major food production species in a relentless pursuit of predictable yield. Everything looks and tastes the same. We can debate whether that's a good thing but it is the reality for most farmers.

bluefirebrand 2 hours ago | parent | prev [-]

It's pretty deterministic in that if you plant corn you will grow corn not beets, you know?

If the farming situation were as dire as you seem to suggest, we'd have unpredictable famines all the time, but we don't

Brian_K_White an hour ago | parent [-]

You might grow corn, or you might grow defective, unusable corn, and/or any number of other things, like locusts or fungi or other plants that decide to grow in the place where you planted corn. Sure, the corn seeds will not produce ball bearings. Genius observation. There are about an infinity of other things that can and do happen besides that.

Planting is merely setting up the conditions. We didn't write the DNA; we couldn't write the DNA if we wanted to, because we are an infinity away from understanding all the actual processes that descend from it. And when we utilize DNA that we simply found and couldn't hope to write, it's always, at best, a case of hoping it goes right again this time.

jcgrillo 3 hours ago | parent | prev [-]

> Somewhere in the future, the new software engineering will be primarily about principles to avoid this in the first...

It's really nowhere near as complicated as making distributed systems reliable. It's really quite simple: read a fucking book.

Well, actually read a lot of books. And write a lot of software. And read a lot of software. And do your goddamn job, engineer. Be honest about what you know, what you know you don't know, and what you urgently need to find out next.

There is no magic. Hard work is hard. If you don't like it get the fuck out of this profession and find a different one to ruin.

We all need to get a hell of a lot more hostile and unwelcoming towards these lazy assholes.
