| ▲ | QuantumNomad_ 4 hours ago |
| People in the future are going to wonder what the hell we were thinking, when 30 years down the line everything is a hot mess of billions of lines of code generated by LLMs that no human has read almost any of it and is no longer possible for anyone to maintain neither with nor without LLMs. And the LLM generated garbage will have drowned out all of the good quality code that ever existed and no one will be able to find even human generated code anymore on the internet. Makes me want to just give up programming forever and never use a computer again. |
|
| ▲ | pllbnk 2 hours ago | parent | next [-] |
| I think it’s a mistake to think that we will be blindly going in this direction for many years and then suddenly, collectively, wake up and realize what we have done. It’s a great filter and a great opportunity. If LLMs stop improving at the pace of the last few years (I believe they already are slowing down), then they will still manage to crank out billions of lines of code which they themselves won’t be able to grep and reason through, leading to a drop in quality and lost revenue for the companies that choose to go all-in with LLMs. But let’s be realistic - modern LLMs are still a great and useful tool when used properly, so they will stay. Our goal will be to keep them on track and reduce the negative impact of hallucinations. As a result, the software industry will move away from large complex interconnected systems that have millions of features but only a few of them actively used, toward small high-quality targeted tools, because their work will be easier to verify and their side effects easier to control. |
| |
| ▲ | lelanthran 2 hours ago | parent | next [-] | | > If LLMs stop improving at the pace of the last few years (I believe they already are slowing down) Depending on how you measure "improvement" they already have or they never will :-/ Measuring capability of the model as a ratio of context length, you reach the limits at around 300k-400k tokens of context; after that you have diminishing returns. We passed this point. Measuring capability purely by output, smarter harnesses in the future may unlock even more improvements in outputs; basically a twist on the "Sufficiently Smart Compiler" (https://wiki.c2.com/?SufficientlySmartCompiler=) That's the two extremes but there's more on the spectrum in between. | | |
| ▲ | rgbrenner an hour ago | parent [-] | | 300k-400k isn’t the current limit if you create modules and/or organize the code reasonably.. for the same reason we do this for humans: it allows us to interact with a component without loading the internals into our context. you can also execute larger tasks than this using subagents to divide the work so each segment doesn’t exceed the usable context window. i regular execute tasks that require hundreds of subagents, for example. in practice the context window is effectively unlimited or at least exceptionally high — 100m+ tokens. it just requires you to structure the work so it can be done effectively — not so dissimilar to what you would do for a person | | |
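A rough sketch of the batching idea above - purely hypothetical Python, with an invented 4-characters-per-token heuristic and an arbitrary budget, not any real harness's API:

```python
# Hypothetical sketch: split a large task into batches so that each
# subagent's share of the work stays under a fixed context budget.
# The 4-chars-per-token estimate is a crude heuristic, not exact.

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # rough heuristic

def batch_for_subagents(files: dict[str, str], budget: int) -> list[list[str]]:
    """Greedily group file names into batches whose combined
    estimated token count stays within `budget`."""
    batches, current, used = [], [], 0
    for name, content in files.items():
        cost = estimate_tokens(content)
        if current and used + cost > budget:
            batches.append(current)  # flush the full batch
            current, used = [], 0
        current.append(name)
        used += cost
    if current:
        batches.append(current)
    return batches

# Ten invented files of ~750 estimated tokens each:
files = {f"mod_{i}.py": "x = 1\n" * 500 for i in range(10)}
batches = batch_for_subagents(files, budget=5000)
print(len(batches))  # prints 2
```

Each batch would then go to its own subagent, which is the structuring-the-work step the comment describes; the overall "context" is large only in aggregate, never per agent.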
| ▲ | jmalicki an hour ago | parent [-] | | That makes it not a context window. How to organize code like you said, and how agents interact with it, to keep the actual context window small is the fundamental challenge. | | |
| ▲ | lelanthran an hour ago | parent [-] | | I keep getting surprised that people who are all-in on this (" i regular execute tasks that require hundreds of subagents ") don't have any idea of what is happening even a single layer below their interface to the LLM ("in practice the context window is effectively unlimited or at least exceptionally high — 100m+ tokens.") I looked at that response by GP (rgbrenner) and refrained from replying because if someone is both running hundreds of agents at a time AND oblivious to what "context window" means, there is no possible sane discourse that would result from any engagement. |
|
|
| |
| ▲ | leptons 2 hours ago | parent | prev [-] | | I wish I got to hallucinate at work, and just get a pat on the head for constantly doing the wrong thing. | | |
| ▲ | oompydoompy74 8 minutes ago | parent | next [-] | | The title for that is Director, VP, or CTO at any given large enterprise company. | |
| ▲ | pllbnk an hour ago | parent | prev | next [-] | | Maybe I am unlucky, but I have worked with too many developers who couldn’t make a good decision if their life depended on it. LLMs at least know how to convince you of their decisions with strong arguments. | |
| ▲ | 2ndorderthought 2 hours ago | parent | prev [-] | | I mean you can do that, but the job probably doesn't pay too much. Might enrich your spirituality though. |
|
|
|
| ▲ | ilaksh an hour ago | parent | prev | next [-] |
| 30 years down the line a human will wake up in his climate controlled bed in an idyllic large scale people-zoo, think about what information he wants, and immediately his 900TB ferroelectric compute-in-memory exobrain will read his thoughts via his brain-computer-interface, and render a custom 3d visualization of that information floating in front of him. There will be no separate code stage, just neural rendering of data to pixels. |
| |
|
| ▲ | jf22 4 hours ago | parent | prev | next [-] |
| First, most software is already a hot mess. Second, LLM code can be less of a hot mess than human written code if you put in the time to train/prompt/verify/review. Generating perfect well patterned SOLID and unit tested code with no warnings or anti-patterns has never been easier. |
| |
| ▲ | yakattak 3 hours ago | parent | next [-] | | The only people who are going to put in the time are people who care enough to. The problem is you have people who didn’t care before who were equipped with a garden hose. Now that they have a fully pressurized fire hose they can make more of a mess faster. | | |
| ▲ | senordevnyc 2 hours ago | parent | next [-] | | Then they should be easy to defeat. Why are you complaining? | | |
| ▲ | themgt 2 hours ago | parent | next [-] | | As an author of fine literature, these million monkeys on typewriters simply upset my sense of dignity. And to imagine the impoverished prose so many readers shalt forthwith be perusing! | |
| ▲ | yakattak 2 hours ago | parent | prev [-] | | Defeat in what aspect? | | |
| ▲ | senordevnyc 2 hours ago | parent [-] | | Compete with, for jobs, customers, investment, etc. | | |
| ▲ | yakattak 2 hours ago | parent [-] | | Maybe. But it depends on the metric. It seems like orgs are focused on PR count and token usage. Issues caused by poor code are often lagging indicators so it’s asymmetrical in that aspect. Write lots of code now and statistically look great, while the impact won’t be felt for a much larger range of time. With the job search and whatnot then yeah, caring becomes a lot more important. That’s true. |
|
|
| |
| ▲ | risyachka 3 hours ago | parent | prev | next [-] | | This is so on point that I want to cry. | |
| ▲ | Daishiman 2 hours ago | parent | prev [-] | | Hard disagree. LLMs are fantastic for fixing bad architecture that's been around for a decade because nobody was willing to touch it. I can have it write tons and tons of sanity checks and then have it rewrite functionality piece by piece with far more verification than what I'd get from most engineers. It's not immediate, it still takes weeks if you want to actually do QA and roll out to prod, but it's definitely better than the pre-LLM alternatives. | | |
| |
| ▲ | switchbak 4 hours ago | parent | prev | next [-] | | Like with a lot of things in this space, it depends where you invest your effort. If you care about quality design and good code, you can definitely get there - but that doesn't happen by default. With the right investment, we could certainly have tooling that creates and maintains very good designs out of the box. My bet is that we'll continue chasing quick and hacky code, mostly because that's the majority of the code that it was trained on, and because the majority of people seem to be interested in a quick result vs a long-term maintainable one. | |
| ▲ | jplusequalt an hour ago | parent | prev | next [-] | | >First, most software is already a hot mess. That the industry was already routinely dealing with fires of its own creation is not a valid reason to start cooking with gasoline. | |
| ▲ | jf22 an hour ago | parent [-] | | But we aren't cooking with gas. We are cooking with a more controlled burner than ever that can download a clean code claude skill and be committing better code than you or I could write. What would normally be considered overengineered gold plating is "free" now. |
| |
| ▲ | glouwbug 4 hours ago | parent | prev [-] | | Right, but it takes one to know one. Many don’t have the ability to decipher what’s good stable output or not |
|
|
| ▲ | ativzzz 4 hours ago | parent | prev | next [-] |
| By then, the fix will be easy. Fire up the latest LLM, point it at your codebase and tell it "rewrite this from scratch. do it well. fix the architecture mistakes" |
| |
| ▲ | jcalx 3 hours ago | parent | next [-] | | There is definitely going to be some Wirth's law-like [0] effect about the asymmetry of software complexity outpacing LLMs' abilities to untangle said software. Claude 9.2 Optimus Prime might be able to wrangle 1M LoC, but somehow YC 2035 will have some Series A startup with 1B+ LoC in prod — we'll always have software companies teetering on the very edge of unmaintainability. [0] https://en.wikipedia.org/wiki/Wirth%27s_law | | |
| ▲ | AlotOfReading 2 minutes ago | parent [-] | | It's the Peter principle for computers. Codebases expand to the limits of the organization's ability to manage them. If you make one person use ed to write code for a bare metal environment, you'll get a comparatively small, laser-focused codebase. If you task a hundred modern developers to solve the same problem, you'll get a Linux box running a million lines of JavaScript. Same thing happens in other fields. A rich country and a poor country might build equivalent roads, but they won't pay the same price for them. |
| |
| ▲ | faizshah 3 hours ago | parent | prev | next [-] | | It won't be an LLM that does it; the entire feature of an LLM is that it produces generalizable, reasonably "correct" text in response to a context. The system that makes it have an opinion about good vs bad architecture or engineering sensibilities will be something on top of the transformer, and probably something more deterministic than a prompt. | |
| ▲ | hasbot 4 hours ago | parent | prev | next [-] | | We can do this today too (though hopefully future LLMs will make better architectural decisions). With Claude, I've been working on an application for the last 2 months. I didn't have a great vision of what I wanted when I started but I didn't want that to slow me down. The architecture is terrible - Claude separated some functionality into different classes but did a bad job at it and created a big ball of mud. Now that I finally have my vision locked down and implemented (albeit poorly), it'd be a great time to throw it away and start over. It'd be interesting to see the result and see how long it takes. | |
| ▲ | ativzzz 2 hours ago | parent [-] | | Just have claude (or gpt maybe) do an architecture review and request a multi-phase refactoring plan. This is probably better to do incrementally as you notice the balls of mud forming but it might not be too late. Either way, if it does something you don't like, `git checkout` and start over |
| |
| ▲ | bulbar 4 hours ago | parent | prev | next [-] | | Will work just as well as today or 20 years ago. | |
| ▲ | cortesoft 4 hours ago | parent [-] | | Are you suggesting AI coding was as good 20 years ago as it is today? | | |
| ▲ | hrldcpr 4 hours ago | parent | next [-] | | I think they're being sarcastic, saying that rewrites from scratch have rarely worked well (whether done by AI or humans). | |
| ▲ | vrganj 3 hours ago | parent | prev [-] | | It sure wrote less crappy code. |
|
| |
| ▲ | kurthr 4 hours ago | parent | prev | next [-] | | "Write me a really cool game, that will make me lots of money, fast!" | | |
| ▲ | KumaBear 4 hours ago | parent | next [-] | | Make me a 1hr episode of my favorite book. Make it as lore accurate as possible. Plot out the script for the next 100 episodes. | |
| ▲ | estimator7292 4 hours ago | parent | prev [-] | | I see your point, however: EA sports has been doing this for literally the entire lifetime of gaming as an industry | | |
| ▲ | DonHopkins 3 hours ago | parent [-] | | Electronic Sharts slogans and franchises: "Shit's in the Game!" "Chunder Everything" "Maddening NFL 26" "FIFiAsco 26" "UFC 26 (Un Finished Code)" "The Shits 4" "Battlefailed" "Need for Greed" |
|
| |
| ▲ | orphea 3 hours ago | parent | prev | next [-] | | Do you think new LLMs are going to write better and better code? When all they are going to have is the slop generated by previous, worse models? | |
| ▲ | fnoef 4 hours ago | parent | prev [-] | | "Make sure to double check everything, and MAKE NO MISTAKES!!!" | | |
|
|
| ▲ | pkulak 32 minutes ago | parent | prev | next [-] |
| I can't get used to vibe-coded projects on Github. One that I was using for a little while is about a year old, with 40,000 commits and 15,000 PRs. And it has "lite" in its name; it's supposed to be the simple alternative. There were so many bugs. I fixed one, submitted a PR, but it was off the first page in hours. It will never be merged. I moved to a different project with a bit less... velocity, and it has been way smoother. |
|
| ▲ | genghisjahn 4 hours ago | parent | prev | next [-] |
| I'm generally pro "llm assisted coding" or whatever you want to call it. But I do sometimes think about the Butlerian Jihad from Dune. https://en.wikipedia.org/wiki/Dune:_The_Butlerian_Jihad |
| |
| ▲ | hermitShell 4 hours ago | parent [-] | | If you like sci-fi takes on software systems, check out Vernor Vinge's "A Fire Upon the Deep" and its sequels. I recall the ship systems software is something like all the code humanity has ever written, plus centuries of LLM churn. One of the protagonists is a spacefaring software developer particularly good with legacy code. We are used to thinking about software like in the article: a program that runs deterministically in an OS. Where we are headed might be more like a world where the LLM or AI system is the OS, accomplishing things we want through a combination of pre-written legacy software, and perhaps able to accomplish new things on the fly. | | |
| ▲ | genghisjahn 21 minutes ago | parent | next [-] | | Ordered Fire Upon the Deep. Looks interesting. | |
| ▲ | genghisjahn 3 hours ago | parent | prev | next [-] | | Interesting, I kinda do this. Sometimes when an LLM solves a problem for me, I have it write code so that I can reuse that exact same approach deterministically(and I line by line check it). Now I have about a dozen CLI commands that the LLM can use and I'm reasonably (although not 100%) sure I'll get an expected outcome. Really helpful with debugging via steam pipe and connecting to read replicas. | |
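A minimal sketch of that freeze-it-into-a-command pattern. The task here (tallying log levels) and every name in it are invented for illustration, not the commenter's actual tools; the point is that once an approach works, it becomes a deterministic, line-by-line-reviewable script:

```python
# Hypothetical sketch: capture an LLM-worked-out approach as a small
# deterministic CLI so every rerun behaves identically and the code
# can be reviewed line by line.
import argparse
from collections import Counter

def count_levels(lines):
    """Tally log lines by their leading level token (e.g. ERROR, WARN)."""
    return Counter(line.split(":", 1)[0] for line in lines if ":" in line)

def main(argv):
    # Wire the reviewed logic to a reusable command.
    parser = argparse.ArgumentParser(description="Tally log levels in a file")
    parser.add_argument("logfile")
    args = parser.parse_args(argv)
    with open(args.logfile) as f:
        for level, n in sorted(count_levels(f.read().splitlines()).items()):
            print(f"{level}: {n}")

# Demo of the deterministic core, without touching the filesystem:
demo = count_levels(["ERROR: disk full", "WARN: slow query", "ERROR: timeout"])
print(dict(sorted(demo.items())))  # {'ERROR': 2, 'WARN': 1}
```

Unlike re-prompting the LLM each time, the command's behavior is fixed, which is what makes the "reasonably sure I'll get an expected outcome" part possible.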
| ▲ | Izkata 3 hours ago | parent | prev | next [-] | | Sounds like a recipe for Star Trek holodeck malfunctions. | |
| ▲ | DonHopkins 3 hours ago | parent | prev [-] | | Pham Nuwen is a master of vibe patching legacy sedimentary software. |
|
|
|
| ▲ | michelb 3 hours ago | parent | prev | next [-] |
| If 30 years down the line I still have to look at code, maintain code, or even worry in the slightest about code, something went deeply wrong. |
| |
| ▲ | skydhash 3 hours ago | parent [-] | | Code will never go away. Code was there before computer hardware and it will always be there. Code is (almost?) all of computation theory so unless we throw computers away, we shall always use code. | | |
| ▲ | phainopepla2 2 hours ago | parent | next [-] | | They're not suggesting that code will go away, but rather that it will be abstracted beneath an LLM interface, so that writing code in the future will be like writing assembly today: some people do it for fun or niche reasons, but otherwise it's not necessary, and most developers can't do it. Whether that happens or not is a different question, but I believe that's what they're suggesting. | | |
| ▲ | skydhash an hour ago | parent [-] | | Code is formal and there are basic axioms that ground its semantics. You can build great constructs on top of those semantics, but you can’t strip away their formality without the whole thing being meaningless. And if you can formalize a statement well enough to remove all ambiguity, then it will turn into code. Programming is taking ambiguous specs and turning them into formal programs. It’s clerical work: taking each term of the specs and each statement, ensuring that they have a single definition, and then writing that definition with a programming language. The hard work here is finding that definition and ensuring that it’s singular across the specs. Software Engineering is ensuring that programming is sustainable. Specs rarely stay static and are often full of unknowns. So you research those unknowns and try to keep the cost of changing the code (to match the new version of the specs) low. The former is where I spend the majority of my time. The latter is why I write code that is not necessary right now, or in a way that doesn’t matter to the computer, so that I can be flexible in the future. While both activities are closely related, they’re not the same. Using an LLM to formalize statements is gambling. And if your statement is already formal, what you want is a DSL or a library. Using an LLM for research can help, but mostly as a stepping stone for the real research (to eliminate hallucinations). |
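As a toy illustration of that formalization step: a vague spec term like "active user" only turns into code once a single definition is chosen. The 30-day cutoff below is an invented assumption, exactly the kind of decision the spec leaves open:

```python
from datetime import datetime, timedelta

# Assumed definition (the spec left it ambiguous): "active" means
# logged in within the last 30 days. The cutoff is illustrative,
# not a standard.
ACTIVE_WINDOW = timedelta(days=30)

def is_active(last_login: datetime, now: datetime) -> bool:
    """One unambiguous predicate for the spec term 'active user',
    used everywhere so the definition stays singular."""
    return now - last_login <= ACTIVE_WINDOW

now = datetime(2025, 6, 1)
print(is_active(datetime(2025, 5, 15), now))  # True: within 30 days
print(is_active(datetime(2025, 3, 1), now))   # False: outside the window
```

The clerical work is picking and defending the 30-day number across the whole spec; once that is settled, the code itself is trivial.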
| |
| ▲ | 2 hours ago | parent | prev [-] | | [deleted] |
|
|
|
| ▲ | Keyframe an hour ago | parent | prev | next [-] |
| Why are we pretending everyone's code is an etalon of quality? Most software out there is probably hot mess already. No think behind it, let alone ultrathink. |
| |
| ▲ | Maxatar an hour ago | parent [-] | | Exactly, before the rise of LLMs it was not at all uncommon to hear people claiming that their job was to just Google API calls or copy and paste code from Stackoverflow. The context back then was that companies were being picky by hiring people who can demonstrate some modicum of understanding of data structures and algorithms, because all any developer does is tweak some CSS or make some calls to a database to glue together a CRUD app... why should anyone be expected to know how to reverse a linked list, or how a basic sorting algorithm works... just download an npm package to do that stuff and glue it all together with a series of nested for loops. With the rise of LLMs that do all of that... those people shut up, and shut up real fast. |
|
|
| ▲ | stronglikedan an hour ago | parent | prev | next [-] |
| > is no longer possible for anyone to maintain neither with nor without LLMs. That's what the Tech-Priests are for. |
| |
| ▲ | ofjcihen an hour ago | parent [-] | | <INTERROGATIVE-HAVE YOU TRIED APPLYING INCENSE AND RECITING THE SACRED TECH LITANIES?> |
|
|
| ▲ | butlike 18 minutes ago | parent | prev | next [-] |
| People, as a rule, don't really "go backwards." We didn't really walk back on the industrial revolution, and we're probably not going to walk back from this day-and-age's activities. It's only unsettling until the changes are accepted. The old timers can pine for a time before "all this", when they were children and all their needs were met by their now-deceased parents, and the cycle can continue on, yet again. |
|
| ▲ | murukesh_s 4 hours ago | parent | prev | next [-] |
| Hello from assembly programmers to present-day JavaScript folks. Joking aside, I sometimes think about how VS Code is written in layers and layers of code - ~200MB of minified code - and Java-based IDEs were worse, with almost 1GB of code (libs/dependencies). Yet VS Code beat the native editors (Sublime) of its time to dominate now - maybe because of the business model (open & free vs freemium). But it does the job quite well IMO. And it enabled swarms of startups to go to market, including billion-dollar wrappers - Cursor, Antigravity, and almost all UI coding agents. I remember backend developers (the Java/C++ type) looking down on JavaScript developers as if we were from an inferior planet or something. How many of us remember that VSCode is actually a browser wrapped inside a native frame? |
| |
| ▲ | k__ 3 hours ago | parent | next [-] | | To be fair, MS sent a world-class engineer to make JavaScript usable for codebases at that scale. | |
| ▲ | 000000000001 2 hours ago | parent | prev | next [-] | | >How many of us remember that VSCode is actually a browser wrapped inside a native frame? The new standard, Web Apps. Why update 3 separate binaries for Win/Lin/Mac when you can do 1 for a web framework and call it a day? | |
| ▲ | skydhash 3 hours ago | parent | prev | next [-] | | VS Code has two things that worked well for it. Web Tech and Money. Web tech makes it easy to write plugins (you already know the stack vs learning python for sublime). And I wonder how much traction it would get if not Microsoft paying devs to wrangle Electron in a usable shape. | |
| ▲ | yakattak 3 hours ago | parent | prev [-] | | [dead] |
|
|
| ▲ | wan23 2 hours ago | parent | prev | next [-] |
| Have you ever encountered the very common real life situation where there's some software that works, and you have a binary for it but you either don't have the source code or it doesn't compile for whatever reason? This is the pre-LLM world. Now, do you think LLMs make this situation better or worse? You may not know what's wrong with your software or how to fix it, but unlike in the past you can throw compute at trying to figure it out, or replicating a subset of it, or even replicating all of it depending on what it is. I think LLMs are making this situation better not worse. |
| |
| ▲ | lelanthran 2 hours ago | parent | next [-] | | I think the problem with that sort of thought is that the burgeoning sizes of output for even trivial software makes it almost a certainty that: a) The stuff output by the existing LLMs is too unwieldy even for them to handle , even if the product itself is a glorified chatbot. b) If all software is throwaway, then the value of all software drops to, effectively, the price of an AI subscription. We'll all be drowning in a market of lemons (https://en.wikipedia.org/wiki/The_Market_for_Lemons), whilst also being producers in said market. | |
| ▲ | kingleopold an hour ago | parent | prev [-] | | Another aspect is that the amount of code LLMs can handle went from a few lines to a small codebase in a few years, so maybe the future just makes a lot bigger codebases possible? | |
|
|
| ▲ | throw_this_one 4 hours ago | parent | prev | next [-] |
| Why does it matter, as long as it accomplishes the task? |
|
| ▲ | johnbarron 4 hours ago | parent | prev | next [-] |
| There is nothing in the post to support the statement. An interesting personal confession, but it does not establish that vibe coding and agentic engineering are converging as a general phenomenon. As a piece of meat, I look forward to charging $10,000 an hour to fix the code that comes out of vibe code generation. |
| |
|
| ▲ | jimmyjazz14 4 hours ago | parent | prev | next [-] |
| If that is the case market forces would likely favor hand written code and all the slop will be forgotten (unless the slop works fine and is stable). |
| |
| ▲ | xantronix 4 hours ago | parent | next [-] | | The market is hardly as rational as people would like to hope it is, though it does at least have its own twisted sort of internal consistency. | |
| ▲ | lbrito 3 hours ago | parent | prev | next [-] | | I don't think that's how money works. Enough people have poured enough money into this thing that the actual, measurable results/efficacy/ROI are of secondary importance (to put it mildly). At this point AI adoption is (at least sold as) a fait accompli. | |
| ▲ | devin 4 hours ago | parent | prev [-] | | This is wishful thinking. The force of the market is "number go up". Quality increasingly has less and less of a role in the equation. You will eat your slop, and you will like it. It will be the only choice you have. | | |
| ▲ | sesky 4 hours ago | parent | next [-] | | But the quality of code was already very bad due to market forces. Most code at large companies is notoriously poor despite the talent density, because the incentives are not there to tackle tech debt or improve code quality. With such a low baseline, there is an optimistic perspective that LLMs could improve the situation. LLMs can produce excellent code when prompted or reviewed well. Unlike human employees, the model does not worry about getting a 'partially meets expectations' rating or avoid the drudgery of cleaning up other people's code. | | |
| ▲ | devin 3 hours ago | parent | next [-] | | The model is optimized in a different way to "partially meet expectations". Sycophancy coupled with only really "knowing" what it has been trained on assure a different kind of mediocrity. | |
| ▲ | switchbak 3 hours ago | parent | prev [-] | | The same incentives that discourage good code in pre-AI times are still dominating now. You will be pushed to ship sub-par products in the future, just like you were in the past. AI certainly has the potential to make the underlying code/design a lot cleaner. We will also be working with dramatically more code, at a much higher rate of change. That alone will be a big challenge to keep sustainable. The ones making the decision to under-invest in design are either unaware of the real costs, or aware and deliberately choosing that path - that's not new, and I don't expect it to change. |
| |
| ▲ | tyyyy3 3 hours ago | parent | prev [-] | | I agree generally, but there are periods where creative people show up and a whole slew of existing firms go bust/shrink because of their ability to envision a path toward creative destruction. |
|
|
|
| ▲ | MagicMoonlight an hour ago | parent | prev | next [-] |
| Have you seen Windows? We already have thirty years of slop. |
|
| ▲ | empath75 4 hours ago | parent | prev | next [-] |
| > People in the future are going to wonder what the hell we were thinking, when 30 years down the line everything is a hot mess of billions of lines of code generated by LLMs that no human has read -- It's just as likely that people will be surprised that we used to have billions of lines of human generated code, that no LLM ever approved. |
|
| ▲ | zuzululu 4 hours ago | parent | prev | next [-] |
| By then AI would be good enough to clean them all up.... Like, I don't get these doom scenarios; they always assume that we are going to be stuck with LLMs and there won't be anything new coming. |
| |
| ▲ | orphea 3 hours ago | parent [-] | | By then AI would be good enough to clean them all up...
[citation needed] To make my comment more on-topic: why do you think this is going to be the case? What will newer LLMs be trained on? | |
| ▲ | zuzululu 2 hours ago | parent [-] | | Well, you are assuming that there's not going to be any new progress and that we are going to be stuck with whatever LLM version we have currently. |
|
|
|
| ▲ | cj 4 hours ago | parent | prev | next [-] |
| > Makes me want to just give up programming forever and never use a computer again. LLMs aren’t the first thing to come along and change how people develop applications. You had the rise of frameworks like Django, Rails, etc. Also the rise of SPAs. And also the rise of JS as a frontend+backend language. In 3-5 years we’ll have adapted to the new norm, like we have in the past |
| |
| ▲ | lbrito 3 hours ago | parent | next [-] | | The difference between writing assembly code and Ruby code is much smaller than the difference between programming and vibe coding. Also, companies are pressuring employees towards adoption in novel ways. There was no such industry-wide pressure by employers in the 90s, 2000s or 2010s for engineers to use a specific tech. | | |
| ▲ | Daishiman an hour ago | parent [-] | | > Also, companies are pressuring employees towards adoption in novel ways. There was no such industry-wide pressure by employers in the 90s, 2000s or 2010s for engineers to use a specific tech. Companies have been enforcing technology mandates since time immemorial. In the early 2000s there were definitely a lot of mandates to move away from commercial UNIX to Linux. Lots of companies began enforcing the switch to PHP, Ruby and Python for new projects. | | |
| ▲ | lbrito 16 minutes ago | parent [-] | | Yes, but the entire industry was not pushing any one single tool at the same time. If you disliked Django, you could go to Rails. If you disliked Rails, you had Phoenix. Etc. Good luck disliking LLM babysitting these days |
|
| |
| ▲ | toraway 3 hours ago | parent | prev [-] | | Or, it could be like asbestos and the immediate benefits are just too appealing to listen to arguments of skeptical naysayers about some vaguely defined problems that are decades away, if they even happen. I use AI tools daily (because they feel like they're helping me)
but it's not exactly hard to imagine scenarios where an explosion of slop piling up, plus harm to learning from outsourcing all thinking, results in systemic damage that actually slows the pace of technological progress given enough time. The history of new technologies tends to average into a positive trend over a long enough time scale, but that doesn't mean there aren't individual ups and downs, including WTF moments looking back at what now seems like baffling decision-making with the benefit of hindsight. | |
| ▲ | Izkata 3 hours ago | parent | next [-] | | Some of us are already experiencing that. For example I handed off an initial version of something some months ago, and the AI-generated stuff they came up with was a huge buggy mess of spaghetti code neither of us understood. Months later we've detangled it, cutting it down to a third the size, making it far simpler to understand, and fixing several bugs in the process (one was even by accident, we'd made note of it, then later when we went to fix it, it was already fixed). | |
| ▲ | cj 3 hours ago | parent | prev [-] | | > Or, it could be like asbestos If it is, the fall out will be way worse than if AI ends up living up to (reasonable) expectations. If it doesn’t, we are going to see over a trillion dollars of capital leave the tech sector, which I think will have worse impacts on the livelihood of tech workers than if AI ends up panning out. This is something the naysayers need to grapple with. We’ve crossed a line where this tech needs to work simply because of the amount of money depending on that fact. | | |
| ▲ | toraway 3 hours ago | parent [-] | | The asbestos hypothetical is a bit different than the "bubble popping" economic crisis scenario though. In this world, AI would just continue being adopted and shoved into every nook and cranny into which it can be made to fit, with valuations only getting bigger and bigger. The damage would come much later, well beyond the point where it could be simply pulled out and replaced without spending massive amounts of money and would also basically necessitate training an entire new generation of engineers. Then the AI giants would start appearing vulnerable like cigarette companies in the 90s while an AI Superfund and interstate class action are being planned but Sam Altman would already be a centitrillionaire at that point so it would be someone else's problem. |
|
|
|
|
| ▲ | beAbU an hour ago | parent | prev [-] |
| Have you ever worked on a legacy codebase with actual good code? I struggle to see the difference between your predicted future and today's reality when it comes to working with legacy disasters. |
| |
| ▲ | jeromegv an hour ago | parent [-] | | Well, on a legacy code base, you still needed humans to write those lines of code. There’s a maximum number of lines a human can write in a year. Now with LLMs we are talking about millions and millions of lines of code that could be generated in a single day. The scale of the problem might not be the same at all. |
|