| ▲ | Has the cost of building software dropped 90%? (martinalderson.com) |
| 73 points by martinald 4 hours ago | 141 comments |
| |
|
| ▲ | sharpy a minute ago | parent | next [-] |
I think AI can be a really powerful tool. I am more productive with it than without, but a lot of my time interacting with AI is spent reviewing its code, finding problems with it (I always find some), telling it what to do differently multiple times, and eventually giving up and fixing the code by hand. But it has definitely reduced the average time it takes me to implement features. I also worry that not everyone will be responsible and check/fix AI-generated code. |
|
| ▲ | nine_k 2 hours ago | parent | prev | next [-] |
Had the cost of building custom software dropped 90%, we would be seeing a flurry of low-cost, decent-quality SaaS offerings all over the marketplace, possibly undercutting some established players. From where I sit, right now, this does not seem to be the case. It is as if writing the code is not the biggest problem, or the biggest time sink, in building software. |
| |
| ▲ | codegeek 38 minutes ago | parent | next [-] | | The keyword is "building". Yes, costs may have dropped 90% just to build software. But there are 1000 other things that come after that to run successful software for months, let alone years: maintenance and security; upgrades and patches; hosting and the ability to maintain uptime under traffic; support and dealing with customer complexities; new requirements/features; and, most importantly, the ability to blame someone else (at least for management). Politics plays a part. If you build a tool in-house and it fails, you are on the chopping block. If you buy, you can at least say "Hey, everyone else bought it too and I shouldn't be fired for that." Customers pay for all of the above when they buy a SaaS subscription. AI may come for most of it at some point, but not yet. I say give it 3-5 years to see how it all pans out. | |
| ▲ | martinald 2 hours ago | parent | prev | next [-] | | It is happening, though, internally in businesses I've worked with. A few of them are starting to replace SaaS tools with custom-built internal tooling. I suspect this pattern is happening everywhere to varying degrees. Often these SaaS tools are expensive, aren't actually that complicated (or if they are complicated, the bit they need isn't), and have limitations. For example, a company I know was recently told that the v1 API of a back-office SaaS tool they relied on was being deprecated. V2 of the API didn't have the same features. Result: a dev spent a week or two rebuilding that tool. It's shipped and in production now. Working around the API deprecation would have taken a similar amount of time. | |
| ▲ | nugger an hour ago | parent | next [-] | | I don't understand the timelines here at all. | |
| ▲ | lossolo 2 hours ago | parent | prev [-] | | > It is happening though internally in businesses I've worked with How many samples do you have? Which industries are they from? Which SaaS products were they using, exactly and which features? > ...a company I know recently got told their v1 API they relied on on some back office SaaS tool was being deprecated. V2 of the API didn't have the same features ... dev spends a week or two rebuilding that tool Was that SaaS the equivalent of the left-pad Node.js module? | | |
| ▲ | dismantlethesun 10 minutes ago | parent | next [-] | | I'm not the OP, but I do have an anecdote. We've got a backend pipeline that does image processing. At every step of the pipeline, it would copy small (less than 10MB) files down from an S3 storage source, do a task, then copy the results back up to the storage source. Originally it used AWS, but years ago it was decided that AWS was not cost effective, so we turned to other providers, OVH and Backblaze. Unfortunately, the reliability and throughput of both isn't as consistent as AWS, and this has been a constant headache. We were going to go back to AWS or find a new provider, but I proposed we use NFS instead. So we build nothing, pay nothing, get POSIX semantics back, and speed has gone up 3x. At peak we only copy 40GB of files per day, so S3 was never really necessary, except that our servers were distributed and that was the only way anyone could previously think of to give each server the same storage source. While this isn't exactly what the OP and you are talking about, I think it illustrates a point: SaaS software was seen as the hammer for all nails, giving you solutions and externalizing problems and accountability. Now that the industry has matured, building in-house has gotten easier, and cost centers need to be reduced, SaaS is going to be re-evaluated under the lens of "do we really need it?" I think for many people the answer is going to be no, you don't need enterprise-level solutions at all levels of your company, especially if you're nowhere near the Fortune 1000. | |
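Not the commenter's actual code, but the swap described above is small enough to sketch: each pipeline step stops copying through an object store and just reads/writes a path on a shared mount. The paths and names here are hypothetical, and a temp directory stands in for the NFS mount:

```python
from pathlib import Path
import tempfile

# Stand-in for the shared NFS mount every worker sees at the same path;
# in production this might be Path("/mnt/shared/pipeline") (hypothetical).
SHARED = Path(tempfile.mkdtemp())

def process_step(name: str, transform) -> Path:
    """One pipeline step: read the input from the shared mount, write the
    result back to it. Plain POSIX file I/O replaces the per-step
    download-from-S3 / upload-to-S3 round-trip."""
    src = SHARED / f"{name}.in"
    dst = SHARED / f"{name}.out"
    dst.write_bytes(transform(src.read_bytes()))
    return dst

# An upstream step drops a file on the mount...
(SHARED / "resize.in").write_bytes(b"raw image bytes")
# ...and this step consumes it in place, no object-store copy needed.
out = process_step("resize", lambda data: data.upper())
print(out.read_bytes())  # b'RAW IMAGE BYTES'
```

Because every server mounts the same export, "giving each server the same storage source" falls out of the filesystem for free, which is the whole point of the anecdote.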
| ▲ | wongarsu 43 minutes ago | parent | prev | next [-] | | Lots of companies make good money selling the equivalent of leftpad for confluence or jira. Anecdotally, that's exactly the kind of stuff that gets replaced with homegrown AI-built solutions at our company | |
| ▲ | hobs 21 minutes ago | parent | prev [-] | | I helped a build-averse company move off Fivetran to Debezium and some of their own internal tooling; for the same workload they are paying $40k less a month (yeah, Fivetran just raised their prices again). Now, that's not exactly the same thing, but their paucity of skills made them terrified to try something like this before: they had little confidence they could pull it off, and their exec team would just scoff and tell them to work on other revenue-generating activities. Now the confidence Claude gives them is hard to shake, which is not exactly the way I wanted the pendulum to swing, but it's almost $500k yearly back in their pockets. |
|
| |
| ▲ | kenjackson 43 minutes ago | parent | prev | next [-] | | It has dropped by maybe MORE than 90%. My son's school recently asked me to build some tools for them -- I did this over a decade ago for them, for free. I did it again using AI tools (a different problem though) and had it mostly done in 30 minutes (after I got the credentials set up properly -- that took more time than the main coding part). This would probably have been several days of work for me in the past. | |
| ▲ | TheRoque 22 minutes ago | parent [-] | | But in the past, you knew the codebase very well, and it was trivial to implement a fix or upgrade the software. Can the same be done with LLMs? From what I see, it depends on your luck. And if the LLMs can't help you, then you have to read a whole codebase you've never read before, and you quickly lose the initial benefits. I don't doubt we'll get there someday, though. | |
| ▲ | kenjackson 14 minutes ago | parent | next [-] | | I've hit this in little bursts, but one thing I've found is that LLMs are really good at reasoning about their own code and helping me understand how to diagnose and make fixes. I recently found some assembly source for some old C64 games and used an LLM to walk me through it (purely recreational). It was so good at it. If I was teaching a software engineering class, I'd have students use LLMs to do analysis of large code bases. One of the things we did in grad school was to go through gcc and contribute something to it. Man, that code was so complex and compilers are one of my specialties (at the time). I think having an LLM with me would have made the task 100x easier. | |
| ▲ | jazzyjackson 12 minutes ago | parent | prev | next [-] | | If I haven't looked at my own code in 6 months it might as well have been written by someone else. | | |
| ▲ | kenjackson 10 minutes ago | parent [-] | | The most brilliant programmer I know is me three years ago. I look at code I wrote and I'm literally wondering "how did I figure out how to do that -- that makes no sense, but exactly what is needed!" |
| |
| ▲ | emodendroket 17 minutes ago | parent | prev [-] | | They're better than one might expect at diagnosing issues from the error output or even just screenshots. |
|
| |
| ▲ | thot_experiment 2 hours ago | parent | prev | next [-] | | To be fair, writing SaaS software is an order, perhaps two orders, of magnitude more effort than writing software that runs on a computer and does the thing you want. There's a ton of stuff that SaaS is used for now that's basically trivial, where literally all the "engineering" effort is spent on ensuring vendor lock-in and retaining control of the software so that you can force people to keep paying you. |
| ▲ | jayd16 6 minutes ago | parent | prev | next [-] | | I mean, we have had the tech to crank out some little app for a long time. The point of SaaS used to be that you had a neck to strangle when things went south. I guess these days that's just impossible anyhow, and the prices aren't worth it, so we're rediscovering that software can be made instead of bought? There have been a lot of little blogs about "home cooking" style apps that you make for yourself. Maybe AI is the microwave-meal version. | |
| ▲ | xnx an hour ago | parent | prev | next [-] | | > Had the cost of building custom software dropped 90% It definitely has for me. I'm easily creating tools and utilities every week that I never would've attempted in the past. > This is as if writing down the code is not the biggest problem, or the biggest time sink, of building software. Lots of people can think logically and organize a process flow, but don't know all the ridiculous code incantations (and worse, the development and hosting environment details) to turn their plans into tools. It's trivial to one-shot all kinds of impressive toys in Gemini now, but it's going to be an even bigger deal when Google adds some type of persistent data storage. It will be like the rebirth of a fully modern Microsoft Access. |
| ▲ | klntsky 28 minutes ago | parent | prev | next [-] | | People vibe one-off solutions for themselves all the time. They just don't have the desire to productionalize them. Frankly, product knowledge is something LLMs are not that good at | |
| ▲ | paulddraper 2 hours ago | parent | prev [-] | | > Had the cost of building custom software dropped 90%, we would be seeing a flurry of low-cost, decent-quality SaaS offering all over the marketplace, possibly undercutting some established players. NODS HEAD VIGOROUSLY Last 12 months: Docusign down 37%, Adobe down 38%, Atlassian down 41%, Asana down 41%, Monday.com down 44%, Hubspot down 49%. Eventbrite being bought for pennies. They are being replaced by newer, smaller, cheaper, sometimes internal solutions. | | |
| ▲ | BobbyJo an hour ago | parent [-] | | Stock prices down or revenue down? The former would do very little to support your point. | | |
|
|
|
| ▲ | vb-8448 28 minutes ago | parent | prev | next [-] |
It's not just about "building" ... who is going to maintain all this new sub-par code pushed to production every day? Who is going to patch all the bugs, edge cases, and security vulnerabilities? |
| |
| ▲ | sdoering 15 minutes ago | parent | next [-] | | I happily got rid of a legacy application I inherited about a year ago as a somewhat technically savvy person (lost the pitch; another agency now has to deal with the shit). It was built by real people. Not a single line of AI slop in it. It was the most fragile crap I have ever had the misfortune to witness. Even in my wildest vibe-coding-a-prototype moments I was not able to get the AI to produce that amount of anti-patterns and bad code. I think we would be shocked to see what kind of human slop is out there running in production. The scale might change, but at least in this example, if I had rebuilt the app purely by vibe coding, the code quality and the security of the code would actually have improved. Even with the lowest vibe-coding effort thinkable. I am not in any way condoning bad practices, or shipping vibe code into prod without very, very thorough review. Far from it. I am just trying to provide a counterpoint to the narrative: in the medium-sized businesses I got to know in my time consulting/working in agencies, I have seen quite a metric ton of slop that would make coding agents shiver. | |
| ▲ | geon 11 minutes ago | parent [-] | | The argument isn’t that all slop is AI, but that all AI is slop. | | |
| ▲ | baq 9 minutes ago | parent | next [-] | | Turns out building enterprise software has more in common with generating slop than not. | |
| ▲ | denuoweb 9 minutes ago | parent | prev [-] | | That argument fails when considered by intelligent people. |
|
| |
| ▲ | soco 26 minutes ago | parent | prev [-] | | The theory is very simple: you tell the agent to patch the bug. The practice, though... | |
| ▲ | fullstackwife 19 minutes ago | parent [-] | | Yeah, in practice: would you board a Boeing 747 where some of the bugs were patched by agents? What percentage risk of malfunction are you going to accept as a passenger? | |
| ▲ | TuringNYC 12 minutes ago | parent | next [-] | | >> yeah, in practice: would you like to onboard a Boeing 747 where some of the bugs were patched by some agents, In this case, the traditional human process hasn't gone well either. | | | |
| ▲ | emodendroket 18 minutes ago | parent | prev [-] | | No. But most software products are nowhere near that sensitive and very few of them are developed with the level of caution and rigor appropriate for a safety-critical component. |
|
|
|
|
| ▲ | liampulles 7 minutes ago | parent | prev | next [-] |
I contracted briefly on a post-LLM-boom Excel modernization project (which ended up being mainly consulting, because I had to spend all my time explaining key considerations for a long-running software project that would fit their domain). The company had already tried to push two poor data analysts who kind of knew Python into the role of vibe coding a Python desktop application that they would then distribute to users. In the best-case scenario, these people would have vibe coded an application where the state was held in the UI, with no concept of architectural separation and no prospect of understanding what the code was doing a couple of months from inception (except through the lens of AI sycophancy), all packaged as a desktop application that would generate Excel spreadsheets they would then send to each other via email (for some reason, this is what they wanted -- probably because it is what they know). You can't blame the business for this, because there are no technical people in these orgs. They were very smart people in this case, doing high-end consultancy work themselves, but they are not technical. If I tried to do vibe chemistry, I'm sure it would be equally disastrous. The only thing vibe coding unlocks for these orgs by themselves is to run headfirst into an application that does horrendous things with customer data. It doesn't free up time for me as the experienced dev to bring the cost down, because again, there is so much work needed to bring these orgs to the point where they can actually run and own an internal piece of software that I'm not doing much coding anyway. |
|
| ▲ | debo_ 2 hours ago | parent | prev | next [-] |
| > I'm sure every organisation has hundreds if not thousands of Excel sheets tracking important business processes that would be far better off as a SaaS app. Far better off for who? People constantly dismiss spreadsheets, but in many cases, they are more powerful, more easily used by the people who have the domain knowledge required to properly implement calculations or workflow, and are more or less universally accessible. |
| |
| ▲ | robotresearcher 2 hours ago | parent | next [-] | | Spreadsheets are an incredible tool. They were a key innovation in the history of applications. I love them and use them. But it's very hard to have a large conventional cell-formula spreadsheet that is correct. The programming model / UI are closely coupled, so it's hard to see what's going on once your sheet is above some fairly low complexity. And many workplaces have monstrous sheets that run important things, curated lovingly (?) for many years. I bet many or most of them have significant errors. | | |
| ▲ | ASalazarMX an hour ago | parent [-] | | It's astounding how useful and intuitive they are, but my biggest gripe is how easy it is for anyone to break calculations, say SUM(<RANGE>), by simply adding one row/column/cell. I use Google Sheets frequently to track new things that fit into lists/tables, and giving someone else editor access without them knowing a few spreadsheet nuances means I have to recheck and correct them every month or two. | |
| ▲ | robotresearcher 6 minutes ago | parent [-] | | Does anyone make a sanity checker for Excel or Sheets that notices things like that? Would be incredibly helpful! |
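The failure mode described here is easy to model outside a spreadsheet: a frozen =SUM(A1:A3)-style range silently misses a row appended just past its edge, while a whole-column reference keeps up. A toy Python model of the two formulas (illustrative only, not real spreadsheet code):

```python
def sum_fixed_range(rows):
    """Like =SUM(A1:A3): the range was written when there were 3 rows
    and does not grow when rows are appended below it."""
    return sum(rows[0:3])

def sum_whole_column(rows):
    """Like =SUM(A:A): always covers the entire column."""
    return sum(rows)

expenses = [100, 250, 75]   # rows 1..3; both formulas agree here: 425
expenses.append(60)         # someone appends row 4, just below the range

print(sum_fixed_range(expenses))   # 425 -- silently misses the new row
print(sum_whole_column(expenses))  # 485 -- the total the user expected
```

A sanity checker of the kind asked about would essentially diff these two views and flag any cell range whose sum disagrees with the column it sits in.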
|
| |
| ▲ | martinald 2 hours ago | parent | prev | next [-] | | Author here. Of course not everything needs to be a web app. But I mean that a lot of the core sheets I see in businesses need more structure around them. Especially for collaboration, access controls, etc. Not to mention they could do with unit testing. | |
| ▲ | tonyarkles 2 hours ago | parent | next [-] | | Counterpoint: if a small part of the process is getting tweaked, how responsive can the team responsible for these apps be? That’s the killer feature of spreadsheets for business processes: the accountants can change the accounting spreadsheets, the shipping and receiving people can change theirs, and there’s no team in the way to act as a bottleneck. That’s also the reason that so-called “Shadow IT” exists. Teams will do whatever they need to do to get their jobs done, whether or not IT is going to be helpful in that effort. | | |
| ▲ | chasd00 2 hours ago | parent | next [-] | | i've seen many attempts to turn a widely used spreadsheet into a webapp. Eventually, it becomes an attempt to re-implement spreadsheets. The first time something changes and the user says "well in Excel i would just do this..." the dev team is off chasing existing features of excel for eternity and the users are pissed because it takes so long and is buggy, meanwhile, excel is right there ready and waiting. | |
| ▲ | LPisGood 2 hours ago | parent | prev [-] | | I have never heard of shadow IT. What is that? | | |
| ▲ | _puk 2 minutes ago | parent | next [-] | | It's where you have processes set up to manage your IT infra, but those very processes make it impossible, or too time-consuming, to actually use anything. The team that needs something ends up managing it itself, without central IT support (or visibility, or security, etc.). Think being given a locked-down laptop and no admin access. Either get IT to give you admin access, or buy another laptop that isn't visible to IT and lets you install whatever you need to get your job done. The latter is often quicker and easier. | |
| ▲ | analog31 an hour ago | parent | prev [-] | | It's when the users start taking care of IT issues themselves. Maybe the name comes from the Shadow Cabinet in England? Where it might not be obvious is that IT in this context is not just pulling wires and approving tickets, but is "information technology" in the broader sense of using computers to solve problems. This could mean creating custom apps, databases, etc. A huge amount of this goes on in most businesses. Solutions can range from trivial to massive and mission-critical. |
|
| |
| ▲ | swatcoder 2 hours ago | parent | prev | next [-] | | It's rare that a third-party SaaS can approximate one of these "core sheets", and most of the exceptions have already been explored over the last several decades. You have to remember that a SaaS, just like shrink-wrap software, reflects someone else's model of a process or workflow, and the model and implementation evolve per the timeline/agenda of its publisher. For certain parts of certain workflows, where there's a highly normative and robust industry standard, like invoicing or accounting or inventory tracking, that compromise is worthwhile, and we've had both shrink-wrap and SaaS products servicing those needs for a very long time. We see churn in which application is most popular and what its interface and pricing look like, but the domains being served have mostly been constant (mostly only growing as new business lines/fashions emerge and mature). Most of the stuff that remains in a "core sheet" could benefit from the attention of a practiced engineer who could make it more reliable and robust, but it almost always reflects a business process that is somehow peculiar to the organization. As Access and FoxPro and VBA and Zapier and so many tools have done before, LLM coding assistants and software-building tools offer some promise in shaking these up by letting orgs convert their "core sheets" into "internal applications". But that's not an opportunity for SaaS entrepreneurs. It's an opportunity for LLM experts to come in and pitch private, bespoke software solutions for a better deal than whatever the Access guy promised 20 years ago. Because of the long-term maintenance challenges that still plague code that's too LLM-colored, I wouldn't want to be the expert pitching that work, but it's an opportunity for some ambitious folks for sure. |
| ▲ | ASalazarMX an hour ago | parent | prev [-] | | > a lot of core sheets I see in businesses need more structure round them We had this decades ago: it was called dBase, and (pre-Microsoft) FoxPro was great too. Visual FoxPro and MS Access were a brutal downgrade of every good aspect of it. Imagine if today some startup offered a full-stack(TM) platform that included an IDE, a language with SQL-like features, a visual UI designer, and a database; generated small standalone binaries; was performant; and was smaller than most web homepages. There are modern options, like Servoy or Lianja, but they're too "cloudy" to be considered equivalents. Edit: it seems there's OpenXava too, but that is Java-based, too hardcore for non-professional programmers IMO. The beauty of xBase was that even a highschooler could whip up a decent business application if the requirements were modest. |
| |
| ▲ | nesarkvechnep 2 hours ago | parent | prev | next [-] | | I’m yet to see a spreadsheet workflow successfully replaced by something else. | | |
| ▲ | crubier 2 hours ago | parent [-] | | Streamlit apps or similar are doing a great job at this where I'm at. As simple to build and deploy as Excel, but with the right data types, the right UI, the right access and version control, the right programming language that LLMs understand, the right SW ecosystem and packages, etc. | | |
| ▲ | SauntSolaire an hour ago | parent [-] | | Are they actually as simple to deploy as Excel? My guess would be that most Streamlit apps never make it further than the computer they're written on. |
|
| |
| ▲ | jimbokun 2 hours ago | parent | prev [-] | | Better security. Better availability. Less chance of losing data. Assuming the SaaS is implemented competently, of course. Otherwise there's not much advantage. |
|
|
| ▲ | blauditore 17 minutes ago | parent | prev | next [-] |
These kinds of future-prediction posts keep coming, and I'm tired of them. Reality is always more boring, less extreme, and slower to change, because there are too many factors involved, and the authors never account for all of them. Maybe we should collect all of these predictions, then go back in 5-10 years and see if anyone was actually right. |
|
| ▲ | Normal_gaussian 18 minutes ago | parent | prev | next [-] |
> I've had Claude Code write an entire unit/integration test suite in a few hours (300+ tests) for a fairly complex internal tool. This would take me, or many developers I know and respect, days to write by hand. I'm not sure about this. The tests I've gotten out in a few hours are the kind I'd approve if another dev sent them, but they haven't really ended up finding meaningful issues. |
| |
| ▲ | kace91 11 minutes ago | parent | next [-] | | Have you noticed how it's never "I got this awesome code!"? It's always "I got good code, trust me". People say their prompts are good, awesome code is being generated, it solved a month's worth of work in a minute. Nobody comes with receipts. | | |
| ▲ | dboreham 9 minutes ago | parent [-] | | I keep seeing posts like this so I decided to video record all my LLM coding sessions and post them on YouTube. Early days, I only had the idea on Saturday. |
| |
| ▲ | martinald 10 minutes ago | parent | prev | next [-] | | Just to be clear, they weren't stupid 'is 1+1=2' type tests. I had the agent scan the UX of the app being built, find all the common flows, and save them to a markdown file. I then asked the agent to find edge cases for them and come up with tests for those scenarios. I then set off parallel subagents to develop the test suite. It found some really interesting edge cases running them -- so even if they never fail again, there is value there. I realise in hindsight this makes it sound like the tests were just a load of nonsense. I was blown away by how well Claude Code + Opus 4.5 + 6 parallel subagents handled this. | |
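To make "edge-case tests" concrete, here is the flavor of thing such a suite catches. The function and its rules are invented for illustration, not taken from the app described above:

```python
def validate_quantity(raw: str) -> int:
    """Hypothetical checkout-flow helper: parse a cart-quantity field."""
    value = int(raw.strip())
    if not 1 <= value <= 99:
        raise ValueError("quantity out of range")
    return value

# Happy path -- the kind of test humans usually stop at:
assert validate_quantity("3") == 3

# Edge cases of the sort an agent sweep surfaces: whitespace, bounds, zero.
assert validate_quantity(" 7 ") == 7
for bad in ("0", "100", "-3"):
    try:
        validate_quantity(bad)
        raise AssertionError(f"{bad!r} should have been rejected")
    except ValueError:
        pass
```

The value claimed above is exactly in the second half: even if those boundary tests never fail again, writing them forced the edge cases to be enumerated and handled once.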
| ▲ | Aeolun 10 minutes ago | parent | prev [-] | | I find I get better tests if I use agents to generate tests. |
|
|
| ▲ | devnull3 29 minutes ago | parent | prev | next [-] |
I think the cost of prototyping has definitely gone down. The cost of developing production-grade software that you want people to rely on and pay for has not gone down as much. The "weak" link is still the human. Debugging complex production issues needs intimate knowledge of the code. That's not gonna happen in the next 3-4 years at least. |
|
| ▲ | bob1029 2 minutes ago | parent | prev | next [-] |
In the context of B2B SaaS products that require a high degree of customization per client, I think there could be an argument for this figure. The biggest bottleneck I have seen is converting the requirements into code fast enough to prove to the customer that they didn't give us the right/sufficient requirements. Up until recently, you had to avoid spending time on code if you thought the requirements were bad. Throwing away 2+ weeks of work over ambiguity is a terrible time. Today, you could hypothetically get lucky on a single prompt and be ~99% of the way there in one shot. Even if that other 1% sucks to clean up, imagine if it was enough to get the final polished requirements out of the customer. You could crap out an 80% prototype in the time it takes to complete one daily standup call. Is the fact that it's only 80% there bad? I don't think so in this context. Handing a customer something that almost works is much more productive than fucking around with design documents and ensuring requirements are perfectly polished to developer preferences. A slightly wrong thing gets you the exact answer a lot faster than nothing at all. |
|
| ▲ | BigHatLogan 3 hours ago | parent | prev | next [-] |
Good write-up. I don't disagree with any of his points, but does anybody here have practical suggestions on how to move forward and think about one's career? I've been a frontend (with a little full-stack) developer for a few years now, and much of the modern landscape concerns me, specifically with how I should be positioning myself. I hear vague suggestions like "get better at the business domain" and other things like that. I'm not discounting any of it, but what does this actually mean or look like in your day-to-day life? I'm working at a mid-sized company right now. I use Cursor and some other tools, but I can't help but wonder if I'm still falling behind or doing something wrong. Does anybody have any thoughts or suggestions? The landscape and horizon just seem so foggy to me right now. |
| |
| ▲ | dclnbrght 10 minutes ago | parent | next [-] | | https://news.ycombinator.com/item?id=46197349 | |
| ▲ | colonCapitalDee 2 hours ago | parent | prev | next [-] | | Blind leading the blind, but my thinking is this: 1. Use the tools to their fullest extent, push boundaries, and figure out what works and what doesn't. 2. Be more than your tools. As long as you + LLM is significantly more valuable than just an LLM, you'll be employed. I don't know how "practical" this advice is, because it's basically what you're already doing, but it's how I'm thinking about it. | |
| ▲ | ares623 39 minutes ago | parent [-] | | Realistically, someone else + LLM at -10% compensation will be employed | | |
| ▲ | ubercow13 12 minutes ago | parent [-] | | Then why wasn't someone else employed at -10% compensation instead of you before LLMs? |
|
| |
| ▲ | martinald 2 hours ago | parent | prev | next [-] | | Author here, thanks for your kind words! I think it's about looking at what you're building and proactively suggesting/prototyping what else could be useful for the business. This gets tricky in large corps where things are often quite siloed, but can you think "one step ahead" of the product requirements and build that as well? Regardless of whether you build it, I think it's a good exercise to run on any project: what would you build next, and what does the business actually want? If your guesses are getting closer to the actual requests, I think it's a positive sign that you understand the domain. | |
| ▲ | BigHatLogan 2 hours ago | parent [-] | | I think you're right about trying to stay one step ahead of product requirements. Maybe my issue here is that I'm looking for another "path" where one might not exist, at least not a concretely defined one. From childhood to now, things were set in front of me and I just sort of did them, but now it feels like we're entering a real fog of war. It would be helpful, as you suggest, to start shifting away from "I code based on concrete specs" to "I discover solutions for the business." Thanks for the reply (and for the original essay). It has given me a lot to chew on. |
| |
| ▲ | embedding-shape 2 hours ago | parent | prev | next [-] | | Don't chase specific technologies, especially not ones driven by for-profit companies. Chase ideas, become great in one slice of the industry, and the very least you can always fall back on that. Once established within a domain, you can always try to branch out, and feel a lot more comfortable doing so. Ultimately, software is for doing something, and that something can be a whole range of things. If you become really good at just a slice of that, things get a lot easier regardless of the general state of the industry. | | |
| ▲ | BigHatLogan 2 hours ago | parent [-] | | Thanks for the response. When you say "one slice of the industry", is the suggestion to understand the core business of whatever I'm building instead of being the "specs to code" person? I guess this is where the advice starts to become fuzzy and vague for me. |
| |
| ▲ | nick486 2 hours ago | parent | prev | next [-] | | It's always been foggy. Even without AI, you were always at risk of having your field disrupted by some tech you didn't see coming. AI will probably replace the bottom ~30-70% (depends who you ask) of dev jobs. Don't get caught in the dead zone when the bottom falls out. Exactly how we'll train good devs in the future, if we don't give them a financially stable environment to learn in while they're bad, is an open question. |
| ▲ | MrPapz 2 hours ago | parent | prev | next [-] | | My suggestion would be to move to a higher level of abstraction, change the way which you view the system. Maybe becoming full stack? Maybe understanding the industry a little deeper? Maybe analyzing your company's competitors better? That would increase your value for the business (a bit of overlap with product management though). Assuming you can now deliver the expected tech part more easily, that's what I'd do. As for me, I've moved to a permanent product management position. | |
| ▲ | ronald_petty 2 hours ago | parent | prev | next [-] | | Great question, hard to quickly answer. My $.02: Show you can tackle harder problems. That includes knowing which problems matter. That happens by learning a "domain", versus just learning a tool (e.g. web development) in a domain. Change is scary, but that's because most aren't willing to change. Part of the "scare" is the fear of lost investment (e.g. picking the wrong major or career). I can appreciate that, but with a little flexibility, that investment can be repurposed quicker today than pre-2022, thanks to AI. AI is just another tool; treat it like a partner, not a replacement. That can also include learning a domain. Ask AI how a given process works, its history, regulations, etc. Go confirm what it says. Have it break it down. We can now learn faster than ever before. Trust but verify. You are using Cursor; that shows a willingness to try new things. Now try to move faster than before, go deeper into the challenges. That is always going to be valued. | |
| ▲ | samdoesnothing 29 minutes ago | parent | prev | next [-] | | Also blind leading the blind here, but I see two paths. 1) Specialize in product engineering, which means taking on more business responsibility. Maybe it means building your own products, or maybe it means trying to get yourself into a more customer-facing or managerial role? I'm not sure. Probably do this if you think AI will be replacing most programmers. 2) Specialize in hard programming problems that AI can't do. Frontend is probably most at risk, low-level systems programming least at risk. Learn Rust or C/C++, or maybe backend (C#/Java/Go) if you don't want to transition all the way to low-level systems stuff. That being said, I don't think AI is really going to replace us anytime soon. | |
| ▲ | catigula 2 hours ago | parent | prev | next [-] | | Nobody knows the answer. Answers I see are typically "be a product manager" or "start your own business" which obviously 95% of developers can't/don't want to do. | |
| ▲ | isoprophlex 3 hours ago | parent | prev [-] | | Sheep farming sounds nice. Or making wooden furniture. Something physical. |
|
|
| ▲ | JohnMakin 2 hours ago | parent | prev | next [-] |
| This article mentions cost to ship, but ignores that the largest cost of any software project isn't consumed by how long it takes to get to market, but by maintenance and addition of new features. How is agentic coding doing there? I've only seen huge, unmaintainable messes so far. |
| |
| ▲ | p2detar 2 hours ago | parent | next [-] | | While this is true, I think some fields like game development may not always have this problem. If your goal is to release a non-upgradable game - FPS, arcade, single-player titles - maintenance may be much less important than shipping. edit: typos | | |
| ▲ | liampulles an hour ago | parent [-] | | I think that is an applicable domain, but the problem is that every gamer I know who is not in the tech industry is vehemently opposed to AI. | | |
| ▲ | emodendroket 3 minutes ago | parent [-] | | Well, they just love complaining. You won't find many who profess to like DLC, yet that sells. |
|
| |
| ▲ | bdangubic 2 hours ago | parent | prev [-] | | one year in, AI slop > Human-written slop | | |
| ▲ | JohnMakin 2 hours ago | parent | next [-] | | I am highly skeptical of this claim. | | |
| ▲ | bdangubic 2 hours ago | parent [-] | | personal experience, not general claim. I am 30-years in the industry and have seen a lot of human-written code… | | |
| ▲ | martinald 2 hours ago | parent | next [-] | | Agreed. I think a core problem is many developers (on HN) don't realise how "bad" so much human written code is. I've seen unbelievably complex logistics logic coded in... WordPress templates and plugins to take a random example. Actually virtually impossible to figure out - but AI can actually extract all the logic pretty well now. | |
| ▲ | bdangubic 2 hours ago | parent | prev [-] | | there are many millions of people writing code… that’s way too many to get any good quality. you might get lucky and get involved with codebase which does not make you dizzy (or outright sick) but most of us are not that lucky |
|
| |
| ▲ | jimbokun 2 hours ago | parent | prev [-] | | Does this mean the AI slop is higher quality or that there's more of it? |
|
|
|
| ▲ | criemen 7 minutes ago | parent | prev | next [-] |
| > This takes a fairly large mindset shift, but the hard work is the conceptual thinking, not the typing. But the hard work always was the conceptual thinking? At least at and beyond the Senior level, for me it was always the thinking that's the hard work, not converting the thoughts into code. |
|
| ▲ | SoftTalker an hour ago | parent | prev | next [-] |
| Where are the billions of dollars spent on GPUs and new data centers accounted for in this estimation? |
| |
| ▲ | bdavid21wnec 26 minutes ago | parent [-] | | Ya, completely agree. These companies will eventually push these costs to the consumer - might be in 1-2 yrs, but it will eventually happen - and through regulatory capture they'll make it harder and harder to run local AI models because of “security” reasons. |
|
|
| ▲ | dclnbrght 12 minutes ago | parent | prev | next [-] |
| Domain knowledge is the moat, we need to rethink career planning
https://news.ycombinator.com/item?id=46197349 |
|
| ▲ | recursive 3 hours ago | parent | prev | next [-] |
| Did I miss something or is there actually no evidence provided that costs have dropped? |
| |
| ▲ | TheRoque 15 minutes ago | parent | next [-] | | Especially if you factor-in the fact that the AI companies are losing money for now, and that it's not sustainable. | |
| ▲ | isoprophlex 3 hours ago | parent | prev [-] | | Well... no evidence, but there's obviously a graph with a line going places! |
|
|
| ▲ | paoaoaks 2 hours ago | parent | prev | next [-] |
> written an entire unit/integration test suite in a few hours It’s often hard to gauge how “good” blog writers are, but tidbits like this make it easy to disregard the author’s opinions. I’ve worked in many codebases where the test writers share the author’s sentiment. They are awful and the tests are at best useless and often harmful. Getting to this point in your career without understanding how to write effective tests is a major red flag. |
| |
| ▲ | p1necone 2 hours ago | parent | next [-] | | I've used LLMs to help me write large sets of test cases, but it requires a lot of iteration and the mistakes they make are both very common and insidious. Stuff like reimplementing large amounts of the code inside the tests because testing the actual code is "too hard", spending inordinate amounts of time covering every single edge case on some tiny bit of input processing unrelated to the main business logic, mocking out the code under test, changing failing tests to match obviously incorrect behavior... basically all the mistakes you'd expect from totally green devs who don't understand the purpose of tests. It saves a shitload of time setting up all the scaffolding and whatnot, but unless they very carefully reviewed and either manually edited or iterated a lot with the LLM, I would be almost certain the tests were garbage, given my experiences. (This is with fairly current models too, btw - mostly Sonnet 4 and 4.5. Also, in fairness to the LLM, a shocking proportion of tests written by real people that I've read are also unhelpful garbage; I can't imagine the training data is of great quality.) | |
| ▲ | IceDane 44 minutes ago | parent | prev [-] | | But we have 500% code coverage?!?! |
|
|
| ▲ | henning 3 minutes ago | parent | prev | next [-] |
| By making up numbers and not supplying any evidence, you can come to any conclusion you like! Then you get to draw a graph with no units on it. Finally, you can say things that are objectively false like "These assertions are rapidly becoming completely false". |
|
| ▲ | Agingcoder 30 minutes ago | parent | prev | next [-] |
| > One objection I hear a lot is that LLMs are only good at greenfield projects. I'd push back hard on this. I've spent plenty of time trying to understand 3-year-old+ codebases where everyone who wrote it has left. Where I am, 3 year old is greenfield, and old and large is 20 years old and has 8million lines of nasty c++.
I’ll have to wait a bit more I think … |
|
| ▲ | jdmoreira 2 hours ago | parent | prev | next [-] |
I must be holding it wrong then, because I use Claude Code all the time and I do think it's quite impressive… still, I can't see where the productivity gains go, nor am I even sure they exist (they might, I just can't tell for sure!) |
| |
| ▲ | hurturue 7 minutes ago | parent [-] | | If you go back and forth with the model and discuss/approve every change it makes, that's the problem. You need to give it a biggish thing so it can work 15 min on it. And in those 15 min you prepare the next one(s). | | |
| ▲ | jdmoreira 4 minutes ago | parent [-] | | Sure. But am I supposed to still understand that code at some point? Am I supposed to ask other team members to review and approve that code as if I had written it? I'm still trying to ship quality work by the same standards I had 3 or 5 years ago. |
|
|
|
| ▲ | tschellenbach 37 minutes ago | parent | prev | next [-] |
It depends. For AI to work for large projects (I did a post on this forever ago, in AI terms: https://getstream.io/blog/cursor-ai-large-projects/) you need: a staff-level engineer to guide it, great standardization, and testing best practices. And yes, in that situation you can go 10-50x faster. Many teams/products are not in that environment though. |
| |
| ▲ | andybak 34 minutes ago | parent [-] | | I work on a big ball of open source spaghetti and AI has become invaluable in helping me navigate my way through it. Even when it's wrong - it gives me valuable clues. |
|
|
| ▲ | neilv 2 hours ago | parent | prev | next [-] |
| Copying GPL code, with global search&replace of the brand names, has always lowered the cost of software 'development' dramatically. |
| |
| ▲ | bdangubic 2 hours ago | parent [-] | | I would love to see where I can find full test coverage to paste in for an internal tool that I can search & replace on to get it working… |
|
|
| ▲ | bdavid21wnec 2 hours ago | parent | prev | next [-] |
I keep seeing articles like these pop up. I am in the industry but not in the “AI” industry.
What I have no concept of is: is the current subsidized, VC-funded pricing anywhere close to what the final product will cost?
I always fall back to the Uber paradox. Yes, it was great at first; now it’s 3x what it cost and has only given cabs pricing power. This was good for consumers to start, but now it’s just another part of the k-shaped economy.
So is that ultimately where AI goes? The top percent can afford a high monthly subscription and the not-so-fortunate get their free 5 minutes per month |
| |
| ▲ | martinald 2 hours ago | parent [-] | | But even if that did happen, the open source models are excellent and cost virtually nothing? Like I prefer Opus 4.5 and Gemini 3 to the open weights models, but if Anthropic or Google upped the pricing 10x then everyone would switch to the open weights models. Arguably you could say that the Chinese labs may stop releasing them, true, but even if all model development stopped today then they'd still be extremely useful and a decent competitor. | | |
| ▲ | bdavid21wnec 2 hours ago | parent [-] | | Again, I’m not in the “AI” industry, so I don’t fully understand the economics and don’t run open models locally. What’s the cost to run this stuff locally, and what type of hardware is required? When you say virtually nothing, do you mean that’s because you already have a 2k laptop or GPU? Again, I am only asking because I don’t know. Would these local models run OK on my 2016 Intel Mac Pro, or do I need to upgrade to the latest M4 chip with 32GB memory for it to work correctly? | |
| ▲ | criemen 18 minutes ago | parent [-] | | The large open-weights models aren't really usable for local running (even with current hardware), but multiple providers compete on running inference for you, so it's reasonable to assume that there is and will be a functioning marketplace. |
|
|
|
|
| ▲ | an0malous 2 hours ago | parent | prev | next [-] |
| Then why is all my software slower, buggier, and with a worse UX? |
| |
|
| ▲ | rmnclmnt 14 minutes ago | parent | prev | next [-] |
Can we also take into account the mental cost associated with building software? Because the way I see it, managing output from agents is way more exhausting than doing it ourselves. And obviously the cost of not upskilling in intricate technical details as much as before (aka staying at the high-level perspective) will have to be paid at some point |
|
| ▲ | azov 2 hours ago | parent | prev | next [-] |
| If the cost of building software dropped so much - where is that software?.. Was there an explosion of useful features in any software product you use? A jump in quality? Anything tangible an end user can see?.. |
|
| ▲ | cloogshicer 23 minutes ago | parent | prev | next [-] |
| > I've had Claude Code write an entire unit/integration test suite in a few hours (300+ tests) I'd love to see someone do this, or a similar task, live on stream. I always feel like an idiot when I read things like this because despite using Claude Code a lot I've never been able to get anything of that magnitude out of it that wasn't slop/completely unusable, to the point where I started to question if I hadn't been faster writing everything by hand. Claiming that software is now 90% cheaper feels absurd to me and I'd love to understand better where this completely different worldview comes from. Am I using the tools incorrectly? Different domains/languages/ecosystems? |
|
| ▲ | e10jc 2 hours ago | parent | prev | next [-] |
I totally agree with you. I am working on a new platform right now for a niche industry. Maybe there’s $10m ARR to make total in the industry. Last year, it wouldn’t have been worth the effort to raise, hire a PM, a few devs, QA, etc. But for a solo dev like myself with AI, it definitely is worth it now. |
|
| ▲ | MangoToupe 13 minutes ago | parent | prev | next [-] |
| No. Not unless your business wasn't competitive to begin with |
|
| ▲ | bitwize 17 minutes ago | parent | prev | next [-] |
| Maybe I'm holding it wrong, but I don't actually see the huge productivity gains from LLM-assisted software development. Work is leaning on us to use AI—not requiring it yet, but we're at DEFCON 3, borderline 2 (DEFCON 1 being a Shopify situation). My team's experience is that it needs LOTS of handholding and manual fixing to produce even something basic that's remotely fit for production use. I closed a comment from ~2.5y ago (https://news.ycombinator.com/item?id=36594800) with this sentence: "I'm not sure that incorporating LLMs into programming is (yet) not just an infinite generator of messes for humans to clean up." My experience with it is convincing me that that's just what it is. When the bills come due, the VC money dries up, and the AI providers start jacking up their prices... there's probably going to be a boom market for humans to clean up AI messes. |
|
| ▲ | nonameiguess 20 minutes ago | parent | prev | next [-] |
| I don't really build software any more and have moved into other parts of the business. But I'm still a huge user of software and I'd just echo all the other comments asking if it's so easy to get all these great tools built and shipped, where are they? I can see that YouTube is flooded with auto-generated content. I can see that blogspam has skyrocketed beyond belief. I can see that the number of phishing texts and voicemails I get every day has gone through the roof. I don't see any flood of new CNCF incubating projects. I don't see that holy grail entire OS comparable to Linux but written in Rust. I don't see the other holy grail new web browser that can compete with Firefox, Chrome, and Safari. It's possible people are shipping more of the stripped down Jira clones designed for a team of ten that gets 60 customers and stops receiving updates after 2 years but that's not the kind of software that would be visible to me. If you're replacing spreadsheets with a single-purpose web UI with proper access control and concurrent editing that doesn't need Sharepoint or Google Workspaces, fine, but if you're telling me that's going to revolutionize the entire industry and economy and justify trillions of dollars in new data centers, I don't think so. I think you need to actually compete with Sharepoint and Google Workspaces. Supposedly, Google and Microsoft claim to be using LLMs internally more than ever, but they're publicly traded companies. If it's having some huge impact, surely we'll see their margins skyrocket when they have no more labor costs, right? |
|
| ▲ | mwkaufma 3 hours ago | parent | prev | next [-] |
| Feels almost too on-the-nose to write "Betteridge's Law of Headlines" but the answer is obviously no. Look no further than the farce of their made-up "graph" of cost over time with no units or evidence. |
|
| ▲ | atmavatar an hour ago | parent | prev | next [-] |
| > Has the cost of building software just dropped 90%? I believe Betteridge's law of headlines [1] applies here: No. 1. https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headline... |
|
| ▲ | sublinear 24 minutes ago | parent | prev | next [-] |
| Perhaps the cost will drop over time, but it will be because writing code is becoming more accessible. It's not just because of AI, but the natural progress of education and literacy on the topic that would have happened anyway. What I see are salaries stagnating and opportunity for new niche roles or roles being redefined to have more technical responsibility. Is this not the future we all expected before AI hype anyway? People need to relax and refocus on what matters. |
|
| ▲ | qwertyastronaut 2 hours ago | parent | prev | next [-] |
| I don’t know if it’s 90%, but I’m shipping in 2 days things that took 2-4 weeks before. Opus 4.5 in particular has been a profound shift. I’m not sure how software dev as a career survives this. I have nearly 0 reason to hire a developer for my company because I just write a spec and Claude does it in one shot. It’s honestly scary, and I hope my company doesn’t fail because as a developer I’m fucked. But… statistically my business will fail. I think in a few years there will only be a handful of software companies—the ones who already have control of distribution. Products can be cloned in a few weeks now; not long until it’s a few minutes. I used to see a new competitor once every six months. Now I see a new competitor every few hours. |
| |
| ▲ | throwaway31131 2 hours ago | parent | next [-] | | Just out of curiosity, what software product were you making in two weeks before using AI? Or maybe I’m misunderstanding your use of shipping. | |
| ▲ | martinald 2 hours ago | parent | prev | next [-] | | Agreed. Opus 4.5 does feel like a real shift and I have had exactly your experience. I've shipped stuff that would have taken me weeks in days. And really to a much higher quality standard - test suites would have been far smaller if I'd built manually. And probably everything in MVP Bootstrap CSS. | |
| ▲ | llmslave 2 hours ago | parent | prev | next [-] | | Yeah, I really think software engineering is over. Not right now, but Opus 4.5 is incredible, and it won't be long before 5 and 5.5 are released. They won't automate everything, but the bar for being able to produce working software will plummet. | |
| ▲ | LPisGood 2 hours ago | parent | prev | next [-] | | I feel like I have have heard this exact statement about model FooBar X.Y about a half dozen times over the last couple of years. | |
| ▲ | IceDane 43 minutes ago | parent | prev | next [-] | | I'd love to hear what sort of software you are building. | |
| ▲ | bdangubic 2 hours ago | parent | prev [-] | | this is roughly same for me |
|
|
| ▲ | on_the_train 2 hours ago | parent | prev | next [-] |
AI saves me like an hour per month, tops. I still don't understand the hype. It's a solution in search of a problem. It can't solve the hard coding problems. And it doesn't say when it can't solve the easy ones either. It's less valuable than ReSharper. So the business value is maybe $10 a month. That can't finance this industry. |
| |
| ▲ | averageRoyalty 2 hours ago | parent [-] | | I read these sort of comments every so often and I do not understand them. You are in a sea of people telling you that they are developing software much quicker which ticks the required boxes. I understand that for some reason this isn't the case for your work flow, but obviously it has a lot more value for others. If you are a chairmaker and everyone gains access to a machine that can spit out all the chair components but sometimes only spits out 3 legs or makes a mistake on the backs, you might find it pointless. Maybe it can't do all the nice artisan styles you can do. But you can be confident others will take advantage of this chair machine, work around the issues and drive the price down from $20 per chair to $2 per chair. In 24 months, you won't be able to sell enough of your chairs any more. | | |
| ▲ | throwaway31131 2 hours ago | parent | next [-] | | Maybe, or maybe the size of the chair market grows because with $2 chairs more buyers enter. The high end is roughly unaffected because they were never going to buy a low end chair. | |
| ▲ | on_the_train 2 hours ago | parent | prev [-] | | > You are in a sea of people telling you that they are developing software much quicker which ticks the required boxes But that's exactly not the case. Everyone is wondering what tf this is supposed to be for. People are vehemently against this tech, and yet it gets shoved down our throats although it's prohibitively expensive. Coding should be among the easiest problems to tackle, yet none of the big models can write basic "real" code. They break when things get more complex than pong. And they can't even write a single proper function with modern c++ templating stuff for example. | | |
| ▲ | Agingcoder 21 minutes ago | parent [-] | | They can actually - I thought they couldn’t , but the latest ones can, much to my surprise. I changed my mind after playing with cursor 2 ( cursor 1 had lasted all of 10 mins), which actually wrote a full blown app with documentation, tests , coverage, ci/cd, etc. I was able to have it find a bug I encountered when using the app - it literally ran the code, inserted extra logs, grepped the logs , found the bug and fixed it. |
|
|
|
|
| ▲ | andrewstuart 3 hours ago | parent | prev | next [-] |
If you’re quicker, then competition heats up, management wants more done, efficiencies are soon forgotten, and new expectations and baselines are set. |
| |
| ▲ | foxglacier 2 hours ago | parent [-] | | Sure but that's the good of it. Lower labor cost = more productivity. The customer wins in the end because the equivalent product is cheaper or a better product costs the same. Businesses and employees still have to compete against each other so things won't get easier for them in the long term. | | |
| ▲ | Draiken 19 minutes ago | parent [-] | | Except this is capitalism, so any improvements will go disproportionately to the owners. This narrative of improvements for customers has been repeated for decades and it keeps being wrong. More stock buybacks and subscriptions! |
|
|
|
| ▲ | more_corn 3 hours ago | parent | prev | next [-] |
| Ask someone who builds software for a fee. Are you able to do 90% more? Fire 9/10 engineers? Produce 90% faster? No, no, and no. |
| |
| ▲ | recursive 3 hours ago | parent | next [-] | | It's even worse. If the cost drops by 90%, the corresponding productivity increase should be 900%, not 90%. | |
| ▲ | bdangubic 2 hours ago | parent | prev [-] | | 90% more - nope. 35-55% more, just about on average. I am 30-year in though, not sure what these numbers are for junior devs | | |
| ▲ | nottorp 2 hours ago | parent [-] | | https://arstechnica.com/ai/2025/07/study-finds-ai-tools-made... They thought they were faster too. Yes, I know, AGI is just around the corner, we just need to wait more because "agents" are improving every day. In the mean time, LLMs are kinda useful instead of web searches, mostly because search is both full of spam and the search providers are toxic. | | |
| ▲ | bdangubic an hour ago | parent [-] | | I am just talking about personal point-of-view, wasn’t interviewed by Arstechnica or others that live off clickbait |
|
|
|
|
| ▲ | zackmorris an hour ago | parent | prev [-] |
| *90% so far.. I've only been working with AI for a couple of months, but IMHO it's over. The Internet Age which ran 30 years from roughly 1995-2025 has ended and we've entered the AI Age (maybe the last age). I know people with little programming experience who have already passed me in productivity, and I've been doing this since the 80s. And that trend is only going to accelerate and intensify. The main point that people are having a hard time seeing, probably due to denial, is that once problem solving is solved at any level with AI, then it's solved at all levels. We're lost in the details of LLMs, NNs, etc, but not seeing the big picture. That if AI can work through a todo list, then it can write a todo list. It can check if a todo list is done. It can work recursively at any level of the problem solving hierarchy and in parallel. It can come up with new ideas creatively with stable diffusion. It can learn and it can teach. And most importantly, it can evolve. Based on the context I have before me, I predict that at the end of 2026 (coinciding with the election) America and probably the world will enter a massive recession, likely bigger than the Housing Bubble popping. Definitely bigger than the Dot Bomb. Where too many bad decisions compounded for too many decades converge to throw away most of the quality of life gains that humanity has made since WWII, forcing us to start over. I'll just call it the Great Dumbpression. If something like UBI is the eventual goal for humankind, or soft versions of that such as democratic socialism, it's on the other side of a bottleneck. One where 1000 billionaires and a few trillionaires effectively own the world, while everyone else scratches out a subsistence income under neofeudalism. One where as much food gets thrown away as what the world consumes, and a billion people go hungry. 
One where some people have more than they could use in countless lifetimes, including the option to cheat death, while everyone else faces their own mortality. "AI was the answer to Earth's problems" could be the opening line of a novel. But I've heard this story too many times. In those stories, the next 10 years don't go as planned. Once we enter the Singularity and the rate of technological progress goes exponential, it becomes impossible to predict the future. Meaning that a lot of fringe and unthinkable timelines become highly likely. It's basically the Great Filter in the Drake equation and Fermi paradox. This is a little hard for me to come to terms with after a lifetime of little or no progress in the areas of tech that I care about. I remember in the late 90s when people were talking about AI and couldn't find a use for it, so it had no funding. The best they could come up with was predicting the stock market, auditing, genetics, stuff like that. Who knew that AI would take off because of self-help, adult material and parody? But I guess we should have known. Every other form of information technology followed those trends. Because of that lack of real tech as labor-saving devices to help us get real work done, there's been an explosion of phantom tech that increases our burden through distraction and makes our work/life balance even less healthy as underemployment. This is why AI will inevitably be recruited to demand an increase in productivity from us for the same income, not decrease our share of the workload. What keeps me going is that I've always been wrong about the future. Maybe one of those timelines sees a great democratization of tech, where even the poorest people have access to free problem solving tech that allows them to build assistants that increase their leverage enough to escape poverty without money. In effect making (late-stage) capitalism irrelevant.
If the rate of increasing equity is faster than the rate of increasing excess, then we have a small window of time to catch up before we enter a Long Now of suffering, where wealth inequality approaches an asymptote making life performative, pageantry for the masses who must please an emperor with no clothes. In a recent interview with Mel Robbins in episode 715 of Real Time, Bill Maher said "my book would be called: It's Not Gonna Be That" about the future not being what we think it is. I can't find a video, but he describes it starting around the 19:00 mark: https://podcasts.musixmatch.com/podcast/real-time-with-bill-... Our best hope for the future is that we're wrong about it. |
| |
| ▲ | keybored 14 minutes ago | parent [-] | | It’s over. I’ve been writing FUD manually since the 60’s. But people fresh out of FB troll boot camp are already rolling it out 99% faster than me. |
|