| ▲ | danaris 9 hours ago |
| > It is so clear that AI tools will be (and are already) a big part of future jobs for CS majors now, both in industry and academia. No, it's not. Nothing around AI past the next few months to a year is clear right now. It's very, very possible that within the next year or two, the bottom falls out of the market for mainstream/commercial LLM services, and then all the Copilot and Claude Code and similar services are going to dry up and blow away. Naturally, that doesn't mean that no one will be using LLMs for coding, given the number of people who have reported their productivity increasing—but it means there won't be a guarantee that, for instance, VS Code will have a first-party integrated solution for it, and that's a must-have for many larger coding shops. None of that is certain, of course! That's the whole point: we don't know what's coming. |
|
| ▲ | verdverm 9 hours ago | parent | next [-] |
| It is clear that AI has already transformed how we do our jobs in CS. The genie is out of the bottle, never going back. It's a fantasy to think it will "dry up" and go away. Some other guarantees we can make over the next few years, based on history: AI will get better, faster, and more efficient, like everything else in CS. |
| |
| ▲ | tartoran 5 hours ago | parent | next [-] | | Yes, the genie is out of the bottle, but it could go right back in when it starts costing more, a whole lot more. I'm sure there's a monthly subscription price at which you'd either scale back your use or consider other alternatives. LLMs as a technology are indeed out of the bottle and here to stay, but the current business around them is not quite clear. | | |
| ▲ | verdverm 4 hours ago | parent [-] | | I've pondered that point, using my monthly car payment and usage as a barometer. I currently spend 5% on AI compared to my car, and I get far more value out of AI. |
| |
| ▲ | oblio 9 hours ago | parent | prev | next [-] | | Yeah, like Windows in 2026 is better than Windows in 2010, Gmail in 2026 is better than Gmail in 2010, the average website in 2026 is better than in 2015, Uber is better in 2026 than in 2015, etc. Plenty of tech becomes exploitative (or more exploitative). I don't know if you noticed but 80% of LLM improvements are actually procedural now: it's the software around them improving, not the core LLMs. Plus LLMs have huge potential for being exploitative. 10x what Google Search could do for ads. | | |
| ▲ | verdverm 7 hours ago | parent [-] | | You're crossing products with technology, and also cherry-picking personal perspectives. I personally think GSuite is much better today than it was a decade ago, but that is separate. The underlying hardware has improved, along with the network, the security, the provenance. Specific to LLMs: 1. We have seen rapid improvements, and there are a ton more you can see in the research that will be impacting the next round of model train/release cycles. Both algorithms and hardware are improving. 2. Open-weight models are within spitting distance of the frontier. Within 2 years, smaller and open models will be capable of what the frontier is doing today. This has huge democratization potential. I'd rather see AI as an opportunity to break the oligarchy and the corporate hold over the people. I'm working hard to make it a reality (also working on atproto). | | |
| ▲ | oblio 6 hours ago | parent [-] | | Every time I hear "democratization" from a techbro I keep thinking that the end state is technofeudalism. We can't fix social problems with technological solutions. Every scalable solution takes us closer to Extremistan, which is inherently anti-democratic. Read The Black Swan by Taleb. | | |
| ▲ | verdverm 3 hours ago | parent [-] | | Jumping from someone using a word to assigning a pejorative label to them is by definition a form of bigotry. Democratization, the way I'm using it without all the bias, is simply most people having access to build with a tool or a technology. Would you also argue everyone having access to the printing press is a bad thing? The internet? Right to repair? Right to compute? Why should we consider AI access differently? |
|
|
| |
| ▲ | danaris 8 hours ago | parent | prev [-] | | OK? Prove it. Show me actual studies that clearly demonstrate not only that using an LLM code assistant helps produce code faster in the short term, but also that it doesn't squander that gain by producing code that's much harder to maintain in the long term. | | |
| ▲ | jjav 6 hours ago | parent | next [-] | | No such studies can exist, since AI coding has not been around for the long term yet. Clearly AI is much faster and good enough to create new one-off bits of code. Like I tend to create small helper scripts for all kinds of things, both at work and at home, all the time. Typically these would take me 2-4 hours and, aside from a few tweaks early on, they receive no maintenance as they just do one simple thing. Now with AI coding these take me just a few minutes, done. But I believe this is the optimal productivity sweet spot for AI coding, as no maintenance is needed. I've also been running a couple of experiments vibe-coding larger apps over the span of months, and while the initial ramp-up is very fast, productivity starts to drop off after a few weeks as the code becomes more complex and ever more full of special-case exceptions that a human wouldn't have written that way. So I spend more and more time correcting behavior and writing test cases to root out insanity in the code. How will this go for code bases which need to continuously evolve and mature over many years and decades? I guess we'll see. |
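For concreteness, the "small helper script" case above is the kind of thing meant. The example below is purely illustrative (the task and names are assumptions, not something described in the thread): a one-off script that prefixes files with their last-modified date, written once and then left alone.

    # Illustrative one-off helper script of the kind described above: prefix every
    # file in a directory with its last-modified date. No long-term maintenance expected.
    import datetime
    import pathlib
    import sys

    def prefix_files_with_date(directory: str) -> None:
        for path in pathlib.Path(directory).iterdir():
            if path.is_file():
                stamp = datetime.date.fromtimestamp(path.stat().st_mtime).isoformat()
                path.rename(path.with_name(f"{stamp}_{path.name}"))

    if __name__ == "__main__":
        prefix_files_with_date(sys.argv[1] if len(sys.argv) > 1 else ".")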
| ▲ | shiroiuma 2 hours ago | parent | prev | next [-] | | >it doesn't waste all that extra benefit by being that much harder to maintain in the long term. If AI just generates piles of unmaintainable code, this isn't going to be any worse than most of the professionally-written (by humans) code I've had to work with over my career. In my experience, readable and maintainable code is unfortunately rather uncommon. | |
| ▲ | verdverm 7 hours ago | parent | prev [-] | | I'll be frank: I tried this with a few other people recently, and they 1. Opened this line of debate similarly to you (i.e. the way you ask, the tone you use), 2. Were not interested in actual debate, and 3. Moved the goalposts repeatedly. Based on past experience entertaining inquisitors, I will not this time. | |
| ▲ | libraryofbabel 6 hours ago | parent [-] | | Yeah. At this point, at the start of 2026, people that are taking these sorts of positions with this sort of tone tend to have their identity wrapped up in wanting AI to fail or go away. That’s not conducive to a reasoned discussion. There are a whole range of interesting questions here that it’s possible to have a nuanced discussion about, without falling into AI hype and while maintaining a skeptical attitude. But you have to do it from a place of curiosity rather than starting with hatred of the technology and wishing for it to be somehow proved useless and fade away. Because that’s not going to happen now, even if the current investment bubble pops. | | |
| ▲ | verdverm 6 hours ago | parent [-] | | Wholehearted agreement. If anything, I see this moment as one where we can unshackle ourselves from the oligarchs and corporate overlords. The two technologies are AI and ATProto; I work on both now to give sovereignty back to we the people. | | |
| ▲ | somebehemoth 5 hours ago | parent [-] | | > I see this moment as one where we can unshackle ourselves from the oligarchs and corporate overlords. For me, modern AI appears to be controlled entirely by oligarchs and corporate overlords already. Some of them are the same who already shackled us. This time will not be different, in my opinion. I like your optimism. |
|
|
|
|
|
|
| ▲ | cirrusfan 8 hours ago | parent | prev | next [-] |
| I get a slow-but-usable ~10 tok/s on a kimi 2.5 2-bit-ish quant on a high-end-gaming / low-end-workstation desktop (RTX 4090, 256 GB RAM, Ryzen 7950). Right now the price of RAM is silly, but when I built it, it was similar in price to a high-end MacBook - which is to say it isn't cheap, but it's available to just about everybody in Western countries. The quality is of course worse than what the bleeding-edge labs offer, especially since heavy quants are particularly bad for coding, but it is good enough for many tasks: an intelligent duck that helps with planning, generating bog-standard boilerplate, Google-less interactive search/Stack Overflow ("I ran flamegraph and X is an issue, what are my options here?" etc). My point is, I can get a somewhat-useful AI model running at slow-but-usable speed on a random desktop I've had lying around since 2024. Barring nuclear war, there's just no way that AI won't be at least _somewhat_ beneficial to the average dev. All the AI companies could vanish tomorrow and you'd still have a bunch of inference-as-a-service shops appearing in places where electricity is borderline free, like Straya when the sun is out. |
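As a sketch of how such a local setup is typically driven: the server, port, and model name below are assumptions for illustration (e.g. a llama.cpp-style server exposing an OpenAI-compatible endpoint), not details given in the comment.

    # Query a locally hosted model through an OpenAI-compatible HTTP endpoint,
    # e.g. llama.cpp's llama-server. URL, port, and model name are placeholders.
    import requests

    def ask_local_model(question: str) -> str:
        resp = requests.post(
            "http://localhost:8080/v1/chat/completions",
            json={
                "model": "local-quantized-model",  # placeholder identifier
                "messages": [{"role": "user", "content": question}],
                "max_tokens": 512,
                "temperature": 0.2,
            },
            timeout=600,  # at ~10 tok/s a long answer can take minutes
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

    print(ask_local_model("I ran flamegraph and X is an issue, what are my options here?"))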
| |
| ▲ | danaris 7 hours ago | parent [-] | | Then you're missing my point. Yes, you, a hobbyist, can make that work, and keep being useful for the foreseeable future. I don't doubt that. But either a majority or large plurality of programmers work in some kind of large institution where they don't have full control over the tools they use. Some percentage of those will never even be allowed to use LLM coding tools, because they're not working in tech and their bosses are in the portion of the non-tech public that thinks "AI" is scary, rather than the portion that thinks it's magic. (Or, their bosses have actually done some research, and don't want to risk handing their internal code over to LLMs to train on—whether they're actually doing that now or not, the chances that they won't in future approach nil.) And even those who might not be outright forbidden to use such tools for specific reasons like the above will never be able to get authorization to use them on their company workstations, because they're not approved tools, because they require a subscription the company won't pay for, because etc etc. So saying that clearly coding with LLM assistance is the future and it would be irresponsible not to teach current CS students how to code like that is patently false. It is a possible future, but the volatility in the AI space right now is much, much too high to be able to predict just what the future will bring. | | |
| ▲ | blackcatsec 6 hours ago | parent [-] | | I never understand anyone's push to throw around AI slop coding everywhere. Do they think in the back of their heads that this means coding jobs are going to come back on-shore? Because AI is going to make up for the savings? No, what it means is tech bro CEOs are going to replace you even more and replace at least a portion of the off-shore folks that they're paying. The promise of AI is a capitalist's dream, which is why it's being pushed so much. Do more with less investment. But the reality of AI coding is significantly more nuanced, and particularly more nuanced in spaces outside of the SRE/devops space. I highly doubt you could realistically use AI to code the majority of significant software products (like, say, an entire operating system). You might be able to use AI to add additional functionality you otherwise couldn't have, but that's not really what the capitalists desire. Not to mention, the models have to be continually trained, otherwise the knowledge is going to be dead. Is AI as useful for Rust as it is for Python? Doubtful. What about the programming languages created 10-15 years from now? What about when everyone starts hoarding their information away from the prying eyes of AI scraper bots to keep competitive knowledge in-house? Both from a user perspective and a business perspective? Lots of variability here that literally nobody has any idea how any of it's going to go. |
|
|
|
| ▲ | libraryofbabel 9 hours ago | parent | prev [-] |
| I agree with you that everything is changing and that we don’t know what’s coming, but I think you really have to stretch things to imagine that it’s a likely scenario that AI-assisted coding will “dry up and blow away.” You’ll need to elaborate on that, because I don’t think it’s likely even if the AI investment bubble pops. Remember that inference is not really that expensive. Or do you think that things shift on the demand side somehow? |
| |
| ▲ | saltcured 7 hours ago | parent | next [-] | | I think the "genie" that is out of the bottle is that there is no broad, deeply technical class who can resist the allure of the AI agent. A technical focus does not seem to provide immunity. In spite of obvious contradictory signals about quality, we embrace the magical thinking that these tools operate in a realm of ontology and logic. We disregard the null hypothesis, in which they are more mad-libbing plagiarism machines which we've deployed against our own minds. Put more tritely: We have met the Genie, and the Genie is Us. The LLM is just another wish fulfilled with calamitous second-order effects. Though enjoyable as fiction, I can't really picture a Butlerian Jihad where humanity attempts some religious purge of AI methods. It's easier for me to imagine the opposite, where the majority purges the heretics who would question their saints of reduced effort. So, I don't see LLMs going away unless you believe we're in some kind of Peak Compute transition, which is pretty catastrophic thinking. I.e. some kind of techno/industrial/societal collapse where the state of the art stops moving forward and instead retreats. I suppose someone could believe in that outcome, if they lean hard into the idea that the continued use of LLMs will incapacitate us? Even if LLM/AI concepts plateau, I tend to think we'll somehow continue with hardware scaling. That means they will become commoditized and able to run locally on consumer-level equipment. In the long run, it won't require a financial bubble or dedicated powerplants to run, nor be limited to priests in high towers. It will be pervasive like wireless ear buds or microwave ovens, rather than an embodiment of capital investment. The pragmatic way I see LLMs _not_ sticking around is where AI researchers figure out some better approach. Then, LLMs would simply be left behind as historical curiosities. | | |
| ▲ | danaris 7 hours ago | parent [-] | | The first half of your post, I broadly agree with. The last part...I'm not sure. The idea that we will be able to compute-scale our way out of practically anything is so much taken for granted these days that many people seem to have lost sight of the fact that we have genuinely hit diminishing returns—first in the general-purpose computing scaling (end of Moore's Law, etc), and more recently in the ability to scale LLMs. There is no longer a guarantee that we can improve the performance of training, at the very least, for the larger models by more than a few percent, no matter how much new tech we throw at it. At least until we hit another major breakthrough (either hardware or software), and by their very nature those cannot be counted on. Even if we can squeeze out a few more percent—or a few more tens of percent—of optimizations on training and inference, to the best of my understanding, that's going to be orders of magnitude too little yet to allow for running the full-size major models on consumer-level equipment. | | |
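A rough back-of-envelope on that last claim. The parameter counts and quantization widths below are illustrative assumptions, not figures from this thread; the point is only how the arithmetic scales.

    # Memory needed just to hold model weights at various quantization widths.
    # Parameter counts are illustrative; compare against ~24 GB of VRAM on a
    # high-end consumer GPU or ~256 GB of system RAM on a big desktop.
    def weight_memory_gb(params_billions: float, bits_per_param: int) -> float:
        return params_billions * 1e9 * bits_per_param / 8 / 1e9

    for params in (70, 400, 1000):        # mid-size open model up to frontier-ish scale
        for bits in (16, 8, 4, 2):
            print(f"{params:>5}B params @ {bits:>2}-bit ~ {weight_memory_gb(params, bits):>6.0f} GB")

Even at aggressive 2-bit quantization, frontier-scale weights land in the hundreds of gigabytes, which is why the local setups upthread lean on large system RAM and heavy quants rather than fitting in a single consumer GPU.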
| ▲ | cheevly 5 hours ago | parent [-] | | This is so objectively false. Sometimes I can't believe I'm even on HN anymore with the level of confidently incorrect assertions made. | |
| ▲ | danaris 5 hours ago | parent [-] | | You, uh, wanna actually back that accusation up with some data there, chief? | | |
| ▲ | cheevly 3 hours ago | parent [-] | | Compare models from one year ago (GPT-4o?) to models from this year (Opus 4.5?). There are literally hundreds of benchmarks and metrics you can find. What reality do you live in? |
|
|
|
| |
| ▲ | danaris 9 hours ago | parent | prev | next [-] | | I think that even if inference is "not really that expensive", it's not free. I think that Microsoft will not be willing to operate Copilot for free in perpetuity. I think that there has not yet been any meaningful large-scale study showing that it improves performance overall, and there have been some studies showing that it does the opposite, despite individuals' feeling that it helps them. I think that a lot of the hype around AI is that it is going to get better, and if it becomes prohibitively expensive for it to do that (ie, training), and there's no proof that it's helping, and keeping the subscriptions going is a constant money drain, and there's no more drumbeat of "everything must become AI immediately and forever", more and more institutions are going to start dropping it. I think that if the only programmers who are using LLMs to aid their coding are hobbyists, independent contractors, or in small shops where they get to fully dictate their own setups, that's a small enough segment of the programming market that we can say it won't help students to learn that way, because they won't be allowed to code that way in a "real job". | |
| ▲ | LtWorf 9 hours ago | parent | prev | next [-] | | If they start charging what it costs them for example… | | |
| ▲ | libraryofbabel 8 hours ago | parent [-] | | There is so much confusion on this topic. Please don't spread more of it; the answers are just a quick google away. To spell it out: 1) AI companies make money on the tokens they sell through their APIs. At my company we run Claude Code by buying Claude Sonnet and Opus tokens from AWS Bedrock. AWS and Anthropic make money on those tokens. The unit economics are very good here; estimates are that Anthropic and OpenAI have a gross margin of 40% on selling tokens. 2) Claude Code subscriptions are probably subsidized somewhat on a per token basis, for strategic reasons (Anthropic wants to capture the market). Although even this is complicated, as the usage distribution is such that Anthropic is making money on some subscribers and then subsidizing the ultra-heavy-usage vibe coders who max out their subscriptions. If they lowered the cap, most people with subscriptions would still not max out and they could start making money, but they'd probably upset a lot of the loudest ultra-heavy-usage influencer-types. 3) The biggest cost AI companies have is training new models. That is the reason AI companies are not net profitable. But that's a completely separate set of questions from what inference costs, which is what matters here. | | |
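To make the margin claim in point 1 concrete, the arithmetic is just this; the prices and serving costs below are hypothetical placeholders, not real vendor figures.

    # Toy unit economics for selling inference tokens. All numbers are hypothetical,
    # purely to illustrate what a "gross margin on tokens" claim means.
    price_per_million_tokens = 15.00  # hypothetical API price, USD
    cost_per_million_tokens = 9.00    # hypothetical cost to serve, USD

    gross_margin = (price_per_million_tokens - cost_per_million_tokens) / price_per_million_tokens
    print(f"gross margin on tokens: {gross_margin:.0%}")  # 40% with these placeholder numbers

    # Training new models is a separate fixed cost that this per-token margin does
    # not cover, which is the distinction point 3 above is drawing.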
| ▲ | somewhereoutth 5 hours ago | parent [-] | | Without training new models, existing models will become more and more out of date, until they are no longer useful, regardless of how cheap inference is. Training new models is part of the cost basis, and can't be hand-waved away. | |
| ▲ | SgtBastard 32 minutes ago | parent [-] | | Only if you're relying upon the model to recall facts from its training set. Intuitively, at sufficient complexity, a model's ability to reason is what is critical, and its answers can be kept up to date with RAG. Unless you mean out of date == no longer SOTA reasoning models? |
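A minimal sketch of what "kept up to date with RAG" means in practice: retrieve current documents at query time and put them in the prompt, so a frozen model doesn't need to have memorized them. The keyword-overlap retrieval and the ask_model callable below are illustrative assumptions, not any specific library's API.

    # Minimal retrieval-augmented generation sketch: fetch the most relevant
    # documents for a query and prepend them to the prompt. Retrieval here is a
    # naive keyword overlap; ask_model is a stand-in for whatever inference call
    # you use (local or hosted).
    def retrieve(query: str, documents: list[str], k: int = 3) -> list[str]:
        q_words = set(query.lower().split())
        ranked = sorted(documents, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
        return ranked[:k]

    def answer_with_rag(query: str, documents: list[str], ask_model) -> str:
        context = "\n\n".join(retrieve(query, documents))
        prompt = (
            "Answer the question using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}"
        )
        return ask_model(prompt)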
|
|
| |
| ▲ | somewhereoutth 5 hours ago | parent | prev [-] | | LLMs will stop being trained, as that enormous upfront investment will have been found to not produce the required return. People will continue to use the existing models for inference, not least as the (now bankrupt) LLM labs attempt to squeeze the last juice out of their remaining assets (trained LLMs). However these models will become more and more outdated, less and less useful, until they are not worth the electricity to do the inference anymore. Thus it will end. |
|