| ▲ | verdverm 10 hours ago |
| It is clear that AI has already transformed how we do our jobs in CS. The genie is out of the bottle, never going back. It's a fantasy to think it will "dry up" and go away. Some other guarantees we can make over the next few years, based on history: AI will get better, faster, and more efficient, like everything else in CS. |
|
| ▲ | tartoran 6 hours ago | parent | next [-] |
| Yes, the genie is out of the bottle, but it could get right back in when it starts costing more, a whole lot more. I'm sure there's a monthly subscription price at which you'd either scale back your use or consider other alternatives. LLMs as a technology are indeed out of the bottle and here to stay, but the business around them is not yet clear. |
| |
| ▲ | verdverm 5 hours ago | parent | next [-] | | I've pondered that point, using my monthly car payment and usage as a barometer. I currently spend about 5% on AI compared to my car, and I get far more value out of AI. | |
| ▲ | CamperBob2 an hour ago | parent | prev [-] | | Yes, the genie is out of the bottle, but it could get right back in when it starts costing more, a whole lot more. Local models are already good enough to handle some meaningful programming work, and they run very well on an expensive-but-not-unattainable PC. You could certainly cheat your way through an undergrad CS curriculum with Qwen 80b, including most liberal-arts requirements. The genie is not going back in the bottle no matter what happens, short of a nuclear war. There is no point even treating the possibility hypothetically. |
|
|
| ▲ | oblio 10 hours ago | parent | prev | next [-] |
| Yeah, like Windows in 2026 is better than Windows in 2010, Gmail in 2026 is better than Gmail in 2010, the average website in 2026 is better than in 2015, Uber is better in 2026 than in 2015, etc. Plenty of tech becomes exploitative (or more exploitative). I don't know if you noticed, but 80% of LLM improvements are actually procedural now: it's the software around them improving, not the core LLMs. Plus LLMs have huge potential for being exploitative, 10x what Google Search could do for ads. |
| |
| ▲ | verdverm 8 hours ago | parent [-] | | You're conflating products with technology, plus some cherry-picking of personal perspectives. I personally think GSuite is much better today than it was a decade ago, but that is separate. The underlying hardware has improved, as have the network, the security, the provenance. Specific to LLMs: 1. We have seen rapid improvements, and there are a ton more you can see in the research that will impact the next round of model train/release cycles. Both algorithms and hardware are improving. 2. Open-weight models are within spitting distance of the frontier. Within 2 years, smaller and open models will be capable of what the frontier is doing today. This has huge democratization potential. I'd rather see AI as an opportunity to break the oligarchy and the corporate hold over the people. I'm working hard to make it a reality (also working on atproto). | | |
| ▲ | oblio 8 hours ago | parent [-] | | Every time I hear "democratization" from a techbro, I keep thinking that the end state is technofeudalism. We can't fix social problems with technological solutions. Every scalable solution takes us closer to Extremistan, which is inherently anti-democratic. Read The Black Swan by Taleb. | | |
| ▲ | verdverm 5 hours ago | parent [-] | | Jumping from someone using a word to assigning a pejorative label to them is by definition a form of bigotry. Democratization, the way I'm using it without all the bias, simply means most people having access to build with a tool or a technology. Would you also argue everyone having access to the printing press is a bad thing? The internet? Right to repair? Right to compute? Why should we treat AI access differently? |
|
|
|
|
| ▲ | danaris 10 hours ago | parent | prev [-] |
| OK? Prove it. Show me actual studies that clearly demonstrate that using an LLM code assistant not only helps produce code faster in the short term, but also doesn't squander that benefit by making the code that much harder to maintain in the long term. |
| |
| ▲ | jjav 8 hours ago | parent | next [-] | | No such studies can exist yet, since AI coding hasn't been around for the long term. Clearly AI is much faster and good enough to create new one-off bits of code. I tend to create small helper scripts for all kinds of things, both at work and at home. Typically these would take me 2-4 hours and, aside from a few tweaks early on, they receive no maintenance, as they just do one simple thing. Now with AI coding they take me just a few minutes, done. But I believe this is the optimal productivity sweet spot for AI coding, since no maintenance is needed. I've also been running a couple of experiments vibe-coding larger apps over the span of months, and while the initial ramp-up is very fast, productivity starts to drop off after a few weeks as the code becomes more complex and ever more full of special-case exceptions that a human wouldn't have written that way. So I spend more and more time correcting behavior and writing test cases to root out insanity in the code. How will this go for code bases that need to continuously evolve and mature over many years and decades? I guess we'll see. | |
| ▲ | shiroiuma 4 hours ago | parent | prev | next [-] | | >it doesn't waste all that extra benefit by being that much harder to maintain in the long term. If AI just generates piles of unmaintainable code, this isn't going to be any worse than most of the professionally-written (by humans) code I've had to work with over my career. In my experience, readable and maintainable code is unfortunately rather uncommon. | |
| ▲ | verdverm 9 hours ago | parent | prev [-] | | I'll be frank: I tried this with a few other people recently, and they 1. Opened this line of debate similarly to you (i.e. the way you ask, the tone you use) 2. Were not interested in actual debate 3. Moved the goalposts repeatedly. Based on past experience entertaining inquisitors, I will not be this time. | | |
| ▲ | libraryofbabel 8 hours ago | parent [-] | | Yeah. At this point, at the start of 2026, people that are taking these sorts of positions with this sort of tone tend to have their identity wrapped up in wanting AI to fail or go away. That’s not conducive to a reasoned discussion. There are a whole range of interesting questions here that it’s possible to have a nuanced discussion about, without falling into AI hype and while maintaining a skeptical attitude. But you have to do it from a place of curiosity rather than starting with hatred of the technology and wishing for it to be somehow proved useless and fade away. Because that’s not going to happen now, even if the current investment bubble pops. | | |
| ▲ | verdverm 8 hours ago | parent [-] | | Wholehearted agreement. If anything, I see this moment as one where we can unshackle ourselves from the oligarchs and corporate overlords. The two technologies are AI and ATProto; I work on both now to give sovereignty back to we the people. | | |
| ▲ | somebehemoth 6 hours ago | parent [-] | | > I see this moment as one where we can unshackle ourselves from the oligarchs and corporate overlords. For me, modern AI appears to be controlled entirely by oligarchs and corporate overlords already. Some of them are the same who already shackled us. This time will not be different, in my opinion. I like your optimism. | | |
|
|
|
|