| ▲ | stiiv 3 days ago |
| > If this tech is as amazing as you say it is, I'll be able to pick it up and become productive on a timescale of my choosing, not yours. Broadly speaking, I think this is a wise assessment. There are opportunities for productivity gains right now, but I don't think it's a knockout for anyone using the tech, and I think that onboarding might be challenging for some people in the tech's current state. It is safe to assume that the tech will continue to improve in both ways: productivity gains will increase, onboarding will get easier. I think it will also become easier to choose a particular suite of products to use, too. Waiting is not a bad idea. |
|
| ▲ | augusto-moura 3 days ago | parent | next [-] |
| What annoys me a bit is companies forcing AI tools, collecting usage metrics, and actively hunting the engineers who don't use the tool "enough". I've never seen anything like it for a technically optional tool. Even in the past, aside from technical limitations, you were never required to use "enough" of a tool. It just sounds like a giant scheme to burn through tokens and give money to the AI corps, and tech directors are falling for it immediately. |
| |
| ▲ | HolyLampshade 3 days ago | parent | next [-] | | > I've never seen anything like it for a technically optional tool Cloud had a very similar vibe when it was being advertised hard to CIOs/CTOs. Everything had to be jammed into the cloud, even if it made absolutely no sense for it to be run there. This seems to come pretty frequently from visionless tech execs. They need to justify their existence to their boss, and thus try to show how innovative and/or cost-cutting they can be. | | |
| ▲ | aquariusDue 3 days ago | parent | next [-] | | That and microservices in lieu of a monolith. Or how about being the odd one out a few years ago when suggesting an MPA instead of an SPA when it made sense. I like to think we're at the point where everybody is rebuilding their portfolio website with Angular 1, but this time it's Claude Code and a SaaS instead. | |
| ▲ | rurp 3 days ago | parent | prev | next [-] | | I agree with you but man the absurdity of aggressively pushing cloud or AI adoption as a cost cutting move is off the charts. | | |
| ▲ | HolyLampshade 3 days ago | parent [-] | | Easier to justify monthly costs than big capital asks (even if your infra is depreciated at a normal rate) is where I think many saw (incorrectly) cost savings. It’s also a bit of these execs mortgaging the future, banking on either being out of their role when the real cost comes due or that people will have incredibly short memories (not a wild assumption). |
| |
| ▲ | genthree 3 days ago | parent | prev | next [-] | | I think this is the result of a c-suite that has never actually done the work of the businesses they're running. The MBAification of management. Of course they constantly do brain-dead shit, they literally don't have a clue how anything actually works in "their" own business. | | |
| ▲ | stackbutterflow 3 days ago | parent | next [-] | | They've been vibe-driving businesses long before we've started vibe-coding software. | | |
| ▲ | gopher_space 3 days ago | parent [-] | | There weren't really any failure states for the ZIRP "lifestyle CEO". If you remember old black and white movies about pigeons from psych 101 it's been that level of conditioning for how many years now? If your CEO doesn't look like a taxi dispatcher he's just moving his wings around waiting for a food pellet. |
| |
| ▲ | HolyLampshade 3 days ago | parent | prev [-] | | I was trying to think of a way to word this exact argument. I think it’s especially easy when your business technology is not your primary means of revenue generation. Having these execs understand how things work is significantly less critical in these scenarios, so it becomes much easier to hire for alternative characteristics (golf game, pedigree, gender, whatever). |
| |
| ▲ | bee_rider 3 days ago | parent | prev | next [-] | | “Cloud” seems like a better comparison than stuff like cryptocurrency. AI seems totally over-hyped but with some obvious sensible use-cases. | |
| ▲ | _doctor_love 3 days ago | parent | prev [-] | | > Cloud had a very similar vibe when it was really running advertising to CIO/CTOs hard. Everything had to be jammed into the cloud, even if it made absolutely no sense for it to be run there. 100% accurate - some of us are old enough to have lived through a few of the mini-revolutions in between the mega-revolutions of Internet/Web in the 1990s and now AI/LLM in the 2020s. We are in the "stupid phase" of adoption still. C-level people have to follow the herd and they are being evaluated on keeping up with everyone else. Idiotic mandates are a way to cause things to happen short-term even though everyone knows long-term it will have to be re-done. Consultants gonna make a looooooooot of money this coming decade. | | |
| ▲ | johnnyanmac 2 days ago | parent [-] | | I think the feel bad moment here is how impersonal everything got in 3 decades. During the dotcom bust you still had to meet and talk to people to get interviews started. Now you can make a perfectly tailored resume, apply to 50 jobs in a day, and it's not unexpected to not get any response from those in 2 weeks. You don't know if it's your resume, the company, or the economy. And no one wants to admit the latter two are problems. Not to mention the utter disrespect these days. There's no decorum in many of these "professional" settings, when normally you want your interview process to show off your best face. | | |
| ▲ | oro44 2 days ago | parent [-] | | "And no one wants to admit the latter two are problems." I'm working on building something to address this. That's all I'll say lol. |
|
|
| |
| ▲ | hibikir 3 days ago | parent | prev | next [-] | | It's using a bad tool to try to aim at something reasonable-ish: developers not taking advantage of the tools in places where it's very easy to get use out of them. I have coworkers like that: one spent 3 days researching a bug that Claude found in 10 minutes by pointing it at the logs in the time window and the codebase. And he didn't even find the bug, when Claude nailed it in one. But is this something that is best done top to bottom, with a big report, counting tokens? Hell no. This is something that is better found, and tackled, at the team level. But execs in many places like easy, visible metrics, whether they are actually helping or not. And that's how you find people playing JIRA games and such. My worst example was a VP who decided that looking at the burndown charts from each team under them, and using their shape as a metric, was a good idea. These are all natural signs of a total lack of trust, and of thinking you can solve all of this from the top. | | |
| ▲ | sarchertech 3 days ago | parent | next [-] | | The thing is we’ve always had people who spend more time on their tooling or learn different tools and perform better. I’ve seen people use notepad and I’ve seen people who are so good at vim that they look like they’re editing code directly with their mind. Your particular example is extreme and my guess is the coworker is just not great at debugging. I use Claude all the time for finding bugs, but it fails fairly frequently. I think there’s probably an advantage to having some people who don’t use it that often, so you have someone to turn to when it fails. I’m definitely not exercising my debugging skills as much as I used to and I’m fairly confident they’ve atrophied. | | |
| ▲ | toraway 3 days ago | parent [-] | | That, and an objective comparison measuring time saving should include all time that went into learning, configuring, maintaining the tool. And ideally a sample large enough to capture any wasted time from dead ends in other tasks where the tool may actually fail to solve the problem. I’ve definitely lost a couple hours here and there from when it felt like I was right on the verge of CC fixing something but never actually got there and finally had to just do it myself anyway. |
| |
| ▲ | johnnyanmac 2 days ago | parent | prev | next [-] | | > But execs in many places like easy, visible metrics, whether they are actually helping or not. Most execs didn't get where they are by being truly helpful and adding value to the company. They played the game long enough to know that politics trumps accomplishments. The rest from there is the ability to weave a good story (be it slightly or completely exaggerated). It's not even about trust. It's about incentives in a structure that is dog-eat-dog. Rugged individualism in a corporate structure is a self-defeating prophecy. But it's inevitable when executives extract from the company instead of raising the tide for all ships. And shareholders reward it. | |
| ▲ | patrick451 2 days ago | parent | prev [-] | | There are countless other stories about AIs spouting complete bullshit. That easily wastes as much time as they save. |
| |
| ▲ | jacobsenscott 3 days ago | parent | prev | next [-] | | > It just sounds like a giant scheme to burn through tokens and give money to the AI corps, and tech directors are falling for it immediately. This is exactly what's happening. The top 5 or 6 companies in the S&P 500 are running a very sophisticated marketing/pressure campaign to convince every c-suite downstream that they need to force AI on their entire organization or die. It's working great. CEOs don't get fired for following the herd. | | |
| ▲ | toomuchtodo 3 days ago | parent | next [-] | | ~40-50% of the S&P 500 rely on this continuing. S&P 500 Concentration Approaching 50% - https://news.ycombinator.com/item?id=47384002 - March 2026 > No of course there isn't enough capital for all of this. Having said that, there is enough capital to do this for at least a little while longer. -- Gil Luria (Managing Director and Analyst at D.A. Davidson) OpenAI Needs a Trillion Dollars in the Next Four Years - https://news.ycombinator.com/item?id=45394071 - September 2025 (8 comments) | | |
| ▲ | karmakurtisaani 3 days ago | parent [-] | | Elon Musk is planning to put his AI company into the SpaceX IPO, and accelerate getting it into the major indices, effectively making pension funds, banks and individual investors his bag holders. Patrick Boyle has a video on this in case you care for the details. |
| |
| ▲ | fuzzfactor 3 days ago | parent | prev [-] | | I guess it's possible for the top companies to have spent so much already that now the best move is to convince the next tier to do the same, otherwise those competitors may pull ahead without such a financial handicap. |
| |
| ▲ | bondarchuk 3 days ago | parent | prev | next [-] | | >I've never seen anything like it for a technically optional tool If you broaden the comparison (only a little bit) it looks suspiciously like employees being forced to train their own replacement (be that other employees, or factory automation), a regular occurrence. | | |
| ▲ | somenameforme 3 days ago | parent [-] | | Yeah this is the thing I think many don't want to see. Imagine a bunch of farm laborers being trained to use a tractor/reaper early on its development. Certainly they'd think it's cool and convenient, because it is. But if it works out, then most of those farm laborers are now obsolete, and a handful of them can now replace the rest. And indeed this is why agricultural employment went from the majority of jobs to a footnote. |
| |
| ▲ | PedroBatista 3 days ago | parent | prev | next [-] | | Tech directors, CEOs, managers, etc. tend to be people with a certain personality and learned behaviors/thinking, just like "technical people". Yes, they tend to be incredibly gullible to certain things, over-simplistic and over-confident, but also very "agile" when it comes to sweeping their failures under the rug and moving on to keep their own neck in one piece. At this point in time even the median CEO knows AI has been way overhyped and that they over-invested to a point of absolute financial insanity. Their first line of defense against the pressure to deliver is to mandate that their minions use it as much as possible. We spent a fortune on this over-rated Michelin star reservation, and now you kids are going to absolutely enjoy it, like it or not goddammit! | |
| ▲ | jimmyjazz14 3 days ago | parent | prev | next [-] | | Yeah, I found this strange as well, if the tech is so amazing why do developers need to be forced to use it? | | |
| ▲ | dash2 3 days ago | parent | next [-] | | Maybe there's a positive externality: your individual learning percolates to others and benefits the firm as a whole. | | |
| ▲ | whoknowsidont 3 days ago | parent [-] | | What is there to learn? If anything, developers are still the ones training and enhancing the models by giving them more feedback cycles on what works and what doesn't. |
| |
| ▲ | debatem1 3 days ago | parent | prev [-] | | I'm encouraging my folks to try it pretty hard because A) I've personally seen the productivity gains and B) using it is at first deeply weird/uncomfortable. Sometimes you've got to convince people to push through that kind of thing. | | |
| ▲ | toomuchtodo 3 days ago | parent [-] | | How are you objectively measuring success? 93% of Developers Use AI Coding Tools. Productivity Hasn't Moved. - https://philippdubach.com/posts/93-of-developers-use-ai-codi... - March 4th, 2026 | | |
| ▲ | nightski 3 days ago | parent [-] | | They measured 16 developers and called it a "study"? That is amusing. Not to mention it was conducted almost a year ago, the tools have already changed dramatically. | | |
| ▲ | sarchertech 3 days ago | parent | next [-] | | Depending on the effect size a sample size of 16 can be plenty. | |
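The point that n = 16 can be enough is easy to check with a standard power calculation. A minimal sketch, pure stdlib, using the normal approximation to a paired (within-subject) t-test; the effect sizes here are hypothetical illustrations, not values from the study:

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_paired(d, n):
    # Approximate power of a paired t-test at two-sided alpha = 0.05:
    # power ~ Phi(d * sqrt(n) - z_crit), with z_crit = z_{0.975}
    z_crit = 1.959963984540054
    return norm_cdf(d * math.sqrt(n) - z_crit)

# A large within-subject effect (d = 1.0) is detected almost surely
# with 16 participants; a small effect (d = 0.3) usually is not.
print(round(power_paired(1.0, 16), 2))  # ~0.98
print(round(power_paired(0.3, 16), 2))  # ~0.22
```

So whether 16 is "plenty" really does hinge on the effect size: if AI tools changed task time dramatically, a sample that small would still show it reliably.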
| ▲ | archagon 3 days ago | parent | prev | next [-] | | > Not to mention it was conducted almost a year ago, the tools have already changed dramatically. There is no point at which this argument will not be made. Therefore, it is a useless argument. | |
| ▲ | notlenin 2 days ago | parent | prev | next [-] | | > Not to mention it was conducted almost a year ago false. The article is from 4th of March 2026, less than a month ago. | | |
| ▲ | mkl 2 days ago | parent [-] | | From the first sentence of the article proper: "A study published in July 2025". |
| |
| ▲ | tehjoker 3 days ago | parent | prev [-] | | So just run a new study this year. I do think the tools have improved, but it should show up empirically. The only people for whom the urgency of "right now" is real are the C-suite and investor class, who are fighting to make sure they survive, but it might also be a crisis of their own making. Don't confuse your identity as a worker with the identity of the capitalist class. | | |
| ▲ | jmalicki 3 days ago | parent [-] | | You should be able to just develop software on your cellphone, right? Do you have an empirical study to support that your employer should buy you a laptop and possibly a monitor or two to help your productivity? If there's no study, why should we believe it? It's like "A study found that parachutes were no more effective than empty backpacks at protecting jumpers from aircraft." https://www.npr.org/sections/health-shots/2018/12/22/6790830... | | |
| ▲ | SpicyLemonZest 2 days ago | parent | next [-] | | I think my employer should buy me a laptop and possibly a monitor or two to help my productivity because I subjectively feel they'd be helpful, and I have the market power to insist on tools that I subjectively feel are helpful. If my CEO announced that monitors are super important and everyone will be tracked on monitor space usage going forwards, I would still want to see evidence that this is going to accomplish something. | | |
| ▲ | jmalicki 2 days ago | parent [-] | | Your CEO likewise subjectively feels all of their employees using AI will be helpful, and has the market power to insist that their employees use them. When engineers demand evidence that AI is productive, but not that having laptops and monitors are productive, it screams confirmation bias. "I'm right, you're wrong" as a default prior. | | |
| ▲ | SpicyLemonZest 2 days ago | parent | next [-] | | I wouldn't call it confirmation bias, but you're right that is my prior. If an executive and a line worker disagree about whether a tool is useful, I assume unless presented with evidence to the contrary that the executive is wrong. I would emphasize that I don't think there's anything particularly wrong with the converse either. If an executive is just absolutely convinced that dual monitors are a scam and nobody needs more than their laptop screen, they can run their company that way, and I'm sure there are many successful companies with that philosophy. | |
| ▲ | archagon 2 days ago | parent | prev [-] | | Sounds like it would be pretty productive for employees to unionize and replace their CEO with an LLM. |
|
| |
| ▲ | roarcher 2 days ago | parent | prev | next [-] | | > It's like "A study found that parachutes were no more effective than empty backpacks at protecting jumpers from aircraft." Are you under the impression that we don't bother to empirically prove things that seem obvious, like the safety benefits of parachutes? You don't think parachute manufacturers test their designs and quantify their performance? | | |
| ▲ | jmalicki 2 days ago | parent [-] | | There are no randomized controlled trials that parachutes save lives. This is repeatedly used as an example in the medical community about the limits of randomized controlled trials. This isn't some impression - your impression that such evidence exists is wrong. There might be some parachute company tests about effectiveness, velocity, etc., but there are no human trials. Why? Because that would be unethical. | | |
| ▲ | roarcher 2 days ago | parent [-] | | > There are no randomized controlled trials that parachutes save lives. It's a good thing "randomized controlled trials" aren't the only kind of empirical evidence, then. We know the limits of how fast a human can safely land. Parachute manufactures have to prove that their designs meet the minimum performance specifications to achieve a safe speed. This proof is not invalidated by the fact that it doesn't include throwing some poor bastard with a placebo parachute out of an airplane to demonstrate that he dies on impact. Also, the answer to your original question is yes. There are numerous studies showing that multiple monitors improve productivity. |
|
| |
| ▲ | Copernicron 2 days ago | parent | prev [-] | | > Oh, there's one important detail here. The drop in the study was about 2 feet total, because the biplane and helicopter were parked. I don't think that's making the argument you think it is. | | |
|
|
|
|
|
| |
| ▲ | luisgvv 3 days ago | parent | prev | next [-] | | I'm just using Copilot CLI for mindless stuff and set it to the premium models to meet the quota, as long as they can't see the prompts I think I should be fine | | |
| ▲ | johntash 10 hours ago | parent | next [-] | | Not sure about copilot, but most enterprise plans do offer a way to export all prompts to a company siem. | |
| ▲ | butlike 3 days ago | parent | prev | next [-] | | You're not going to get fired. Don't worry about it :) | | | |
| ▲ | EdwardDiego 2 days ago | parent | prev [-] | | Same, but with Opus 4.6. |
| |
| ▲ | afpx 3 days ago | parent | prev | next [-] | | It's really insane what is happening. My wife manages 70 software developers. Her boss mandated that managers replace 50% of the staff with AI within a year. And, she's scrambling trying to figure out if any of the tools actually work and annoying her team because she keeps pushing AI on them. Unsurprisingly it's only slowed things down and put her in a terrible position. | | |
| ▲ | throwaway29130 3 days ago | parent | next [-] | | Brutal. But probably all too common. One of my clients has very suddenly gone all-in on agentic AI and they're in this crazy hurry. (Probably the most annoying part is they want to automate stuff that I built a POC for using GPT-4o, two years ago - at the time they saw no use for it, but now they're all-in on the hype.) This started literally two weeks ago and a couple of days ago I talked to one of the admin people who wanted an update on the progress I'd made with sanding off some of the rough edges of the very rough implementation that the managing partner had put in place (he bought a Mac Mini, put OpenClaw on it, then gave it admin access to a whole pile of stuff!) I said I needed a couple more days. "Okay," she said, "but I need this quickly, because we're firing people next week." They have literally gone from no agentic AI, to discovering OpenClaw, to firing people, in a two-week time span. When economists say that the predicted job losses as a result of AI have not yet shown up in the data, I'm genuinely befuddled. Either we don't have long to wait to start seeing them, or there's something wrong with the data, because you can't tell me what I just described above is an isolated phenomenon. I also have to say: I've always enjoyed working with this client, but this experience has been a huge turnoff on a number of different levels. | | |
| ▲ | genthree 3 days ago | parent | next [-] | | For a non-tech case of this, my wife worked at a place that fired like 80% of their writers in anticipation of huge speed-ups they expected from LLMs, a couple years ago. They had to hire a bunch of them back less than two months later. The speed-ups were approximately nil and making the editors edit AI slop all day long had them all close to quitting. They didn't even wait to see if there were any actual benefits, they just blindly fired a bunch of people based on marketing lies. I can only assume they're the same sorts who fall for Nigerian Prince scams. | |
| ▲ | justin66 2 days ago | parent | prev | next [-] | | > Probably the most annoying part is they want to automate stuff that I built a POC for using GPT-4o, two years ago - at the time they saw no use for it, but now they're all-in on the hype. I’d have guessed the most annoying part would be that you’re assisting them in a hare brained scheme to terminate some people’s employment. | |
| ▲ | oro44 2 days ago | parent | prev [-] | | [flagged] | | |
| ▲ | johnnyanmac 2 days ago | parent [-] | | I don't even think it's stupidity. It's simple greed and an extreme case of Goodhart's law. Shareholders want to hear AI, so CEOs will burn the rest of the company to satisfy that. The company doesn't matter; they will get paid handsomely for destroying it. Shareholders only care about short term gains, CEOs have no skin in the game, everyone else under them wants to keep their job. None of these are aligned towards "make the best product and satisfy customers". | | |
| ▲ | oro44 2 days ago | parent [-] | | The counterpoint to this is Apple, who have barely invested a dime in LLMs. Their stock price has not been crushed at all, quite the opposite in fact. They focus on the good stuff. Perhaps that's the luxury of living off the vision and leadership of someone who died many years ago. Personally I believe the stock market is incredibly, incredibly shaky. Investors are now in full-fear mode; it doesn't matter what news Nvidia etc. print if customers of OAI and others are not seeing a meaningful INCREMENTAL increase in revenue generation or in cost reduction (aside from white-washing it with lay-offs from the insane hiring in the past). RE. stupidity - it is stupidity for the most part. Without the stupidity in quantity of demand, there is no market for LLMs from enterprise et al. Wanna know how stupid it is? Someone I know who works at Blackrock as a portfolio manager pretty high up is all of a sudden being forced to learn how to code and use LLMs to code. Yes you heard me right - this behaviour is expanding out of the software engineering profession. It's absolutely nuts. | | |
| ▲ | johnnyanmac 2 days ago | parent [-] | | Yeah, Apple always seems to go its own way. I wonder if it truly is a matter of a strong visionary (be it Cook or Jobs's legacy being upheld) or if shareholders simply come in with different mentality. Nintendo also has similar vibes. I see shareholder calls asking about AI usage and their answers come down to something like "we're not ruling it out, but we'll only use it when a situation presents itself". They tend to be pretty good at pushing back against their shareholders. Having a proper war chest instead of constantly funding on debt probably helps. > it is stupidity for the most part. Without the stupidity in quantity of demand, there is no market for LLMs from enterprise et al. Stupidity implies incompetence and lack of intent. Greed is incredibly intentional. There's always a bit of stupidity with greed (we even call such an algorithmic approach the "greedy method" after all), but I think they are important distinctions. I'll admit your blackrock example is plain stupidity, though. I know part of the end-goal is for "idea guys" to be able to make their ideas without pesky employees, but I don't think too many really think they can achieve that today. |
|
|
|
| |
| ▲ | graemep 3 days ago | parent | prev | next [-] | | Maybe what they really want her to do is get rid of 50% of her staff and the AI is just an excuse? In that case she should focus on "who can we do without?" rather than "how can we replace people with AI?" | | |
| ▲ | johnnyanmac 2 days ago | parent [-] | | I'm sure part of this mandate implied "if you can't show us the numbers we want, you're part of the 50%". And the incentives are set. |
| |
| ▲ | komali2 3 days ago | parent | prev | next [-] | | > Her boss mandated that managers replace 50% of the staff with AI within a year I bet we could replace nearly all the CEOs in the country with chatgpt controlling a ceo@thatcompany.com email and nobody would notice. | | |
| ▲ | bikelang 3 days ago | parent [-] | | We’d probably get better outcomes too. | | |
| ▲ | plagiarist 3 days ago | parent [-] | | For society, yeah, since the AI training corpus is more normal people than sociopaths. Shareholders would be mad, I bet. | | |
| ▲ | theandrewbailey 2 days ago | parent [-] | | > Shareholders would be mad, I bet. But think of how much profits will improve by not paying $tens of millions to employ a CEO! |
|
|
| |
| ▲ | leftytak 2 days ago | parent | prev | next [-] | | The assumption those managers have is that it’s easy to replace tech guys because AI is advancing, and crap like that. Funny enough, I got laid off last month (yes, I’m a tech guy), and now they apparently regret it because they are scrambling to find a replacement to do the tech tasks! TBH, I’m happy I got laid off because I’m finally building something I wanted to use. | |
| ▲ | PedroBatista 3 days ago | parent | prev [-] | | The perks and dread of middle management... |
| |
| ▲ | layer8 3 days ago | parent | prev | next [-] | | > I've never seen anything like it for a technically optional tool. It has often been the case for technologies though, like “now we’re doing everything in $language and $technology”. If you see LLM coding as a technology in that vein, it’s not a completely new phenomenon, although it does affect developers differently. | | |
| ▲ | augusto-moura 3 days ago | parent | next [-] | | Language and technology are normal, but we are talking about a metered code editor; nobody has asked me to "use X hours of IntelliJ IDEA" in the past, or to "use git enough or be fired". Tools are never required to be used when they are not needed | |
| ▲ | kjkjadksj 3 days ago | parent | prev [-] | | Well the tech or the language had some feature to it that lead you to using it. By definition LLM coding doesn’t. It is like the job requirement turned into “ask jeff to write all your code and if you don’t we won’t hire you.” | | |
| ▲ | layer8 3 days ago | parent [-] | | Technologies were imposed by management whether it made sense or not. Like, “all data exchange formats now have to use XML”, or “all applications must be J2EE now”, because it was the new hot thing. “You” weren’t making that choice, management imposed it. That’s the parallel I’m drawing. | | |
| ▲ | sarchertech 3 days ago | parent [-] | | That was usually only the case where everyone had to be using it or it didn’t work. One person hand-coding and one person using Claude still results in the same kind of output, compatible with each other. This is more like mandating that you use vim. I’ve never seen something like that before in 20+ years. | | |
| ▲ | layer8 3 days ago | parent [-] | | > That was usually only the case where everyone had to be using it or it didn’t work. Absolutely not, a lot was done just because it was pushed as the current fashion and advertised to be solving problems that either weren’t applicable to the concrete use case or that it didn’t actually solve. | | |
| ▲ | sarchertech 2 days ago | parent [-] | | You’re not understanding what I’m saying. If you as a company want to do OOP, having one guy writing everything in imperative style hinders everyone else because their code has to interact with his. AI isn’t like this because the final output is the same as hand coding. |
|
|
|
|
| |
| ▲ | khriss 3 days ago | parent | prev | next [-] | | This is largely due to the age-old fact that corporations rarely make decisions based on actual data, introspection, and good judgment. Usually the decision is made first and then the justifications are invented afterwards. In this case, every executive is terrified of being "left out" in the AI race. As we saw with the mass layoffs across companies, most CEO decision making is just adhering to herd behavior. So it is literally better for execs to have shoveled a shit ton of money into 'strategic' AI initiatives and have them fail than to deal with the potentially remote chance of some other exec or company succeeding with 'AI enabled transformation'. What makes it even more fun is that nobody really has a good understanding of how to measure the ROI of AI. Hence we have people burning a lot of money due to FOMO and no easy way of measuring the outcome, which is usually how the foundations for good Ponzi schemes are laid. This is unlikely to end well. However, as usual, it's us, the common plebs, who will suffer regardless of outcome. | |
| ▲ | jfreds 2 days ago | parent | prev | next [-] | | I am seeing this at my work right now. They are about to start using token consumption as _part_ of the performance review process. Obviously this is a coarse and problematic proxy for productivity. OTOH, it’s an attempt to address a real problem. There are people who are in fact falling behind (I’m talking literally editing code in notepad), and we can either let them get PIPped eventually, or try to bring them along. There is a real “activation energy” required to learning new tools, and some people need an excuse/permission. Not saying that token count is a GOOD signal, but I haven’t heard many better ideas | |
| ▲ | genthree 3 days ago | parent | prev | next [-] | | We're doing that in my office, forced Cursor use. A good chunk of the "edited by AI" lines in my history were just auto-completing about the same as a traditional intellisense-alike would do (and actually Cursor doesn't seem to supply that, which is frequently annoying and wastes my time, in particular when I need to make sure it hasn't hallucinated a method or property on an object it should be able to "see" the definition of, which it does constantly; IDK maybe there's a setting somewhere, but I don't have to fiddle with settings in vanilla VSCode to get that...) It's actually kinda useful in some cases, but the UI is terrible and it needs to integrate much better with existing tools that are superior to it for specific purposes, before I'll be happy using it. I'd say the productivity gains are a wash, for me, so far. Plus it's entirely too memory-hungry, I'd just come to accept that a text editor takes a couple GB now (SIGH), and here it comes taking way more than that. | |
| ▲ | abkolan 2 days ago | parent | prev | next [-] | | A certain YC company fired a few employees for not using AI, the CEO bragged about it on X and incidentally it was a crypto company. | |
| ▲ | whateveracct 3 days ago | parent | prev | next [-] | | Yes it's very weird - why is my CEO being so nosy about my text editor all of a sudden? Stay in your lane, buddy. | |
| ▲ | zephen 3 days ago | parent | prev | next [-] | | I don't doubt you, but I'm out of the loop. Who does this? | | |
| ▲ | adelie 3 days ago | parent | next [-] | | my company (mid-size, publicly traded) is mandating [x] hours spent on AI per week. i have no idea how they're planning on measuring this, and as far as i can tell, neither does management. suppose it's better than counting lines of code, though. | |
| ▲ | forgetfulness 3 days ago | parent | prev [-] | | My uncle leads IT support teams, the org is measuring AI use in writing reports and tickets. The org has very poorly structured and obsolete processes (he's trying to straighten them up as he goes), AI will probably amplify the lack of structure, by making it easier for the work to _look_ as if someone carefully reviewed the issues and followed procedure. A friend is a team lead in an org that's mandating vibecoding via "Devin", a lesser-known player an "architect" chose after a shallow review. The company also has endemic process issues and simply can't do deployments reliably, and it's behind the times in methodology in every other respect. Higher ups are placing their trust in a B-list agentic tool instead of fixing the problems. Anyway, I wouldn't be caught dead working at either of those two shops even before the AI rollout, but this is what's going on in the IT underworld. | | |
| ▲ | genthree 3 days ago | parent | next [-] | | I hate the AI assistants for ticket-writing. The beneficial use there would be to prompt for possibly-useful information that's not present, or call out ambiguity and let the writer decide how to resolve any of that. Coaching, basically. Suggesting actual text to include, for people who aren't already excellent at ticket-writing, just leads to noisier tickets that take more work to understand ("did they really mean this, or did the LLM just prompt them to include it and they thought sure, I guess that's good?") [EDIT] Oh and much of your post rings true for my org. They operate at a fraction the speed they could because of organizational dysfunction and failure to use what's already available to them as far as processes and tech, but are rushing toward LLMs, LOL. Yeah, guys, the slowness has nothing to do with how fast code is written, and I'm suuuuure you'll do a great job of integrating those tools effectively when you're failing at the basics.... | | |
| ▲ | forgetfulness 3 days ago | parent [-] | | Lots of organizations don't want to accept that their velocity issues are quality issues. It's often a view held by an old guard that was there when the business experienced growth by adding features, while not having to bear any maintenance burden. The people who remain are either also oblivious to this, or simply have stopped caring. LLM-generated code hits all the right notes, it's done fast, in great volumes, and it even features what the naysayers were asking for. Each PR has 20 pages of documentation and adds some bulk to the stuff in the tests folder, that can sit there looking pretty. How wonderful! Hell, you can even do now that "code review" that some nerd was always complaining about, just ask the bot to review it and hit that merge button. Then you ask the bot to generate the commands again for the deploy (what CI pipeline?) and bam! New features customers will love. And maybe data corruption. | | |
| ▲ | oro44 a day ago | parent [-] | | I don't get the craziness at all. A firm that is led by people who can envision, very clearly, revenue-generating and cost-reduction projects - wins. Writing code by hand is absolutely irrelevant. Who fucking cares. The former is what matters. Code generation acceleration only matters when those prerequisites are met. How did Apple go from the verge of bankruptcy to where it is today? All I'm seeing is that most people are not smart at all - no wonder they are so impressed by LLMs! They can't think straight. I only see this becoming even worse over time. Perhaps this is the stated goal. |
|
| |
| ▲ | fuzzfactor 3 days ago | parent | prev [-] | | Well before offices were computerized at all some of the manual processes turned out to be more effective than after full computerization was completely accomplished. Which was sometimes decades later so nobody could tell which workflows it actually applied to, or wouldn't believe it anyway by the 21st century. It was truly quite rare to have such well-honed manual processes though, the "average" place had a lot of elements that were far from perfect but still benefited after the computerization dust had settled. Then at the opposite end of the spectrum were companies where everything was an absolute shitshow, maybe since the beginning. That's kind of where Conway's Law comes from, if you benchmark against a manual shitshow that has built up over the years, and replace it with a computerized version, you get a shitshow on steroids. The only other choice would have been to spend the appropriate number of years manually undoing the shitshow before making any really bold moves. Now AI can really take things to a whole 'nother level, not just on steroids but possibly violating Conway's Law . . . squared. |
|
| |
| ▲ | diehunde 3 days ago | parent | prev | next [-] | | As an employee of a big tech company doing this, it's all fear mongering. We are being told that if everyone doesn't use these tools, our competitors will wipe the floor with us because they are using them and will ship features 10x faster. But many engineers are suspicious as well. | | |
| ▲ | sarchertech 3 days ago | parent | next [-] | | I’m at a big tech company too and essentially no one is a true believer more than 3 or 4 levels down from the top. We’re all just trying to keep our use metrics high enough to not get noticed. But for those top layers, I’ve never seen so much FOMO in all my life. We’re a very slow moving company but they act like we’ve got 2 weeks to go “AI first” or we’re dead in the water. I’ve never seen such a successful hype cycle. I’m pretty sure it’s the bots that are accelerating it so far beyond a normal hype cycle. | |
| ▲ | AnimalMuppet 2 days ago | parent [-] | | Maybe AI is really good for vibe-coding bots that amplify FOMO? | | |
| ▲ | sarchertech 2 days ago | parent [-] | | It’s really good at spitting out prose that is frequently good enough to pass as human and bypass spam filters. |
|
| |
| ▲ | oro44 2 days ago | parent | prev | next [-] | | Hook, line, and sinker. Right, so you are going to be left behind while the ground keeps shifting under you, given the models are non-deterministic and continuously changing? There was a big rush of prompt engineers. Where are they now? Nobody even refers to 'prompt engineering' anymore. The best thing to do is wait for steady state. What's going on is insane... a slow implosion of the code base. | |
| ▲ | johnnyanmac 2 days ago | parent [-] | | Sadly, the quote on markets and solvency rings true here. Tech (among an increasing number of sectors) is being hit hard by layoffs over this. Nothing is steady right now. |
| |
| ▲ | zrail 3 days ago | parent | prev [-] | | It's baffling, to be honest. I'm at a fintech that is currently pushing very hard at this, but in the same breath talking about how we're not a pure software play. I just don't understand where they're coming from. | | |
| |
| ▲ | MichaelRo a day ago | parent | prev | next [-] | | >> It just sounds like a giant scheme to burn through tokens and give money to the AI corps, and tech directors are falling for it immediately. Exactly this: "Jensen Huang says he would be 'deeply alarmed' if his $500,000 engineer did not consume at least $250,000 of tokens" : https://www.businessinsider.com/jensen-huang-500k-engineers-... | |
| ▲ | mh- 3 days ago | parent | prev | next [-] | | The only thing I've mandated for engineers is that folks give it a try occasionally, as models, best practices, and tooling improve. I'm currently tracking exactly two numeric metrics: total MAUs (to track the aforementioned), and total DAUs (to gauge adoption and rightsize seat-licensed contracts.) | |
| ▲ | jrjeksjd8d 3 days ago | parent | next [-] | | Why do you care so much? If these are really revolutionary tools that vastly optimize work, why bother forcing people to "try new models and best practices"? If the benefit is there people will use it or get left behind, there's no sense having a mandate that people resentfully try the new tooling. Imagine you had a developer who writes Java using vim. It sounds insane but they are just as productive as everyone else. Then you mandate they have to try IntelliJ every quarter, just to see if maybe they like it now. You're just going to piss them off and reduce their productivity by mandating their workflow. FWIW in the face of these kind of mandates I have been using tokens but ignoring the output. So it's costing my employer money and they have a warped metric of whether the tool is actually useful. | | |
| ▲ | GetTheFacts 3 days ago | parent | next [-] | | >If these are really revolutionary tools that vastly optimize work, why bother forcing people to "try new models and best practices"? "If the colleges were better, if they really had it, you would need to
get the police at the gates to keep order in the inrushing multitude.
See in college how we thwart the natural love of learning by leaving
the natural method of teaching what each wishes to learn, and insisting
that you shall learn what you have no taste or capacity for. The
college, which should be a place of delightful labor, is made odious
and unhealthy, and the young men are tempted to frivolous amusements to
rally their jaded spirits. I would have the studies elective.
Scholarship is to be created not by compulsion, but by awakening a pure
interest in knowledge. The wise instructor accomplishes this by
opening to his pupils precisely the attractions the study has for
himself. The marking is a system for schools, not for the college; for
boys, not for men; and it is an ungracious work to put on a professor." -- Ralph Waldo Emerson | |
| ▲ | ianm218 3 days ago | parent | prev | next [-] | | > Why do you care so much? If these are really revolutionary tools that vastly optimize work, why bother forcing people to "try new models and best practices"? If AI makes an employee 10X more productive they get a slight pay raise maybe, but the company makes substantially more money or gets substantially more output. So there is a large difference in incentives. | | |
| ▲ | mh- 3 days ago | parent [-] | | This is true, though I believe savvy employees have leverage to ensure they participate in a larger share of that upside. As you can see from other comments, lots of people will just drag their heels and not give it a good-faith attempt, so it'll often average out in the way you predict. | | |
| ▲ | xantronix 3 days ago | parent [-] | | Are you budgeting time to allow people to properly evaluate LLMs and possibly struggle with them? This is not the sort of new tool whose utility is universally immediately obvious to all builders and craftsmen out there. Are you willing to pay down the likely debt of some individual contributors never clicking with this, or being outright resentful toward the technology or the mandates? There is a LOT of self-selecting bias from LLM proponents assuming everybody else is willing or able to travel the same path as them. | |
| ▲ | mh- 3 days ago | parent [-] | | > Are you budgeting time to allow people to properly evaluate LLMs and possibly struggle with them? Great question. That is absolutely the goal. My take is that building with LLMs - at least with the current popular harnesses like Claude Code - is a skill on its own, and people need time to develop that skill and also to figure out where these tools might fit into their workflows. > Are you willing to pay down the likely debt of some individual contributors never clicking with this or being outright resentful toward the technology or the mandates? I'll be honest as I have been elsewhere in the thread: A few years from now, I don't know what the state of the technology or its adoption will be, or what expectations of software engineers at large will be. But for the foreseeable future, yes, absolutely, I'm willing to give engineers the time and space to develop familiarity and comfort with the tools, as long as they're engaging in good faith. edit: oops, didn't mean to dodge the last part of your question (re: resentment): I genuinely don't know the answer to how I'll handle that, but I'm also sure it'll happen. Hopefully I'll still be in a position to speak publicly about how one can deal with those challenges. edit 2: also, thank you for the thoughtful questions and dialogue. | |
|
|
| |
| ▲ | mh- 3 days ago | parent | prev [-] | | > FWIW in the face of these kind of mandates I have been using tokens but ignoring the output. So it's costing my employer money and they have a warped metric of whether the tool is actually useful. What you're actually doing here, from my POV, is incentivizing your employer to use more invasive metrics when they tried to stay hands-off and mandate the absolute bare minimum of "uh, give it a shot and see if you think it's useful right now." The analytics that Claude Enterprise exposes are far more intrusive than I would want to be subjected to as an engineer, so I rolled out a compromise. I don't even track who the active users are, currently. But maybe you're right, and there are enough people sabotaging the metrics out of spite that there's a reason they provide the other data. I hope that the engineers in my org are more mature than that, and would be willing to just say "I'm not currently using it", but thanks for giving me something to think about. | |
| ▲ | ryandrake 3 days ago | parent | next [-] | | > mandate the absolute bare minimum of "uh, give it a shot and see if you think it's useful right now." That’s not the bare minimum, though. The bare minimum is: “if you are meeting or exceeding your job expectations, great work, keep using the tools that are working for you.” To a productive employee, merely saying “just try out AI, it might help” feels like the boss saying “just try out astrology or visit a psychic for a reading. You might find it interesting.” | |
| ▲ | jrjeksjd8d 3 days ago | parent | prev | next [-] | | When the CEO, CTO and Director are all saying "everyone has to use AI" I think it's pretty naive to think people will speak out openly. The bare minimum would be making the tools available and letting people do their jobs. Go ahead and spend more time collecting more granular metrics spying on your employees. Apparently there aren't more valuable things for you to do than micromanaging individual developers. | |
| ▲ | kaffekaka 3 days ago | parent | prev | next [-] | | I think one side of the issues folks are having is that combined with the mandate to use these tools, there is also an expectation or assumption that the developers will instantly get X% more productive. Like, "you must use this tool and you will be twice as productive". Where I work there has certainly been that kind of discussion, "we need to use AI for this, because no offense but you are simply not fast enough". And this from people who do not understand software development and have never worked with it. They have only read the online stuff about 20X speeds and FOMO. (And my workplace is generally quite laid back and reasonable. I am sure many other places are much more aggressively steered.) | |
| ▲ | sarchertech 3 days ago | parent | prev [-] | | >more invasive metrics If you have accurate metrics to gauge developer productivity then use them. But you don’t, because if you did you’d be a billionaire. What you have is metrics that can measure developer busyness. If you use those metrics all you’ll do is run your good devs off and keep the ones who can’t find new jobs. So you’ll have to do what anyone who manages software teams has always done and trust your line managers to manage your devs. When it comes to people wasting tokens, most people aren’t gonna do it with the intent to fuck your metrics. But if you tell people you are measuring something they will find a way to increase that metric whether it results in anything productive or not. |
|
| |
| ▲ | tjpnz 3 days ago | parent | prev | next [-] | | Making the tools available is one thing, but saying you're mandating their use at any level sounds like micro management to me. How would you feel if one of your subordinates started telling you how to do your job? I'm sure you would be mightily pissed off about it. | | |
| ▲ | mh- 3 days ago | parent [-] | | I don't think telling people what their job is counts as micro management. Part of their job right now is staying abreast of technological developments and experimenting with new ways of working. Re: some of them being upset about it- probably. Some people are also upset about being required to use Jira. I personally dislike using Okta. | | |
| ▲ | skydhash 3 days ago | parent | next [-] | | It is micromanagement. If the job is not being done, the best way is to investigate what current practices are blocking people from doing it (the answer is probably meetings and bad communication). The worst way is to present a tool as a silver bullet for tasks you’re not doing and not accountable for. | |
| ▲ | mh- 3 days ago | parent [-] | | Where am I presenting the tool as a silver bullet? You seem to be confusing me with someone else in this thread, or making the mistake of turning this into a polarized conversation of "AI is a panacea" vs "AI is worthless". I engaged in the thread in good faith, and am transparent about what I'm doing and why. I also clarified that part of the job in my org is experimenting with these tools. | | |
| ▲ | skydhash 3 days ago | parent | next [-] | | The complaint in the thread is that management is forcing AI tooling usage. If part of your job is to experiment with these tools, then like any experiment, the correct way is to share the findings with a report detailing the methodology and findings. But no one is doing that AFAIK. It’s all superlatives. | |
| ▲ | sarchertech 3 days ago | parent | prev [-] | | Have you ever monitored and encouraged the use of a particular text editor or IDE? If you had an employee whose manager thought they were a high performer, but you noticed they used notepad, would you encourage them to regularly give vim a try? The reason we force people to use Jira is because it only works if everyone uses it. AI doesn’t work like that. If it does enhance productivity 50% then use will spread and the expectations of your line managers will naturally go up and the holdouts won’t be able to keep up. Or only the exceptional ones will. And in that case why do you care how they do it? | |
| ▲ | tetromino_ 2 days ago | parent [-] | | > The reason we force people to use Jira is because it only works if everyone uses it. In my experience, AI out of the box is at first a useless gimmick - until someone starts seriously playing with it and defines a skill file for integrating it with some internal tool. And another person starts playing with it and figures out that AI is pretty good at using another internal tool but only if the tool runs in --silent=1 mode by default, so as not to confuse AI with too much logging output. And a third person figures out that it's actively dangerous to let AI use some other internal tool - but hey, there's a safer alternative, which happens to perform better too. And pretty soon you end up with an ecosystem of business-specific scripts and .md files and skills and MCPs that's actually helpful 85%+ of the time. But the only way to get there is to get devs and power users tinkering with it. | |
| ▲ | sarchertech 2 days ago | parent | next [-] | | In my experience all the md files just pollute the context and make it less likely to do what I want it to do. I’m at a huge org with thousands of power users doing all of this and I haven’t seen anything resembling the results you’re seeing. But even assuming this is the case, you don’t create enthusiastic power users with threats (implicit or explicit) and metric tracking. The only thing that does is force people to do the minimum to keep their job. | |
| ▲ | 2 days ago | parent | prev [-] | | [deleted] |
|
|
|
| |
| ▲ | johnnyanmac 2 days ago | parent | prev [-] | | >Part of their job right now is staying abreast of technological developments and experimenting with new ways of working. Not necessarily. A carpenter's job is to make things, not to use specific tools or keep up with the latest repair tooling. It can be suggested, but telling a carpenter which tools to use definitely falls under micromanagement. >Some people are also upset about being required to use Jira Jira's job is to report metrics to management. That's implicit to the job. Telling people how to perform their tickets is micromanagement. The whole point of a non-junior employee is that you can trust them to estimate and accomplish their tasks. |
|
| |
| ▲ | Henchman21 3 days ago | parent | prev [-] | | What's your plan for when someone flatly refuses? | |
| ▲ | mh- 3 days ago | parent [-] | | I'll cross that bridge when I get there. No one who works for me has refused to be paid to try out a new technology when I ensure the time is set aside for them to do so. |
|
| |
| ▲ | 1dontnkow_ 2 days ago | parent | prev | next [-] | | I was thinking about this, but sometimes it's hard to distinguish between what the marketing of OpenAI, Anthropic, and other companies says and what all the companies throughout the world are actually doing. Any thoughts on that? For example, even the layoffs nowadays seem to be because of AI, or so they say, but just a year or two ago there were quite some layoffs and people said "it's because of the high demand during COVID and now it's over, or Ukraine, or inflation." During that earlier period there were also many layoffs, and the easy explanation was "Oh, COVID and supply chains!", and earlier maybe something else. Surely there are also economic booms, but when did the whole world suddenly start seriously listening to public statements of companies (and just a few of them with no real income, only money from VCs), while nobody shows us the real data of what's actually happening? E.g. the companies saying they fired 10K due to AI: how much did they actually redirect their budget to AI? How many products are actually being built? Is productivity the same? Do customers think support is suddenly amazing, or has it actually seriously dropped in quality? Or no change at all? Is it a company like KFC, your local hardware chain store, financial institutions, truck manufacturers, or another AI company with funding using yet another funded AI company's products, all the way up to the power suppliers? For me it seems it's definitely impacting things and it's a cool technology for being more productive (for example it helps me a lot daily, but it's not like my life really changed), but the other things I haven't seen yet. Another point: each actual AI-generated app is either something akin to a toilet game or not really working (like the C compiler). So where are the amazing, complicated enterprise apps fully built via agents?
In banks, in government, in apps that respect GDPR and are actually secure, but proudly built only or mostly with agents? The only ones, not even secure, are other AI apps to do AI stuff, whose whole claimed value is making the "real" economy more productive, yet it still hasn't done that anywhere. People still struggle with Word, or AWS infra, or debugging why some specific user can't log in with their custom auth provider in some esoteric region with its own laws and audits and GDPR variant. So one side says it's basically a tool from God and they've never created more stuff, but on the other hand the other group, analyzing blood work, delivering food, writing reports, etc., uses it a bit or not at all, and 95% of the problems they had are still there, with some new ones. Also I'm afraid most of them just write their emails better now, or with more volume, but no real work is getting done. So yeah, maybe my confusion simply lies in the fact that I have a real job and nobody can keep up with all the slop and shit generated online anymore. I'm open to feedback or to learn. | |
| ▲ | suhputt 3 days ago | parent | prev | next [-] | | [dead] | |
| ▲ | cineticdaffodil 2 days ago | parent | prev | next [-] | | [dead] | |
| ▲ | jmalicki 3 days ago | parent | prev [-] | | I've also never seen an optional tool become a step change like this. Even moving from assembly language to compiled languages was not as much of a step change. |
|
|
| ▲ | nemomarx 3 days ago | parent | prev | next [-] |
| It also seems like skills with particular tech (prompt engineering, harnesses, mixture-of-experts setups) don't always pay off when there's a sea change. Hard to predict what you'll want in a few years anyway, right? |
| |
| ▲ | Aurornis 3 days ago | parent | next [-] | | > (prompt engineering, harnesses, mixture of experts set ups) Prompt engineering as a specific skill got blown out of proportion on LinkedIn and podcasts. The core idea that you need to write decent prompts if you want decent output is true, but the idea that it was an expert-level skill that only some people could master was always a lie. Most of it is common sense about having to put your content into the prompt and not expecting the LLM to read your mind. A harness isn’t really a skill you learn. It’s how you get the LLM to interact with something. It’s also not as hard as the LinkedIn posts imply. Mixture of Experts isn’t a skill you learn at all. It’s a model architecture, not something you do. At most it’s worth understanding if you’re picking models to run on your own hardware, but for everything else you don’t even need to think about this phrase. I think all of this influencer and podcast hype is giving the wrong impression about how hard and complicated LLMs are. The people doing the best with them aren’t studying all of these “skills”, they’re just using the tools and learning what they’re capable of. | |
| ▲ | Izkata a day ago | parent [-] | | > It’s also not as hard as the LinkedIn posts imply. Keeping in mind the LinkedIn posters/audience (marketers/recruiters), it probably was quite hard for most of them. |
| |
| ▲ | bdcravens 3 days ago | parent | prev | next [-] | | In my experience (and this may be confirmation bias on my part), casting a wide net and trying out new tech, while you maintain depth in the area relevant at the time, makes you ready for what's coming, even when you don't know what that may be. | | |
| ▲ | zephen 3 days ago | parent [-] | | Curiosity is good and helps with your personal development, for sure. OTOH, tfa specifically said: > I feel the same way about the current crop of AI tools. I've tried a bunch of them. Some are good. Most are a bit shit. Few are useful to me as they are now. I'm utterly content to wait until their hype has been realised. So, it's not like he's being deliberately ignorant; rather, he's simply slow-walking his journey. |
| |
| ▲ | bonesss 3 days ago | parent | prev | next [-] | | Past the sea change: half the reason those prompt and harness solutions seem to work is LLM lies - the testing is gassing you up about how it works and its efficacy, defaulting to ‘yes’. If you test specific features of those solutions over time you see very inconsistent results, lots of lies, and seemingly stable solutions that one-shot well but suddenly experience behaviour changes due to tweaks on the backend. Tuesday’s awesome agent stack that finally works is loading totally differently on Thursday, and debugging is “oh, sorry, it’s better now” even when it isn’t. Compression, lies, and external hosting are a bad combo. Sometimes I imagine a world where computers executed programs the same way each time. You could write some code once and run it a whole calendar month later with a predictable outcome. What a dream, we can hope I guess. | |
| ▲ | skydhash 3 days ago | parent [-] | | People are doing toy projects and praising them, while some are testing them in real-world situations and not finding them that useful. But the former are labelling the latter as Luddites and telling them they will be left behind. | |
| ▲ | abustamam 3 days ago | parent [-] | | As someone on the intersection of both (I've built a lot of vibe coded toy projects and lead a vibe coding initiative at work), they're both right and both wrong. For a single dev team, vibe coding is great. Write specs, write plans, write code. I know what the project wants and needs because I'm the target market. At work, I haven't written more than a few lines of code since December. But I work with other people vibe coding this same project. Lots of changing requirements and rapid iteration. Lots of mistakes were made by everyone involved. Lots of tech debt. Sure, we built something in 2 mos that would have otherwise taken us 6 mos, but now I'm fixing the mess that we caused. I think the critical difference is the attitude towards our situation. My boss said to fix the AI harness so we can vibe code more confidently and freely. But other bosses might cut their losses and ban vibe coding. Who's right? I dunno. In both cases I'd just do what my boss wants me to do. But it's not that I don't want to be left behind. I don't want to lose my job. There's a difference. | | |
| ▲ | patrick451 2 days ago | parent [-] | | > Sure, we built something in 2 mos that would have otherwise taken us 6 mos, but now I'm fixing the mess that we caused. You didn't actually build it in 2 months. | | |
| ▲ | abustamam 2 days ago | parent [-] | | Even if it takes me a month to fix it (likely a week tbh), then it took us 3 months to build. | |
| ▲ | herewulf 2 days ago | parent [-] | | A mere 2x productivity improvement sounds like something you could achieve by introducing new tools that are predictable (i.e.: Not AI). | | |
| ▲ | abustamam a day ago | parent [-] | | Perhaps. 2x is still 2x. And new tools still need to be vetted and learned. It's strange that the goalpost seems to have moved from "AI is net negative to productivity" to "only 2x improvement isn't worth it" |
|
|
|
|
|
| |
| ▲ | dakolli 3 days ago | parent | prev | next [-] | | All of these occulted skills, which we literally can't explain why they work, are akin to gamblers' superstitions. If I write something this way, it works. It's like a gambler who thinks the order in which they push the buttons on the slot machine makes a difference. Kind of weird that the tools also incorporate addictive gambling games' UX design. They're literally allowing you to multiply your output: 3x, 4x, 5x (run it 5 times for a better shot at a working prompt). You're being played by billionaires who are selling you a slot machine as a thinking machine. | |
| ▲ | zephen 3 days ago | parent [-] | | > All of these occulted skills, that we literally can't explain why they work are akin to gamblers superstitions. Yes, it's hard to see how, at this moment in time, "Anybody can write code with an LLM" is so different from "Anybody can make money in the stock market." The underlying mechanisms are completely different, of course, and the putative goal of the LLM purveyors is to make it where anybody really can write code with an LLM. I'm typically a nay-sayer and a perfectionist, but many not-so-great things become and stay popular, and this may fall into that category. > Kind of weird tools also incorporate addictive gambling game's UX design. It's unclear it started out this way, but since it's obviously going this way, it is certainly prudent to ask if some of this is by design. It would presumably be more worrisome if there were only a single vendor, but even with multiple vendors, it might be lucrative for them to design things so that "true insider knowledge" of how to make good prompts is a sought-after skill. | | |
| ▲ | oro44 2 days ago | parent [-] | | Broadly speaking, LLMs are destined to fail. Why? Because all the folks involved have created a technology in search of a problem to solve. That never, ever works. Steve Jobs of all people left this piece of wisdom behind. It's amazing how few actually apply it. The internet was never this; its origins go back to the need to be able to transmit data (DARPA). And this is what we still do now... | | |
| ▲ | zephen 2 days ago | parent | next [-] | | There are a few examples of technologies that only found their application later, such as the glue in post-it notes. And to be fair, Steve Jobs was a master of taking things that had been invented elsewhere, and making them work well enough to foster a demand. But your point stands. Who made the most money, Xerox PARC, or Apple? | |
| ▲ | gnabgib 2 days ago | parent | prev [-] | | Can you stop using them? https://news.ycombinator.com/item?id=47462767 | | |
| ▲ | oro44 2 days ago | parent | next [-] | | I dont use them. | | |
| ▲ | zephen 2 days ago | parent [-] | | The only thing worse than the overuse of AIs is the ever-present hand-wringing and finger-pointing of people who wrongly believe they are infallible AI detectors. | | |
| |
| ▲ | oro44 a day ago | parent | prev [-] | | [flagged] |
|
|
|
| |
| ▲ | dw_arthur 3 days ago | parent | prev | next [-] | | Even two or three years ago I had ideas for projects but I could see the models were not ergonomic for my uses. I decided to wait for better models and sure enough the agentic models showed up which are much easier to use. Next thing I'm waiting on is building a new server for a powerful locally hosted LLM in 5 years. No need to go through the headaches and cost of doing it now with models that may not be powerful enough. | |
| ▲ | stiiv 3 days ago | parent | prev [-] | | Agreed! Investing lightly at this stage seems smart if your time/attention budget is tight. |
|
|
| ▲ | imtringued 3 days ago | parent | prev | next [-] |
I think this is particularly evident with AI. The early adopters started years ago, and the improvements they've seen over time they started attributing to their own skill. They tell you that if you didn't spend years prompting the AI, it will be difficult to catch up. However, the exact opposite is happening. As the models get better, the need for the perfect prompt wanes. Prompt engineering is a skill that is becoming obsolete faster than handwriting code. I personally started using Codex in March, and honestly, the hardest part was finding and setting up the sandbox (I use limactl with QEMU and KVM). Meanwhile, the agentic coding part just works. |
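For the curious, the sandbox part boils down to something like the following Lima guest definition. This is an illustrative sketch only: the keys (vmType, cpus, memory, mounts) are real Lima config options, but the VM name, resource sizes, and mount paths are placeholders, not the commenter's actual setup.

```yaml
# Hypothetical Lima instance config for sandboxing a coding agent.
# The key names are real Lima options; all values here are made up.
vmType: "qemu"        # QEMU backend; uses KVM acceleration on Linux hosts
cpus: 4
memory: "8GiB"
mounts:
  # Expose only the repo the agent should touch, and make it writable.
  - location: "~/src/myproject"
    writable: true
# Nothing else from the host is mounted, so the agent can run
# arbitrary commands without reaching the rest of the filesystem.
```

Saved as, say, `codex.yaml`, this would be started with something like `limactl start codex.yaml` and entered with `limactl shell codex` to run the agent inside the VM.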
|
| ▲ | vablings 3 days ago | parent | prev | next [-] |
There really isn't anything special to using AI anyway; it's not rocket science. Sometimes I will use AI to write me some Tailwind tags, sometimes I will use AI to write me a static site for a custom report. Most of my AI usage comes from doing things I don't enjoy doing, like making a series of small tweaks to a function or block of code. Honestly, I just levelled the playing field with vim users, and it's nothing to write home about. |
| |
| ▲ | mpalmer 2 days ago | parent [-] | | > There really isn't anything special to using AI anyways it's not rocket science Could have fooled me, the way some people manage to confuse themselves with it |
|
|
| ▲ | II2II 3 days ago | parent | prev | next [-] |
I almost entirely agree with the author's assessment of new technology. Yet that statement rubbed me the wrong way. Sometimes it is better to get into things early, because a technology grows more complex as time goes on, so it is easier to pick up early in its development. Consider the Web. In the early days, it was just HTML. That was easy to learn. From there on, it was simply a matter of picking up new skills as the environment changed. I'm not sure how I would deal with picking up web development if I started today. |
| |
| ▲ | ashwinsundar 3 days ago | parent | next [-] | | This isn't a good example - people were completing 6-month bootcamps and getting $100k offers to do web development not too long ago, decades after the web and HTML took off. After a few years they were making as much as anyone who learned HTML and Web 1.0 back in the 90s. Are the bootcampers better developers? Probably not. But they still were employable and paid relatively the same. | |
| ▲ | vunderba 3 days ago | parent | prev | next [-] | | I think this applies a bit less to the AI sphere, which has the purported goal of making things easier and more automated over time. 90% of the time, if you have an AI question you can just... ask the LLM itself. Remember all the hoopla over how people needed to be a "prompt engineer" a couple of years back? A lot of that alchemy is basically obsolete. Think about the hoops you had to jump through with early GenAI diffusion models: tons of positive prompt suffixes (“4K, OCTANE RENDER, HYPERREALISTIC TURBO HD FINAL CHALLENGERS SPECIAL EDITION”) bordering on magical incantations, samplers (Euler vs. DPM), latent upscalers, CFG scales, denoising strengths for img2img, masking workflows, etc. And now? The vast majority of people can mostly just describe the desired image in natural language, and any decent SOTA model can handle the vast majority of use cases (gpt-image-1.5, Seedream 4, Nano Banana). Even when you're running things locally, it's still significantly easier than it used to be a few years ago, with options like Flux and Qwen, which can handle natural language, along with a nice intuitive frontend such as InvokeAI instead of the heavily node-based ComfyUI (which I still love but understand is not for everybody). | | |
| ▲ | xboxnolifes a day ago | parent [-] | | Things will never be easier long term; you will just be expected to get more done. And if you don't spend time learning the tools, you will get less done than your competition. The goalposts will move. |
| |
| ▲ | mekoka 3 days ago | parent | prev | next [-] | | "It will grow more complex" is never a good reason to get into things early. It's just your mind playing FOMO tricks on you. Many developers who picked up the web in the early years struggle with (front-end) web development today. It doesn't matter if they fetched jQuery or MooTools from some CDN as it was done in the mid-'00s. Once the tooling became too complicated and ever-changing, they couldn't keep up as front-end dilettantes. It required committing as professionals. If you started today, you'd simply learn the hard way, as it's always been done: get a few books or register for a course. Carve out some time every day for theory and practice, all the while prioritizing what matters most to get stuff done quickly right now, with little fluff. You will not learn Grunt, Bower, and a large array of historic tech. You'll go straight for what's relevant today. That applies to abstractions, frameworks, and tooling, but also to the fundamentals. You'll probably learn ES6+ and TS, not JS WAT. A lot of the early stuff seems like an utter waste of time in retrospect. This is true for all tech. If you knew nothing about LLMs by the end of this year, you could find a course that teaches you all the latest relevant tricks in 5 to 10 hours for 10 bucks. | | |
| ▲ | apsurd 3 days ago | parent | next [-] | | No, this thread and sub-discussion is about specifically early web fundamentals. The web is special in this sense, it's intentionally long-lived warts and all. So the fundamentals pay outsized dividends. The rube goldberg machine that is modern JS dev still spits out an index.html result. Being a good professional developer means getting the primitives and the data model not horribly pointed in the wrong direction. So it's extremely helpful to be aware of those primitives. And the argument "nobody is better off knowing assembly as a primitive" doesn't hold because as-said the web is literally still html files. It's right there in the source. | | |
| ▲ | mekoka 3 days ago | parent [-] | | The discussion is centered around the idea that "adopting early" provides some future-proofing in a rapidly evolving (and largely non-standard) terrain. I share the FA's position that it does not. > The web is special in this sense, it's intentionally long-lived warts and all. So the fundamentals pay outsized dividends. Fundamentals pay dividends, but what makes you think that what you learn as an early adopter counts as fundamentals? Fundamentals are knowledge that is deemed timeless, not "just discovered". The historical web and its simplicity are as available to anyone today as they were back then. People can still learn HTML today and make table-based layouts. HTML is still HTML, whether you learned it then or today. But if back then you intended to become a professional front-end developer, you would still have to contend with the tremendous difficulties that some seem to have forgotten out of nostalgia. You'd soon have to also learn CSS in its early and buggy drafts, then (mostly non-standard) JavaScript (Netscape and IE6) and the multiple browser bugs that required all kinds of hacks and shims. Then you'd have to keep up with the cycles of changing front-end tools and practices, as efforts to put some sense into the madness were moved there. Much of that knowledge went nowhere, since it was not always part of a progression but rather a set of competing cycles. Fundamentals are indisputably relevant, but they're knowledge that emerges victorious after all the fluff of uncertainty has been left behind. Front-end development is only now settling into that phase. With LLMs we're still figuring out where we're going. | | |
| ▲ | sarchertech 3 days ago | parent | next [-] | | This sounds exactly right. I'm someone who learned the web back when IE6 was something we wished everyone was on, and also someone who learned the fundamentals of the web and CS in general enough to try writing a book about it to teach everyone else. Picking up the web early didn't help with the latter. I spent most of my early time memorizing tips and tricks that only applied to old browsers. I didn't pick up the fundamentals till I went back to school for CS and took a networking class. | | |
| ▲ | apsurd 3 days ago | parent [-] | | Web fundamentals and web development fundamentals are different. How HTML, CSS and JavaScript come together was extremely relevant to developers 20 years ago and remains so today. I do support and agree with the parent comment, see the discussion, but I do credit getting into web development when it was raw and open with paying dividends for me. Today's ecosystem is opaque in comparison. You don't think there's more friction today? | | |
| ▲ | sarchertech 2 days ago | parent [-] | | HTML, CSS and JavaScript are just a small subset of web development. And yes, understanding them is still relevant. But when I started, I was spending more time memorizing the quirks of IE6 than I was learning how JavaScript, CSS, and HTML come together. I think if you start directly in React you don't learn the layer below it, sure. But there's no reason you have to start with React. There's nothing inherent about starting today that forces you to begin directly with React. You could start by building a static webpage. And if you did that, it would be easier and more fundamental than doing the same thing 20 years ago, because you can ignore most of the non-standard browser quirks. |
|
| |
| ▲ | apsurd 3 days ago | parent | prev [-] | | Good points and thoughtful reply. You're right, fundamentals are distilled, so to think they are free just by getting in early is likely backwards. And earning one's professional chops doesn't stop or start based on when you enter. Web dev definitely is nostalgic. I miss the early days but I also conveniently erased ie6, binding data to HTML, the need for backbone and jQuery to do anything. hmmm yeah doesn't matter when you start, it's all a grind if you dig deep enough. | | |
| ▲ | mekoka 3 days ago | parent [-] | | > I also conveniently erased ie6 Also known as PTSD-induced amnesia, haha. We all tried to forget. |
|
|
| |
| ▲ | bdangubic 3 days ago | parent | prev [-] | | > Once the tooling became too complicated and ever-changing, they couldn't keep up as front-end dilettantes. It required committing as professionals. The best professionals did not fall for the insanity of the modern front-end dilettante and continued hacking shit without that insanity. > You will not learn Grunt, Bower, and a large array of historic tech. You'll go straight for what's relevant today. which will be outdated "tomorrow" just like Grunt/Bower are looked at today > A lot of the early stuff seems like an utter waste of time in retrospect. This cannot be further from the truth. If you learned JavaScript early, like really learned it, that mastery gets you far today. The best front-end devs I know are basically JavaScript developers; everything else is "tech du jour" that comes and goes, and the less of it you invest in, the better off you'll be in the long run. > If you knew nothing about LLMs by the end of this year, you could find a course that teaches you all the latest relevant tricks in 5 to 10 hours for 10 bucks. Hard disagree with this unless you are doing simple CRUD-like stuff | | |
| ▲ | mekoka 3 days ago | parent | next [-] | | > The best professionals did not fall for the insanity of the modern front-end dilettante and continued hacking shit without that insanity. "Front-end professional" and "no tooling" have been exclusive propositions since the early 2010s. You either learned to use tools or you were out of the loop. > which will be outdated "tomorrow" just like Grunt/Bower are looked at today Not really. Historically, the main problem with front-end development has not been change, but the pace of it. That's how it ties in with the current discussion regarding the (now) ever-changing terrain of LLM-assisted coding. Front-end development is still changing today, but it's coalescing and congealing more than it's revolving. The chasms between transitions are narrowing. If you observe how long Webpack lasted and how familiarity with it carried over to using Vite, it's somewhat safe to expect that the latter will last even longer and that its replacement will be a near copy. Someone putting time into learning front-end skills today might reap the benefits of that investment longer. > if you learned JavaScript early, like really learned it, that mastery gets you far today. I did. I got a copy of the Rhino book 4th ed. and read it cover to cover. I would not advise learning JS today from historical references. JS was not designed like most other languages. It was hastily put together to get things done, and it had a lot of "interesting", but ultimately undesirable, artifacts. It only slowly turned into a more sensible standard after the fact. Yes, there are some parts that are still in its core identity, but a lot in the implementation has changed. Efforts like "JavaScript: The Good Parts", further standardization, and TS helped to slowly turn it into what we know today. You don't need to travel back in time for that mastery. Get a modern copy of the Rhino book and you'll be as good as the best of them. | |
| ▲ | aquariusDue 3 days ago | parent | prev [-] | | Yeah, I still get use out of XMLHttpRequest to this day; good thing I got in early, and variable hoisting isn't gonna get me! /s A lot of snark aside, there's a bit of a false dichotomy (I think) at work here. Whenever or wherever your jumping-in point is into $something, it will always pay dividends to learn the fundamentals of that $something well, and unless you interact with older iterations of that $something, you'll never have to bother learning the equivalent of Grunt, Gulp, Stylus, Nunjucks and so on for that $something. With that being said, it's also good to put aside time once a year to check out a good recommended (and usually paid) course from an established professional, aimed at busy professionals. As for LLMs, I feel they're slowly becoming a big enough thing that people will have to consider where to focus their energy starting in 2027. Kinda like how some people branched from web development into backend, frontend and UI/UX a good while back. Do you want to get good at using Claude Code, or do you want to integrate gen AI features at work for coworkers to use, or for customers/users? It's still early days, just like when NodeJS started gaining a lot of traction and people were making fun of leftpad. |
|
| |
| ▲ | topaz0 3 days ago | parent | prev | next [-] | | And yet, at some point most web developers will have picked it up after the "raw html" era -- that point has probably come, even. | |
| ▲ | apsurd 3 days ago | parent | prev [-] | | The web/HTML is a great analogy. I too am in no rush to be hyper effective with LLMs. In fact I want to deliberately slow down, because AI-native coding is so exhausting. That said, your point about the leverage of learning HTML and the web in the early days compared to now rings true. Pre-compiled isomorphic TypeScript apps are completely unrecognizable from the early days of index.html |
|
|
| ▲ | garyfirestorm 3 days ago | parent | prev | next [-] |
Counterpoint: it's always advantageous to learn and grow as things evolve. This way you have an active role and maybe a say in how it will evolve. And maybe you could contribute towards that evolution (despite its poor execution, OpenClaw showed what LLMs could be doing). > There are 16,000 new lives being born every hour. They're all starting with a fairly blank slate. Not long ago we were ridiculing Gen Z for not knowing why the save icon looks like a floppy disk. Do you want to feel like that in the next 5-10 years? |
| |
| ▲ | td2 3 days ago | parent | next [-] | | The counterpoint is that you will learn jank. If you started in early webdev, you learned lots of tricks that don't benefit a modern webdev: e.g. SOAP, long polling, the JSONP workaround, and so on. Many of the LLM frameworks will be seen similarly.
MCP is already kinda heading in the obsolete direction imo, as skills took over | | |
| ▲ | skydhash 3 days ago | parent [-] | | I've learned a lot of stuff that doesn't really benefit me right now, but now and then I encounter a situation that makes me happy that I did. It may never happen for some, but at the time, I was probably happy learning it. But there's some stuff that I don't bother exploring in depth, because my time is finite and I don't really need it. And any LLM tooling is probably easier than a random JS framework. Vim's documentation is probably longer than Cursor's. |
| |
| ▲ | marcd35 3 days ago | parent | prev [-] | | I agree with this point. There is absolutely a 'left behind' gap that is under-explored. My last job was as a cable technician - making house calls to fix wifi, satellite TV, phone issues. Mostly elderly residents. The majority of them were computer and phone illiterate. They were slow adopters of fast-moving technology, and many of them did not know how to operate their devices after we (UI/UX/hardware/software engineer 'we') removed them. I wonder if this has also contributed to the elderly loneliness problem - sure, it's probably mostly related to physical companionship, acceptance of aging, etc., but the world that they knew (in general, and the technological world they grew up in) is no longer recognizable. | | |
| ▲ | skydhash 3 days ago | parent [-] | | But maybe it doesn't matter that much to them. I don't know how to skin a rabbit, and that knowledge could be handy in some situations, but I don't see myself being in that situation other than accidentally. My mother has a phone, but only uses it to make calls. She has never needed a computer, even though I spent my teenage years glued to one. But I have like 1 percent of a skill in cooking. | | |
| ▲ | ryandrake 3 days ago | parent [-] | | Exactly. We look at older people and think “oh, look at those poor souls. They don’t know X and Y technologies and they keep doing things the old way! They must feel so left behind.” Nothing is further from the truth. My whole life I’ve lived in neighborhoods full of people 20+ years older than me and not once did I have a neighbor or friend who I thought was overwhelmed with the pace of modern life and upset about how different the world was becoming from what they are used to. This is a trope. People are resilient and adaptive, and as you get older you learn how to embrace new things that actually help and reject new things that don’t. As I get older, I find myself just not caring about a lot of things that younger people care about and not doing a lot of things they do. I don’t use social media, I still pay for things with cash and checks, I don’t understand or care about the Kardashians or reality tv. My phone is 8 years old. I listen to prog rock and new wave music, and I probably couldn’t name a single popular musical performer today (besides Taylor Swift because I have a daughter). I don’t feel even slightly “left behind” or “obsolete.” |
|
|
|
|
| ▲ | gradus_ad 3 days ago | parent | prev | next [-] |
| But it's so easy to try something like Claude Code. It's not like you need to get up to speed. There is no learning curve*, that's the nature of AI. Just start using it and you'll see why it has attracted so much hype. *I should qualify that "using" CC in the strict sense has no learning curve, but really getting the most out of it may take some time as you see its limitations. But it's not learning tech in the traditional sense. |
| |
| ▲ | we_have_options 3 days ago | parent | next [-] | | I've been playing with it on weekends for the last few months. 9 out of 10 projects, it's failed. Projects as simple as "set up a tmux/vim binding so I can write prompts in one pane and run claude in the other". Fails. I've been coding for over 20 years. If there is no learning curve, why doesn't it work for me? You can't say I'm not using it right, because if that was true, then all I need to do is climb the learning curve to fix that, the curve that you say doesn't exist. | | |
| ▲ | 6DM 3 days ago | parent | next [-] | | It doesn't work if you're treating it like a peer engineer. It only works if you treat it like you're a customer with no concern for how it works behind the scenes. That's what's been asked of me in my last two jobs: vibe code it, and if it's bad just throw it away and regenerate it, because it's "cheap". The only thing that matters is that you can quickly generate visible changes and ship them to market. Out of frustration I asked upper management (at my current job): if you want me to use AI like that, I'll do it. But when it inevitably fails, who is responsible? If there's no risk to me, I will AI-generate everything starting today, but if I have to take on the risk, I won't be able to do this. Their response was that AI generates the code, and I'm responsible for reviewing it and making sure it's risk free. I can see that they're already looking for contractors (with no skin in the game) who are more than willing to run the AI agents and ship vibe code, so I'm at a loss on what to do. |
| ▲ | hombre_fatal 3 days ago | parent | prev | next [-] | | I've used Claude Code to do everything from vibe-coding personal apps, including a terminal on top of libghostty, to building my perfect desktop environment on NixOS (I'd never used Nix until then). I'm not sure why it isn't working for you. Maybe your expectation is a perfect one-shot or else it has zero value, and nothing in between? But my advice is to switch gears and see the "plan file" as the deliverable that you're polishing, rather than the implementation. It's planning and research and specification that tend to be the hard part, not yoloing solutions live to see if they'll work -- we do the latter all the time to avoid 10 minutes of planning. So, try brainstorming the issue with Claude Code, talk it through so it's on the same page as you, ensure it's done research (web search, docs) to weigh the best solutions, and then enter plan mode so it generates a markdown plan file. From there you can read, review, and tweak the plan file. Or have it implement it. Or you implement it. But the idea is that an LLM is useful at this intermediate planning stage without tacking on additional responsibilities. I think by "no learning curve" they are referring to how you can get value from it without doing the research you'd need to use a conventional tool. But there is a learning curve to getting better results. I learned my plan-file workflow just from Claude Code having "Plan Mode" that spits out a plan file, and it was obvious to me from there, but there are people who don't know it exists or what the value of it is, yet it's the centerpiece of my workflow. I also think it's the right way to use AI: the plan/prompt is the thing you're building and polishing, not skipping past it to an underspecified implementation. Because once you're done with the plan, the implementation is trivial and repeatable from that plan, even if you wanted to do it yourself. I'm way past the point of arguing anything here, just trying to help. | | |
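For what it's worth, a plan file produced by that kind of workflow tends to look something like the skeleton below. The headings and contents here are made up for illustration (the feature, `ReportsView`, etc. are hypothetical); Claude Code doesn't mandate any particular format.

```markdown
# Plan: add CSV export to the reports page

## Context / research
- Report rendering lives in ReportsView; the rows are already in memory.
- Compared streaming vs. buffered CSV generation; buffered is fine at our sizes.

## Approach
1. Add a serializer that maps report rows to CSV.
2. Wire an "Export" button that triggers the download.

## Out of scope
- PDF export, scheduled email delivery.

## Open questions
- Escaping rules for embedded commas/quotes (use a library?).
```

The point is that each section is something you can review and correct before any code is written.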
| ▲ | mat_b 3 days ago | parent | next [-] | | > So, try brainstorming the issue with Claude Code, talk it through so it's on the same page as you, ensure it's done research (web search, docs) to weigh the best solutions, and then enter plan mode so it generates a markdown plan file.
From there you can read, review, and tweak the plan file. This is exactly the workflow that works very well for me in Cursor (although I don't use their Plan Mode - I do my version of it). If you know the codebase well, this can increase your speed/productivity quite a bit. Not trying to convince naysayers of this; their minds are already made up. Just wanted to chime in that this workflow does actually work very well (been using it for over 6 months). |
| ▲ | aquariusDue 3 days ago | parent | prev [-] | | The first time I saw something like this in action was in a video about agentic blabla features in VS Code on the official VS Code YouTube channel. Pretty much write a complete and detailed specification, fire away, and hope for the best. The workflow kinda clicked for me then, but I still have a hard time adjusting to this potential new reality where it slowly won't make sense to generally write code "by hand", and you only intervene to make pinpoint changes after reviewing a lot of code. I've been reading a book about the history of math, and at some points the author notes how fields undergo a radical change due to some discovery (e.g. quantum theory in physics), and the practitioners in that field inevitably go through a transformation where the generations before and after can't really relate to each other anymore. I'm paraphrasing quite a bit though, so I'll just recommend people check out the book if they're interested: The History of Mathematics by Jacqueline Stedall And the aforementioned VS Code video, if I remember correctly: https://youtu.be/dutyOc_cAEU?si=ulK3MaYN7_CPO76k | | |
| ▲ | hombre_fatal 2 days ago | parent | next [-] | | I haven't written code by hand since December, when Claude Opus 4.5 came out. It was clear that the inflection point had arrived where it's at least as good as I am at implementing a plan. But not only that: it had good ideas, like making impossible states impossible with a smart union type, without being told and without me deeply modeling the domain in my head to derive a system invariant I could encode like that. It was depressing watching all of this unfold over the last few years, but now I'm taking on more projects and delivering more features/value than ever before. That was the reason I got into software anyways, to make good software that people like to use. > the generations before and after can't really relate to each other anymore Yeah, good point. In some ways it's already crazy to me that we used to write code by hand. Especially all the chore work, like migrating/refactoring, that's trivial for even a dumb LLM to do. It kinda feels like a liability now when I'm writing code, kinda like how it feels when the syntax highlighting or type-checker breaks in the editor and isn't giving you live feedback, so you're surprised when it compiles and runs on the first try. I remember having a hard time imagining what it was like for my dad to stub out his software program on paper until his scheduled appointment with the university punch card machine. And then being happy that I could just click a Run button in my editor to run my program. |
| ▲ | 2 days ago | parent | prev [-] | | [deleted] |
|
| |
| ▲ | gradus_ad 3 days ago | parent | prev | next [-] | | Did it not work after the first try and you gave up? Did it not produce any usable code that you could hand tweak or build off of? I want to understand your definition of "failed" here. | | |
| ▲ | laserlight 3 days ago | parent [-] | | What's your definition of "working"? Do you consider it working, when you have to put more effort into prompting back-and-forth than writing it the old way? | | |
| ▲ | whateveracct 2 days ago | parent [-] | | I honestly think the people who love Claude were not super proficient coders. That's the only thing I can think of to explain why writing gobs of English and then code reviewing in a loop could be easier than just coding yourself. |
|
| |
| ▲ | bigstrat2003 3 days ago | parent | prev | next [-] | | > If there is no learning curve, why doesn't it work for me? Because LLMs are not actually good at programming, despite the hype. | | |
| ▲ | whateveracct 2 days ago | parent [-] | | I think they are better than a lot of people though, which is where their fans come from. |
| |
| ▲ | skybrian 2 days ago | parent | prev | next [-] | | There definitely is a learning curve. Not sure what you're doing. Are you trying to one-shot it? I think a decent place to start is: given a small web app, give it a bug report and ask it what causes the bug. | |
| ▲ | Kiro 3 days ago | parent | prev [-] | | Failing 9 out of 10 times for such simple tasks is indeed puzzling. I have no idea what you're doing to achieve that but I'm impressed. |
| |
| ▲ | JohnFen 3 days ago | parent | prev | next [-] | | > There is no learning curve*, that's the nature of AI. There isn't? Then why is it that whenever devs have tried it and not achieved useful results, they're told that they just haven't learned how to use it right? | | |
| ▲ | laserlight 3 days ago | parent | next [-] | | “You're holding it wrong.” is the most common response I get, when I talk about problems I had with LLM-assisted coding. | | |
| ▲ | leptons 3 days ago | parent [-] | | You aren't holding it wrong; the truth is AI is a mixed bag, leaning towards a liability. If people really counted all the time they spend coddling the AI, trying again, then trying again and again and again to get a useful output, then having to clean up that output, they would see that the supposed efficiency gains are near zero, if not negative. The only people it really helps are people who were not good at coding to begin with, and they will be the ones producing the absolute worst slop, because they don't know the difference between good and bad code. AI is constantly trying to introduce bugs into my codebase, and I see it happening in real time with AI code completion. So no, you aren't "holding it wrong"; the other people are no different than the crypto bros who were pushing blockchain into everything and hoping it would stick. | | |
| ▲ | sarchertech 3 days ago | parent | next [-] | | Imagine you are a JS dev and GitHub comes out with a new search feature that's really good. It lets you use natural language to find open source projects really easily. So whenever you have a new project, you check to see if something similar exists, and instead of starting from scratch you start from that and tweak it to fit what you want to do. If you were the type of person who makes tiny toy apps, or you worked on lots of small, already-been-done stuff, you'd love doing this. It would speed you up so much. But if you worked on a big application with millions of users that had evolved into its own snowflake through time and use, you'd get very little from it. I think I probably could benefit from looking at existing open source solutions and modifying them a lot of the time, and I kinda started out doing that at first. But eventually you realize that even though starting with something can save you time, it can also cost you a ton of time, so it's frequently a wash or a net negative. | | |
| ▲ | leptons 2 days ago | parent [-] | | Nothing you described in this comment is only achievable with "AI". I've been able to search for and find open source projects since forever, and fork them and extend them, long before an LLM was a glimmer in Sam Altman's beady eye. | | |
| ▲ | sarchertech 2 days ago | parent [-] | | No, it's not at all; AI just makes finding it faster. But that's my point: AI isn't that different from what you could already do before. Most of us didn't do things that way before, so maybe programming like that is just a bad idea. |
|
| |
| ▲ | laserlight 3 days ago | parent | prev [-] | | > If people really counted [...] Exactly. I counted and reported my results in a previous thread [0]. [0] https://news.ycombinator.com/item?id=47272913 | | |
| ▲ | leptons 2 days ago | parent [-] | | I've started "racing" Claude when I have a somewhat simple task that I think it should be able to handle. I spend a few minutes writing out detailed instructions, which I already knew because I had to do initial discovery around the problem domain to understand what the goal was supposed to be. It took a while to be thorough enough writing it down for Claude, which is time I would not have needed to spend if I had just started writing the code myself. I'm sure the AI bros aren't considering the time it takes just to write down instructions to Claude versus just starting to code. So then Claude starts dissecting the instructions. I start writing some code. After a while Claude is done, and I've written about two or three dozen lines of code. Claude is way off, so I have to think about why and then write more instructions for it to follow. Then I continue coding. After a while Claude is done, and I've written about three dozen more lines of code. Claude is closer this time, but still not right. Round 3 of thinking about how Claude got it wrong and what to tell it to do now. Then I continue coding. After a while Claude is done (yet again), and I've written a lot more code and tested it and it's working as needed. The output Claude came up with is just a little bit off, so I have it rework the output a little bit and tell it to run again. I downloaded the resulting code Claude wrote and compared it to my solution, and I will take my solution every single time. Claude wrote a bloated monstrosity. This is my experience with "AI", and I'm honestly not loving it. It does sometimes save me time converting code from one language to another (when it works), or implementing simple things based on existing code (when it works), and a few other tasks (when it works), but overall I end up asking myself over and over, "Is this really how developers want the future to be?" 
I'm skeptical that these LLM-based coding tools will ever get good enough to not make me feel ill about wasting my time typing instructions to them to produce code that is bloated and mostly not reusable. | | |
| ▲ | whateveracct 2 days ago | parent | next [-] | | I've done the racing thing too. Or I just reject its suggestions, do it better, and have it review and tell me why I did better. And writing those instructions when I race it... it's more cognitive effort for me than coding! | |
| ▲ | oro44 2 days ago | parent | prev [-] | | Interesting stuff. Thx for sharing! |
|
|
|
| |
| ▲ | bigstrat2003 3 days ago | parent | prev [-] | | Because the AI bros hyping it up are incapable of admitting that the hype is overblown. That would mean they have nothing to sell you, so of course they aren't going to say that. |
| |
| ▲ | artine 3 days ago | parent | prev | next [-] | | I gave Claude Code with Sonnet 4.6* a try a few weeks ago. I pointed it at a hobby project with less than 1kloc of C (about 26,500 characters) across ~10 modules and asked it to summarize what the project does. It used about $0.50 worth of tokens and gave a summary that was part spot on and part hallucinated. I then asked it how to solve a simple bug with an easy solution. It identified the right place to make the fix but its entire suggested solution was a one-liner invoking a hallucinated library method. I use LLMs pretty regularly, so I'm familiar with the kinds of tasks they work well on and where they fall flat. I'm sure I could get at least some utility from Claude Code if I had an unlimited budget, but the voracious appetite for tokens even on a trivially small project -- combined with a worse answer than a curated-context chatbot prompt -- makes its value proposition very dubious. For now, at least. * I considered trying Opus, but the fundamental issue of it eating through tokens meant, for me, that even if it worked much better, the cost would dramatically outweigh the benefit. | |
| ▲ | adriand 3 days ago | parent | prev | next [-] | | I think working with the technology gives you powerful intuitions that improve your skill and lead to better outcomes, but you don't really notice that that's what's happening. Personally speaking - and I suspect this is true of most people in general - I have very poor recollections of what it was like to be really bad/new at things that I am now very skilled at. If you ever try teaching someone something from the absolute ground up, you will quickly realize that a huge number of things you now believe are "standard assumptions" or "obvious" or "intuitive" are actually the result of a lot of learning you forgot you did. | |
| ▲ | ErroneousBosh 3 days ago | parent | prev | next [-] | | I tried it. Either I don't know how to use it, or it just doesn't work. | |
| ▲ | xigoi 3 days ago | parent | prev | next [-] | | It’s only “easy to try” if you’re okay with using proprietary software and having to rely on an evil megacorporation that engages in cyber-warfare. | | |
| ▲ | archagon 3 days ago | parent [-] | | Not to mention sucking on a monthly subscription tit that will go up in price by an order of magnitude once the market is captured. |
| |
| ▲ | hombre_fatal 3 days ago | parent | prev | next [-] | | I think it comes down to your own personality, appetite, and also how external factors like hype might impact you (resentment, annoyance, curiosity, excitement). | |
| ▲ | nDRDY 3 days ago | parent | prev | next [-] | | Then what is the point? If what I'm doing can be done by Claude, as operated by someone who "doesn't need to get up to speed", then I really need to look at another career. | |
| ▲ | nunez 3 days ago | parent | prev [-] | | There's no learning curve if you don't care about token spend. |
|
|
| ▲ | isk517 3 days ago | parent | prev | next [-] |
I've let tech pass me by many times, and the tech that passed me, which I was never in a position to use, got replaced by the next big innovation anyway. I've found that you can climb aboard the train at any time, since everything new is a lot easier to get started on than learning C and having to manually allocate memory. |
|
| ▲ | randusername 3 days ago | parent | prev | next [-] |
| Counterpoint: Mistakes are less costly in the beginning and the knowledge gained from them is more valuable. Over-sharing on social media. Secret / IP leaks with LLMs. That kind of thing. I agree: FOMO is an all-in mindset. Author admits to dabbling out of curiosity and realizing the time is not right for him personally. I think that's a strong call. |
|
| ▲ | JKCalhoun 2 days ago | parent | prev | next [-] |
Some might be getting into AI in order to sell AI. As OpenClaw has shown, there is opportunity in this space to be a trailblazer. There are no doubt companies that are not tech-aligned that someone could help set up local LLMs for… For me though, I'm dabbling in AI because it fascinates me. Bitcoin was like, I don't know, Herbalife? It was never interesting to me at all. |
|
| ▲ | wslh 3 days ago | parent | prev | next [-] |
We've seen multiple ideas/products get quickly absorbed into frontier models, OSS, or well-funded startups. The cycle from "interesting idea" to "commoditized feature" is getting very short. Personally, I saw three of these in the last year. And even if your product is genuinely great, distribution is becoming the real bottleneck. Discovery via prompting or search is limited, and paid acquisition is increasingly expensive. One alternative is to loop between build and kill, letting usage emerge organically rather than trying to force distribution. |
|
| ▲ | ozim 3 days ago | parent | prev | next [-] |
I think it was challenging 2 or 3 years ago. I took the plunge a year ago and it was already quite easy to use mainstream tools. I could run some local models with Ollama by just installing it. I could use coding assistance in VSCode. Connecting over an HTTP API to use AI within applications you build was also easy, for local models or the cloud. There are loads of BS tools out there of course, but I don’t use that many tools. |
|
| ▲ | abustamam 3 days ago | parent | prev | next [-] |
Broadly speaking I agree. But the reality for many SWEs is that if they don't learn new AI tools they'll get let go. It's use AI or be replaced by AI (or, more accurately, be replaced by someone using AI) for many folks. I think it's a luxury to be able to ignore a trend like AI. Crypto was fine to ignore because it didn't really replace anyone, but AI is a different beast |
| |
| ▲ | gamerdonkey 3 days ago | parent [-] | | > It's use AI or be replaced by AI I think they want to sell that perception, but the biggest thing the tech execs want in their SWEs is fear. Fear that makes us stay in our jobs, even when raises and bonuses stagnate, because the market is scary with all the "AI layoffs" (which have largely been regular downsizing with the AI label slapped on). Fear that makes us use the LLMs and then put in extra hours when we don't see the 10x productivity gains that they expect. Fear that makes us erode our own skills and become dependent on these gatekeepers to maintain even a base level of productivity. So much of the advertising and discussion around AI is based around fear. It's inevitable. It will take your job. Render you useless. It will render humanity useless. You better get in now, or you'll both lose your job and could end up in a virtual hell (https://en.wikipedia.org/wiki/Roko%27s_basilisk). | | |
| ▲ | abustamam 3 days ago | parent [-] | | I think you're right that it is fear. But it's kind of a self-fulfilling fear. Teams are requiring employees to use AI, probably because of this fear. Maybe it isn't directly taking my job, but it is necessary for me to use AI to keep my job. | | |
| ▲ | gamerdonkey 2 days ago | parent | next [-] | | I agree that it's self-fulfilling. I just feel that it's being used more as a tool for compliance than for direct productivity. | | |
| ▲ | abustamam 2 days ago | parent [-] | | That certainly could be true for some teams. Personally I think AI has directly improved my productivity. There are certainly sharp edges and we definitely shot ourselves in the foot more than a few times in ways we wouldn't have without AI, but by and large we've built an internal app where no human wrote a single line of code. I don't think we'd have been able to build it in 2 months without AI. We could have gone faster, but the bottleneck was product and user testing, not code. |
| |
| ▲ | sarchertech 3 days ago | parent | prev [-] | | You can always hang your shingle and work for clients directly. | | |
| ▲ | abustamam 2 days ago | parent [-] | | Sure, in the same way I can always quit being an engineer and become a farmer. It can work for some, but not everyone. FWIW I do client work on the side. Full time client work has always been more draining than just having a regular job IME. Maybe I just can't find the right clients, but that's not something I have to worry about when I work for a company. | | |
| ▲ | sarchertech 2 days ago | parent [-] | | I wasn’t talking to everyone; I was talking to you. And becoming a contractor is a lot more viable than becoming a farmer. Client work is hard, but you have to decide if the freedom to work the way you want is worth it. | |
| ▲ | abustamam 2 days ago | parent [-] | | Oh. Well personally I don't mind using AI, and I use AI when I do client work as well (they know and some clients who use AI apps like lovable expect it). But I know not everyone is in my shoes which is where my comment is coming from. |
|
|
|
|
|
|
|
| ▲ | brandonmenc 3 days ago | parent | prev | next [-] |
| This is true in my experience. I waited until it seemed good enough to use without having to spend most of my time keeping up with the latest magical incantations. Now I have multiple Claude instances running and producing almost all of my commits at work. Yes, with a lot of time spent planning and validating. |
|
| ▲ | spwa4 3 days ago | parent | prev | next [-] |
| This is the central thing that changes in a person with age. When you are born, the only thing you do is pick up new things. Literally nothing else. When you're young, picking up new things is how you improve your social position. It's what you do to even be talked to in the first place. It's what you do to get a girl/boyfriend, or be the best student in class, or to be the best (or worst even) employee at your first job ... Once you have a good social position, or at least one you're happy with, you stop doing this, and you grow ever more irritated at others doing it ... because it's your social position that they're coming after. And they're younger, more motivated and hungrier. More than that, a decent chunk of these people want a better social position, even if that means taking yours. |
| |
| ▲ | sarchertech 3 days ago | parent [-] | | Sure, but this is mostly irrelevant because software dev is one of the youngest fields. Something like 70% of professional devs are under 35 depending on which survey you look at. And the number of CS graduates exploded over the last decade. The only reason the numbers aren’t even more tilted is because people stopped hiring juniors 2 years ago. But they’re out there, and if there’s a new technology around that makes them vastly more productive than the seniors today, there’s nothing stopping companies from hiring them. |
|
|
| ▲ | logicchains 3 days ago | parent | prev | next [-] |
Even if it reaches the end state of AGI, i.e. AI that's smarter and more capable than 90% of humans, there'll still be a huge learning curve to using it well, as anyone who's tried managing very smart humans can attest. |
|
| ▲ | postalcoder 3 days ago | parent | prev | next [-] |
| The thing is, this post is hitting a straw man. ngmi culture was deeply toxic and pervasive in crypto. I think the people who are really into LLMs are having a blast. |
| |
| ▲ | stavros 3 days ago | parent [-] | | I'm definitely having a blast, but I agree with the author. You're not going to get left behind, the "getting left behind" rhetoric was just cryptocurrency pump-and-dumpers. It's fine to wait and not engage if you don't want to. | | |
| ▲ | postalcoder 3 days ago | parent | next [-] | | I agree with you, which is why I think it's a straw man. How many real devs are actually banging the "you're getting left behind!" drums? | | |
| ▲ | mekael 3 days ago | parent | next [-] | | I had a heavy AI user on my team say that “those who learn how to use the tools won't get fired, those who don't are gone”. I used it to generate a bunch of CFN and it worked fine from an example and a couple-line prompt; it doesn't seem that hard to learn to me. Now reviewing the 1k lines it generated and making sure it's secure, that's going to take me longer than writing it by hand. | |
| ▲ | stavros 3 days ago | parent | next [-] | | Yeah, I think this is it. If you don't learn to use them, you'll be much slower than people who do, but also they're not really that hard to learn, so it's not super urgent. | | |
| ▲ | mekael 3 days ago | parent [-] | | I'm still confused about the things I'll be slower in though, and I'm being sincere with that confusion. If it's "boilerplate", then I haven't done enough research or picked a library which has little to none of that, or I'm not using the template(s) built into whatever framework I am using. For example, in one of the projects I'm working on, I'm using the VSA pattern. I have the list of 50 to 75 features I need to implement and what "categories" they slot into, I have all of the frameworks and libraries picked out, and I have built out "feature templates" with all of the boilerplate setup (I'm reusing these over multiple projects going forward). For each of the features all I need to do is 'ftr new {FEATURE_TYPE} {FEATURE_NAME} {OUTPUT_FOLDER}' and then plug in the domain-specific business logic. I'll most likely use Claude/Codex/Whatever to write out some of my tests, but the majority of the 'boilerplate' is already done and I'm just sorting out the pieces that matter / can't be automated. Am I missing something huge with these tools? Don't get me wrong, for doing reverse engineering they're great helpers and I've made a tonne of progress on projects that had been languishing. | |
| ▲ | stavros 3 days ago | parent [-] | | I find that can write features 5-10x faster with these tools than by hand, at a comparable level of quality (though it hasn't been long enough for me to judge what'll happen in a year). | | |
| ▲ | mekael 3 days ago | parent [-] | | Would you be able to give an example of a feature? For my example, I need to query an ancient undocumented database, pull back a pile of data, do some validations on it, and then show it to the user or pass it along to another processing step. The human part is researching the database and the data living in it, and implementing the validation(s) while talking to a business user; everything else can be templated. | |
| ▲ | stavros 3 days ago | parent [-] | | Oh yes, this is what LLMs excel at. Introspecting a database, either the schema or the live data, running a few checks to see whether all the data had the same shape (or how many different shapes it has), writing validations to catch edge cases, they do this extremely quickly and pretty accurately, whereas it would have taken me hours of trawling. Then I can look at the output and say things like "what if the data is lowercase?" or anything else I suspect they may have missed. A few rounds of these and I have a pretty good feel for the quality of the resulting checks, while taking a few minutes of my attention/tens of minutes of wallclock time to do. I have a more detailed example here: https://www.stavros.io/posts/how-i-write-software-with-llms/ I'd share all my plans but I once found that the LLM used my actual phone number as test data, so I don't share those any more, just in case. |
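To make the "same shape" check being described concrete, here is a minimal sketch of that kind of data-shape survey plus a case-sensitivity validation. The field names and the uppercase-code rule are hypothetical, purely for illustration; the LLM-generated checks discussed above would be along these lines:

```python
from collections import Counter

def row_shape(row):
    """Describe a row by its field names and value types, so
    differently-shaped rows can be grouped and counted."""
    return tuple(sorted((k, type(v).__name__) for k, v in row.items()))

def shape_report(rows):
    """Count how many rows share each shape: a quick way to see whether
    'all the data has the same shape' before writing validations."""
    return Counter(row_shape(r) for r in rows)

def validate_code(value):
    """Example edge-case check: codes are expected to be uppercase, so
    flag values that would slip through a case-sensitive lookup."""
    return isinstance(value, str) and value.isupper()

rows = [
    {"id": 1, "code": "ABC"},
    {"id": 2, "code": "abc"},                          # lowercase: flagged
    {"id": 3, "code": "DEF", "note": "extra field"},   # different shape
]

report = shape_report(rows)
print(len(report))                                          # 2 distinct shapes
print([r["id"] for r in rows if not validate_code(r["code"])])  # [2]
```

The point of the exercise is exactly what the comment says: the script is trivial, but surveying an unfamiliar table for shape drift and case anomalies by hand is the slow part.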
|
|
|
| |
| ▲ | logicchains 3 days ago | parent | prev [-] | | >Now reviewing the 1k lines it generated and making sure its secure, thats going to take me longer than writing it by hand. Then you still need to learn how to use the tools to speed up reviewing the code. | | |
| ▲ | archagon 3 days ago | parent | next [-] | | You're not actually doing engineering if you're just vibe-coding, reviewing, and testing all the way down. What the hell is that? Just a weird simulacrum of software development that will break apart in unpredictable ways. Security consultants are going to have very lucrative careers in the coming years. | |
| ▲ | mekael 3 days ago | parent | prev [-] | | If I don't have experience with the underlying framework/language/thing being modified, it makes it quite difficult to trust the actual review. In this example, I haven't worked heavily with CloudFormation, so I can't call B.S. if it leaves a database instance exposed to the wider public internet rather than in my company's private VPC. | |
| ▲ | logicchains 3 days ago | parent [-] | | You can ask the agent to check that it doesn't leave a database instance exposed to the public, and present you with proof for you to check (references to the code and the relevant Cloudformation documentation). Then repeat this for all the things you'd normally want to check for in a code review. | | |
| ▲ | mekael 3 days ago | parent [-] | | In that case I'm just moving the reading of the documentation from reading it as I'm writing the YAML to when I'm doing a code review. Not saying it isn't helpful to have a pair researcher; it just seems like I'm moving things around. |
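For reference, the specific exposure check discussed in this subthread can itself be scripted. Below is a minimal sketch that scans a CloudFormation template (JSON form) for security-group ingress rules open to the whole internet. The resource names are hypothetical, and a real review would lean on a dedicated linter such as cfn-lint rather than a hand-rolled scan:

```python
import json

def open_ingress_rules(template):
    """Scan a parsed CloudFormation template for security-group ingress
    rules that allow traffic from anywhere (0.0.0.0/0)."""
    findings = []
    for name, res in template.get("Resources", {}).items():
        if res.get("Type") != "AWS::EC2::SecurityGroup":
            continue
        for rule in res.get("Properties", {}).get("SecurityGroupIngress", []):
            if rule.get("CidrIp") == "0.0.0.0/0":
                findings.append((name, rule.get("FromPort")))
    return findings

# Hypothetical template with a database port open to the internet.
template = json.loads("""
{
  "Resources": {
    "DbSecurityGroup": {
      "Type": "AWS::EC2::SecurityGroup",
      "Properties": {
        "SecurityGroupIngress": [
          {"IpProtocol": "tcp", "FromPort": 5432, "ToPort": 5432,
           "CidrIp": "0.0.0.0/0"}
        ]
      }
    }
  }
}
""")

print(open_ingress_rules(template))  # [('DbSecurityGroup', 5432)]
```

This doesn't settle the debate above; it only shows that the "is the database exposed?" question is mechanically checkable, which is exactly the kind of verification step both commenters are weighing against reading the documentation directly.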
|
|
|
| |
| ▲ | apsurd 3 days ago | parent | prev | next [-] | | It can be implicit though. The LLM person having a blast is compelled to push everyone to see what they see. If they have a leadership role at their company, then the getting-left-behind drum does get banged in the form of "AI native company transformation" initiatives. | |
| ▲ | bigstrat2003 3 days ago | parent | prev | next [-] | | I have personally heard people say this at work. It's not a strawman, there really is a message of "you'll be left behind" out there. | |
| ▲ | SpicyLemonZest 3 days ago | parent | prev | next [-] | | Lots, and not just online. I run into them regularly in my office, and so do my friends and family in tech. One of my coworkers is now spending all his time writing SKILLs, he's convinced that we'll never need to solve operational issues again if we have the right SKILLs. | |
| ▲ | whateveracct 2 days ago | parent | prev | next [-] | | > How many real devs are actually banging the "you're getting left behind!" drums? My CEO/CTO :) | |
| ▲ | bena 3 days ago | parent | prev | next [-] | | You're using a no-true-Scotsman to accuse the author of a strawman. Consider that. | |
| ▲ | bleuarff 3 days ago | parent | prev | next [-] | | I don't know for devs, but that's the message we get from upper management. | |
| ▲ | mrguyorama 3 days ago | parent | prev | next [-] | | Devs don't make hiring and firing decisions. | |
| ▲ | Fraterkes 3 days ago | parent | prev | next [-] | | I think FOMO-aligned ai stuff is fairly common on HN, doesn't mean it's always deliberately manipulative. | |
| ▲ | foolserrandboy 3 days ago | parent | prev [-] | | The executives are, not the devs. |
| |
| ▲ | plagiarist 3 days ago | parent | prev [-] | | I'm not worried about being left behind technologically, but I am worried about being left behind after every company on the planet decides we need N years experience in AI to be employable. | | |
| ▲ | stavros 3 days ago | parent [-] | | I already have 30 years of experience in LLMs, if you believe my CV, so I'm not worried. |
|
|
|
|
| ▲ | theptip 3 days ago | parent | prev | next [-] |
| Ok, here is the risk of being left behind - if we have moderately fast take-off, the 1-2 years required to upskill in AI might mean you find yourself unemployable when your role gets axed. I don’t think folks are taking seriously the possible worlds at the P(0.25) tail of likelihood. You do not get to pick up this stuff “on a timescale of my choosing”, in the worlds where the capability exponential keeps going for another 5-10 years. I’m sure the author simply doesn’t buy that premise, but IMO it’s poor epistemics to refuse to even engage with the very obvious open question of why this time might be different. |
| |
| ▲ | msabalau 3 days ago | parent | next [-] | | But they have engaged with it, and made an assessment about its current utility. We have no reason to believe that they won't keep an eye on this. Little to nothing about AI tools so far suggests that one can't just as easily pick up the skills later. Tools that will get "exponentially better" will almost certainly be unrecognizable to someone desperately engaging with them now, for no other reason than the sake of "having 1-2 years of experience". Someone might reasonably choose to bet on the upside. That doesn't imply that everyone else ought to fearfully hedge. | |
| ▲ | SpicyLemonZest 3 days ago | parent | prev | next [-] | | I don't think there's such a thing as a "fast take-off" where human experience with 2026-era LLM coding remains economically relevant. | |
| ▲ | pianopatrick a day ago | parent | prev | next [-] | | Won't AI getting better also mean AI will be getting easier to learn and use? Feels to me like there are at least one or two more paradigm shifts coming in how AI gets used which will make current tools obsolete. As one example, I think we will eventually get GUI dashboards to manage AI agents which will be easier to use than current CLI tools. | |
| ▲ | duskdozer 3 days ago | parent | prev | next [-] | | Eh, I'm not super worried. After all, every six months or so, the latest model changes everything and the former model was complete garbage. It's not just a new model—it's a new paradigm shifting the landscape of agentic development. | |
| ▲ | fatata123 3 days ago | parent | prev [-] | | [dead] |
|
|
| ▲ | sirspacey 2 days ago | parent | prev | next [-] |
I thought so too. But now we are onboarding project managers in non-tech fields to Claude Code and they are crushing it. On a terminal. VS Code. The first thirty minutes is the hard part; after that the feedback loop kicks in. They ask for what they really want, and they get it. |
|
| ▲ | agentultra 3 days ago | parent | prev | next [-] |
| One area where it may end up leaving you behind is if you’re looking for a job right now. There are a lot of companies putting vibe coding in their job requirements. The more companies that do this the harder it will be to find employment if you’re not adopting this tool/workflow. |
|
| ▲ | m463 3 days ago | parent | prev | next [-] |
I don't know, I kind of wonder if this applies to all technologies equally. For example (dodging the whole full-self-driving controversy), Tesla cars have had advanced safety features like traffic-aware cruise control and autosteer for over a decade. So, buying into safety early... For other technologies, there's sort of the rugpull effect. The people who get in early enjoy something with little drama vs the late adopters. Ask people who bought into Sonos early vs late; there are probably more examples of this. So: getting the technology the founders envisioned, vs later enshittified versions. |
|
| ▲ | fantasizr 3 days ago | parent | prev | next [-] |
Somehow the AI bros are saying creating .md files is the real ingenuity, and couldn't be learned in, say, half a day. There's absolutely no rush to keep up with the latest code-producing tools, especially when they're all "pay to play". |
|
| ▲ | casey2 3 days ago | parent | prev [-] |
No. It just assumes there is no utility in the underlying tech; someone who believes vaccines don't work could make the same argument. Most people trust Morgan Stanley when it comes to financial instruments more than some bozo on the internet. You do have to drag stubborn people, kicking and screaming, into the future or they will continue using old tech. The article is framed in the past tense, "someone tried", "the crypto grift was", as if it's not currently swallowing the world. I guess he is so maximally sensible that he self-assesses faster than MS and realizes bitcoin just isn't for him every time. He has a strangely hyper-specific definition of utility and productivity; things like "wrote my MSc" and "had fun" don't count. |