stack_framer | 4 hours ago
> wait six months.

I mourn having to repeatedly hear this never-quite-true promise that an amazing future of perfect code from agentic whatevers will come to fruition, and that it's still just six months away. "Oh yes, we know we said it was coming six, twelve, and eighteen months ago, but this time we pinky swear it's just six months away!"

I remember when I first got access to the internet. It was revolutionary. I wanted to be online all the time, playing games, chatting with friends, and discovering new things. It shaped my desire to study computer science and learn to develop software! I could see and experience the value of the internet immediately. Its utility was never "six months away," and I didn't have to be compelled to use it; I was eager to use it of my own volition as often as possible.

LLM coding doesn't feel revolutionary or exciting like this. It's a mandate from the top. It's my know-nothing boss telling me to "find ways to use AI so we can move faster." It's my boss's know-nothing boss conducting Culture Amp surveys about AI usage while ignoring the feedback that 95% of Copilot's PR comments are useless noise: "The name of this unit test could be improved." It's waiting for code to be slopped onto my screen so I can go over it with a fine-toothed comb and find all the bugs, and there are always bugs.

Here's what I hope is six months away: the death of AI hype.
onion2k | 3 hours ago
This feels right when you're looking forward. The perfect AI bot is definitely not six months away; it'll take a lot longer than that to get something that doesn't get things wrong a lot of the time. That's not especially interesting or challenging, though. It's obvious.

What's much more interesting is looking back 6, 12, 18, or 24 months. 6 months ago was ChatGPT 5, 12 months ago was GPT-4.5, 18 months ago was 4o, and 24 months ago ChatGPT 3.5 (the first one) was released. If you've been following closely, you'll have seen incredible changes between each of them. Not to get to perfect, because that's not really a reasonable goal, but definite big leaps forward each time. A couple of years ago, one-shotting a basic tic-tac-toe wasn't really possible. Now you can one-shot a fairly complex web app. It won't be perfect, or even good by a lot of measures compared to human-written software, but it will work.

I think the comparison to the internet is a good one. I wrote my first website in 1997 and saw the rapid iteration of websites and browsers back then. It felt amazing, and fast. AI feels the same to me. But given that browsers still aren't good in a lot of ways, I think it's fair to say AI will take a similarly long time. That doesn't mean the innovations along the way aren't freaking cool, though.
davnicwil | 3 hours ago
Something I'm finding odd is the seemingly perpetually repeating claim that the latest thing that came out actually works, unlike the last thing that obviously didn't quite work. Then next month, of course, the latest thing becomes the last thing, and suddenly it's again obvious that it didn't quite work either. It's like running on a treadmill towards a dangling carrot: always right in front of our faces, but never actually in hand.

The tools are good and improving. They work for certain things, some of the time, with varying amounts of manual stewarding, in the hands of people who really know what they're doing. This is real. But it remains an absolutely epic leap from here to the idea that writing code per se is a skill nobody needs any more.

More broadly, I don't really understand what that could mean on a practical level, as code is just instructions for what the software should do. You can express instructions at a higher level, and tooling keeps making that more and more possible (AI and otherwise), but in the end what does it mean to abstract fully away from the detail of the instructions? It seems really clear that this can never result in software that does what you want in a precise way, rather than some probabilistic approximation that must be continually corrected.

I think the real craft of software, such as there is one, is constructing systems of deterministic logic flows to make things happen in precisely the way we want them to. Whatever happens to the tooling, or to what exactly we call code, that won't change.
cheema33 | 3 hours ago
> an amazing future of perfect code from agentic whatevers will come to fruition...

Nobody credible is promising you a perfect future. A better future, yes. If you do not see it, then know this: you have your head firmly planted in the sand and are intentionally refusing to see what is coming. You may not like it. You may not want it. But it is coming, and you will either have to adapt or become irrelevant.

Does Copilot spit out useless PR comments? 100% yes. Are there tools that are better than Copilot? 100% yes. These tools are not perfect, but even with their imperfections they are very useful. You have to learn to harness them for their strengths and build processes to address their weaknesses. And yes, all of this requires learning and experimentation. Without that, you will not get good results, and you will complain that these tools aren't good.
charcircuit | 3 hours ago
> Its utility was never "six months away,"

Six months ago is when my coding became 100% done by AI. The utility has already been there for a while.

> I didn't have to be compelled to use it; I was eager to use it of my own volition as often as possible.

The difference is that you were a kid then, with an open mind; now your worldview has settled into a fixed sense of how the world works and how things should be done.
Lerc | 3 hours ago
Can you point to the most optimistic six-month projections you have seen? I have encountered a lot of people saying it will be better in six months, and every six months it has been. I have also seen a few predictions along the lines of "in a year or two they will be able to do a job completely." I am sceptical, but I would say such claims are rare. Dario Amodei is about the only prominent voice I have encountered who puts such abilities on a very short timeframe, and he still points to more than a year out.

The practical use of AI has certainly increased a lot in the last six months. So I guess what I'm asking for is more specifics: what do you feel was claimed, by whom, and how far did they fall short? Without that supporting evidence, you could just be annoyed by the failure of claims that exist only in your imagination.
esafak | 3 hours ago
If you've only experienced MS Copilot, I invite you to try the latest models through Codex (free deals ongoing), Claude Code, or Opencode. You may be surprised, for better or worse. What kind of software do you work on?
Xenoamorphous | 3 hours ago
> LLM coding doesn't feel revolutionary or exciting like this.

Maybe you're just older.
godelski | 3 hours ago
Reminds me of another "just around the corner" promise [0].

I think it is one thing for the average person to buy into the promises, but I've yet to understand why that happens here, within our community of programmers. It is one thing for non-experts to fall for obtuse speculative claims, but it is another for experts. I'm excited for autonomous vehicles, but in 2016 it was laughable to think they were around the corner, and only 10 years later does such a feat start to look like it's actually a few years away.

Why do we only evaluate people and claims on their hits and not their misses? It just encourages people to say anything and everything, because eventually something will be right. It's 6 months away because eventually it will actually be 6 months away. But is it 6 months away because it really is, or because we want it to be?

I thought the vibe coder's motto was "I just care that it works." Honestly, I think that's the problem. Everyone cares about whether it works, and that's the primary concern of all sides of the conversation here. So is it 6 months away because it is 6 months away, or is it 6 months away because you've convinced yourself it is? You've got good reasons for believing that, you've got the evidence, but evidence for a claim is meaningless without comparing it against the evidence that counters the claim.

[0] https://en.wikipedia.org/wiki/List_of_predictions_for_autono...
guytv | 3 hours ago
You're probably not doing it right. I've been programming since 1984, and OP basically described my current role with scary precision: I mostly review the AI's code, fix the plan before it starts, and nudge it in the right direction. Each new model version needs less nudging on planning, architecture, security, all of it.

There's an upside. There's something addictive about thinking of something and having it materialize within an hour. I can run faster and farther than I ever could before. I've rediscovered that I just like building things, imagining them and watching them come alive, even if I'm not laying every brick myself anymore.

But the pace is brutal. My gut tells me this window, where we still get to meaningfully participate in the process, is short. That part is sad, and I do mourn it quite a bit.

If you think this is just hype, you're doing it wrong.
Alex-Programs | 3 hours ago
The state of the art is moving so rapidly that, yeah, Copilot by Microsoft using gpt-5-mini:low is not going to be very good. And there are many places where AI has been implemented poorly, generally by people who have the distribution to force it upon many people. There are also plenty of people who use vibe coding tools and produce utterly atrocious codebases.

That doesn't preclude the existence of effective AI tools, and of people who are good at using them.
levzettelin | 3 hours ago
Well said!