| ▲ | dudewhocodes 4 hours ago |
| > I am at the tail end of AI adoption, so I don’t expect to say anything particularly useful or novel. Are they really late? Has everyone started using agents and paying $200 subscriptions? Am I the one who's wrong here, or are these expressions of "falling behind" creating weird FOMO in the industry? EDIT: I see the usefulness of these tools; however, I can't estimate how many people use them. |
|
| ▲ | NothingAboutAny 4 hours ago | parent | next [-] |
| >Has everyone started using agents and paying $200 subscriptions? If anything in my small circle the promise is waning a bit, in that even the best models on the planet are still kinda shitty for big project work.
I work as a game dev and have found agents to only be mildly useful to do more of what I've already laid out, I only pay for the $100 annual plan with jetbrains and that's plenty.
I haven't worked at a big business in a while, but my ex-coworkers are basically the same. a friend only uses chat now because the agents were "entirely useless" for what he was doing. I'm sure someone is getting use out of them making the 10 billionth node.js express API, but not anyone I know. |
| |
| ▲ | bunderbunder 3 hours ago | parent | next [-] | | I’m using it for scripts to automate yak-shaving-type tasks. But for code that’s expected to last, folks where I work are starting to get tired of all the early-2000s-style code that solves a 15 LOC problem in 1000 lines through liberal application of enterprise development patterns. And, worse, we’re starting to notice an uptick in RCA meetings where a contributing factor was freshman errors sailing through code review, because nobody can properly digest these 2,000-line pull requests at anywhere near the pace that Claude Code can generate them. That would be fine if our value delivery rate were also higher. But it isn’t. It actually seems to be getting worse, because projects are more likely to get caught in development hell. I believe the main problem is that poorer collective understanding of generated code, combined with the apparent ease of vibecoding a replacement, leads teams to choose major rewrites over surgical fixes. For my part, this “Duke Nukem Forever as a Service” factor feels the most intractable, because it’s not a technology problem; it’s a human psychology problem. | |
| ▲ | agumonkey 3 hours ago | parent | prev [-] | | yeah, it seems the usual front/back complexity is well covered in Gemini's training corpus, and you get good-enough output |
|
|
| ▲ | rootnod3 3 hours ago | parent | prev | next [-] |
| Definitely FOMO. I have tried it once or twice and saw absolutely zero value in it. I will stick to writing the code by hand, even for the "boring" parts. If I have to sit down and review it anyway, I might as well go and write it myself. Especially considering that these $200 subscriptions are just the start, because those companies are still mostly operating at a loss. It's either going to be higher fees or ads pushed into the responses. The last thing I need is my code sprinkled with ads. |
| |
| ▲ | RobinL 3 hours ago | parent | next [-] | | > saw absolutely zero value in it At the very least, it can quickly build throwaway productivity-enhancing tools. Some examples from building a small educational game:
- I needed to record sound clips for a game. I vibe coded a webapp in <15 mins that had a record button and keyboard shortcuts to progress through the list of clips I needed, output all the audio for over 100 separate files in the folder structure and with the file names I needed, and wrote the ffmpeg script to post-process the files. - I needed JSON files for the path of each letter. Gemini 3 converted images to JSON, and then Codex built me an interactive editor to tidy up by hand the bits Gemini got wrong. The quality of the code didn't matter because all I needed was the outputs. The final games can be found here:
https://www.robinlinacre.com/letter_constellations
https://www.robinlinacre.com/bee_letters/
code: https://github.com/robinL/ | |
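The batch post-processing step described above can be sketched as a small helper that builds one ffmpeg command per clip. This is a hypothetical reconstruction, not RobinL's actual script: the folder names, file extensions, and the `silenceremove` filter settings are all assumptions.

```python
from pathlib import Path

# Hypothetical sketch: build one ffmpeg command per recorded clip,
# writing each processed file into the folder structure the game
# expects. Clip names, paths, and the audio filter are assumptions.
def ffmpeg_commands(clip_names, raw_dir="raw", out_dir="clips"):
    commands = []
    for name in clip_names:
        out_path = Path(out_dir) / f"{name}.mp3"
        commands.append([
            "ffmpeg", "-y",
            "-i", str(Path(raw_dir) / f"{name}.webm"),
            # trim leading silence below -50 dB (ffmpeg silenceremove filter)
            "-af", "silenceremove=start_periods=1:start_threshold=-50dB",
            str(out_path),
        ])
    return commands

for cmd in ffmpeg_commands(["letter_a", "letter_b"]):
    print(" ".join(cmd))
```

Each command could then be run with `subprocess.run`, or dumped to a shell script for a one-off batch job.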
| ▲ | brokencode 3 hours ago | parent | prev | next [-] | | So using something once or twice is plenty to give it a fair shake? How long did it take to learn how to use your first IDE effectively? Or git? Or basically any other tool that is the bedrock of software engineering. AI fools people into thinking it should be really easy to get good results because the interface is so natural. And it can be for simple tasks. But for more complex tasks, you need to learn how to use it well. | | |
| ▲ | kemotep 2 hours ago | parent | next [-] | | So is it strictly necessary to sign up for the $200-a-month subscription? Because every time, without fail, the free ChatGPT, Copilot, Gemini, Mistral, Deepseek, whatever chatbots do not write PowerShell faster than I do. They “type” faster than me, but they do not type out correct PowerShell. Fake modules, out-of-date module versions, fake options, fake expectations of object properties. Debugging what they output makes them a significant slowdown compared to just typing, looking up PowerShell commands manually, and using -help and Get-Help in my terminal. But again, I haven’t forked over money for the versions that cost hundreds of dollars a month. It doesn’t seem worth it, even after 3 years. Unless the paid version is 10 times smarter with significantly fewer hallucinations, the quality doesn’t seem worth the price. | | |
| ▲ | Aurornis 31 minutes ago | parent | next [-] | | > So is it strictly necessary to sign up for the 200 a month subscription? No, the $20/month plans are great for minimal use. > Because every time, without fail, the free ChatGPT, Copilot, Gemini, Mistral, Deepseek whatever chatbots, do not write PowerShell faster than I do. The exact model matters a lot. It's critical to use the best model available to avoid wasting time. The free plans generally don't give you the best model available, and if they do, they have limited thinking tokens. ChatGPT won't give you the Codex (programming) model; you have to be on the $20/month plan or a paid trial. I recommend setting it to "High" thinking. Anthropic won't give you Opus for free, and so on. You really have to use one of the paid plans or a trial if you want to see the same thing that others are seeing. | |
| ▲ | azuanrb 2 hours ago | parent | prev | next [-] | | Not necessary. I use Claude/Chatgpt ~$20 plan. Then you'll get access to the cli tools, Claude Code and Codex. With web interface, they might hallucinate because they can't verify it. With cli, it can test its own code and keep iterating on it. That's one of the main difference. | |
| ▲ | johnfn an hour ago | parent | prev [-] | | No, it's not necessary to pay $200/mo. I haven't had an issue with a hallucination in many months. Hallucinations are typically a solved problem if you can use some sort of linter / static analysis tool: you tell the agent to run your tool(s) and fix all the errors. I am not familiar with PowerShell at all, but a quick GPT query tells me that there is PSScriptAnalyzer, which might be good for this. That being said, it is possible that PowerShell is too far off the beaten path and LLMs aren't good at it. Try it again with something like TypeScript - you might change your mind. |
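The "run your tools, fix all the errors" workflow described here reduces to a small loop. This is a sketch with stand-in callables: `run_linter` and `ask_agent` are hypothetical placeholders for whatever linter (e.g. PSScriptAnalyzer, ESLint) and agent CLI you actually use, not a real API.

```python
# Hypothetical lint-then-fix loop. run_linter() returns a list of
# error strings (empty when clean); ask_agent() sends a prompt to the
# coding agent. Both are stand-ins for real tools.
def lint_fix_loop(run_linter, ask_agent, max_rounds=5):
    for _ in range(max_rounds):
        errors = run_linter()
        if not errors:
            return True  # clean run: nothing left to fix
        ask_agent("Fix these linter errors:\n" + "\n".join(errors))
    return False  # still failing after max_rounds; needs a human
```

The cap on rounds matters: without it, an agent that keeps reintroducing the same error will loop forever.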
| |
| ▲ | drw85 2 hours ago | parent | prev [-] | | It can also backfire and sometimes give you complete made-up nonsense,
or waste your whole day moving in circles around a problem. |
| |
| ▲ | kibwen 3 hours ago | parent | prev [-] | | Good news, if you upgrade to our $300 plan you can avoid all ads, which will instead be injected into the code that you ship to your users. |
|
|
| ▲ | Aurornis 36 minutes ago | parent | prev | next [-] |
| > Are they really late? Has everyone started using agents and paying $200 subscriptions? The $20/month subscriptions go a long way if you're using the LLM as an assistant. Having a developer in the loop to direct, review, and write some of the code is much more token efficient than trying to brute force it by having the LLM try things and rewrite until it looks like what you want. If you jump to the other end of the spectrum and want to be in the loop as little as possible, the $100/$200 subscriptions start to become necessary. My primary LLM use case is as a hyper-advanced search. I send the agent off to find specific parts of a big codebase I'm looking for and summarize how it's connected. I can hit the $20/month windowed limits from time to time on big codebases, but usually it's sufficient. |
|
| ▲ | rdiddly an hour ago | parent | prev | next [-] |
| I can't figure out if I'm at the tail end of adoption or the leading edge of disillusionment. I guess being able to say where you are in relation to the herd depends on knowing where the herd is and which way it's headed. Which I don't know. All I know is, it seems to take longer to write the prompt, wait for the output, and then verify/correct the output - iteratively, mind you - than to just write the goddamn code. And said process, in addition to taking as long or longer, is also boring as fuck the entire time, and deeply annoying about half the time. Nobody is pressuring me to use it, but if this is the future, then I'm ready to change to a different career where I actually enjoy the work. |
|
| ▲ | mixermachine 3 hours ago | parent | prev | next [-] |
| Regarding the $200 subscription:
for Claude Code with Opus (and also Sonnet) you need that, yes. I had ChatGPT Codex GPT5.2 high reasoning running on my side project for multiple hours over the last few nights.
It created a server deployment for QA and PROD, plus client builds.
It waited for the builds to complete, got the logs from GitHub Actions, and fixed problems.
Only after 4 days of this (around 2-4 hours of active coding per day) did I reach the weekly limit of the ChatGPT Plus plan (23€).
Far better value so far. To be fully honest, it fucked up one Flyway script. I have to fix this myself now :D. Will write a note in the Agent.md to never alter existing scripts.
But the work otherwise was quite solid, and now my server is properly deployed.
If I switched between high reasoning for planning and medium reasoning for coding, I would get even more usage. |
| |
| ▲ | moron4hire 2 hours ago | parent [-] | | > ChatGPT Codex GPT5.2 high reasoning "... brought to you by Costco." But seriously, I can't help but think that this proliferation of massive numbers of iterations on these models and productizations of the models is an indication that their owners have no idea what they are doing with any of it. They're making variations and throwing them against the wall to see what sticks. | | |
| ▲ | Aurornis 29 minutes ago | parent [-] | | It's really not that hard. Codex = The model trained specifically for programming tasks. You want this if you're writing code. GPT5.2 = The current version. You don't have to think about this, you just use the latest. High Reasoning = A setting you select for balancing between longer thinking time or quicker answers. It's usually set and forget. |
|
|
|
| ▲ | CurleighBraces 4 hours ago | parent | prev | next [-] |
| I've paid, but I am usually quick to adopt/trial things like this. I think for me it's a case of fear of being left behind rather than missing out. I've been a developer for over 20 years, and the last six months have blown me away with how different everything feels. This isn't like jQuery hitting the scene, PHP going OO, or one of the many "this is a game changer" experiences I've had in my career before. This is something else entirely. |
| |
| ▲ | rootnod3 3 hours ago | parent | next [-] | | Does it just feel faster, or are you actually satisfied with the code that is being churned out? And what about the long-term prospects of maintaining said code? | |
| ▲ | CurleighBraces 3 hours ago | parent | next [-] | | Let's put it this way: I don't think AI will take my job/career away until company owners are also prepared to let it handle being on-call. I'm still very accountable for the code produced. I basically have two modes. 1. "Snipe mode": I need to solve problem X, so I fire up my IDE, start Codex, and begin prompting to find the bug fix. Most of the time I have enough domain context about the code that once it's found and fixed the issue, it's trivial for me to confirm that it's good code, and I ship it. I can be sniping several targets at any one time. Most of my day-to-day work is in snipe mode. 2. "Feature mode": This is where I get agents to build features/apps. I've not used this mode in anger for anything other than toy/side projects, and I would not be happy about the long-term prospects of maintaining anything I've produced. It's stupidly, stupidly fun/addictive and, yes, satisfying! :) I rebuilt a game that I used to play when I was 11, which still had a small community of people actively wanting to play it, entirely by vibe coding. It works, it's live, and honestly I've had some of the most rewarding feedback of my career from complete strangers for making it! I've also built numerous tools for myself and my kids that I'd never have had time to build before, and now I can. Again, the level of reward for building apps that my kids (and their friends) are using is very different from anything I've experienced career-wise. | |
| ▲ | jannyfer 3 hours ago | parent | next [-] | | You must share that game. I don’t even know what it is and I want to play it! | | | |
| ▲ | esafak 3 hours ago | parent | prev [-] | | If your job is going to be reduced to ops, it's a different job. | | |
| ▲ | CurleighBraces 3 hours ago | parent [-] | | Ah, sorry, that wasn't the point I was trying to make. I think ultimately I've accepted that writing code is no longer a primary aspect of my job. Reading/reviewing, and being accountable for, code that something else has written very much is. |
|
| |
| ▲ | vidarh 3 hours ago | parent | prev [-] | | I'm currently testing Claude Code for a project where it isn't coding. But the workflows built with it are now making me money after ~2 weeks, and I've previously done the same work manually, so I know the turnaround time: the turnaround for each deliverable is ~2 days with Claude, and the fastest I've ever done it manually was 21 days. (Yes, I'm being intentionally vague - there isn't much of a moat for that project given how close Claude gets with very little prompting.) There are absolutely maintainability challenges. You can't just tell these tools to build X and expect to get away with not reviewing the output and/or telling it to revise it. But if you loosen the reins and review finished output rather than sit there and metaphorically look over its shoulder for every edit, the time it takes me to get it to revise its work until the quality is what I'd expect of myself is still a tiny fraction of what it'd take me to do things manually. The time estimate above includes my manual time spent on reviews and fixes. I expect that time savings to increase, as about half of the time I spend on this project now is time spent improving guardrails and adding agents etc. to refine the work automatically before I even glance at the output. The biggest lesson for me is that when people are not getting good results, most of the time it is because they keep watching every step their agent takes, instead of putting in place a decent agent loop (create a plan for X; for each item on the plan: run tests until it works, review your code and fix any identified issues, repeat until the tests and review pass without any issues) and letting the agent work until it stops, before wasting time reviewing the result. Only when the agent repeatedly fails to do an assigned task adequately do I "slow it down" and have it do things step by step to figure out where it gets stuck / goes wrong.
At which point I tell it to revise the agents accordingly, and then have it try again. It's not cost-effective to have expensive humans babysit cheap LLMs, yet a lot of people seem to want to babysit the LLMs. |
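The agent loop described above (plan, then implement/test/review each item until it passes) might look like this as a sketch. `ask_agent` and `tests_pass` are hypothetical stand-ins for whatever agent CLI and test runner you use; this is not vidarh's actual setup.

```python
# Hands-off agent loop sketch: get a plan, then for each item iterate
# implement -> review -> test until green, and only review the
# finished output yourself at the end. All callables are stand-ins.
def agent_loop(task, ask_agent, tests_pass, max_attempts=5):
    plan = ask_agent(f"create a plan for {task}").splitlines()
    for item in plan:
        for _ in range(max_attempts):
            ask_agent(f"implement: {item}")
            ask_agent("review your code and fix any identified issues")
            if tests_pass():
                break  # this item is done; move to the next plan step
        else:
            # repeated failure: the cue to slow down and go step by step
            raise RuntimeError(f"agent stuck on: {item}")
    return plan
```

The `for/else` makes the escalation explicit: only when an item exhausts its attempts does the human step in, which is the babysitting-avoidance point the comment is making.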
| |
| ▲ | AstroBen 3 hours ago | parent | prev [-] | | It's blown me away also. I'm also fairly confident that having it write my code is not a productivity boost, at least for production work I'd like to maintain long term. |
|
|
| ▲ | jjice 3 hours ago | parent | prev | next [-] |
| > Are they really late? Has everyone started using agents and paying $200 subscriptions? No, most programmers I know outside of my own work (friends, family, and old college pals) don't use AI at all. They just don't care. I personally use Cursor at work and enjoy it quite a bit, but I think the author is maybe at the tail end of _their circle's_ adoption, but not the industry's. |
|
| ▲ | sodapopcan 3 hours ago | parent | prev | next [-] |
| I do not pay for any AI nor does my employer pay for it on my behalf. It will stay this way for as long as I can make that work while remaining employed. |
| |
| ▲ | quijoteuniv 3 hours ago | parent | next [-] | | What kind of work do you do? | | | |
| ▲ | PlatoIsADisease 3 hours ago | parent | prev [-] | | Are you a programmer? The $20/mo I pay is quite affordable given the ROI. I could see jumping between various free models. | | |
| ▲ | wongarsu 3 hours ago | parent | next [-] | | With the $20/month Claude subscription I frequently run into the session limit on after-work hobby projects. If the majority of your day job is actual programming (and not people management, requirements engineering, QA, etc., which is admittedly the reality of many "developer" jobs), the $200/month version seems almost required to have a productive coding assistant | |
| ▲ | Aurornis 26 minutes ago | parent | next [-] | | The $20/month will go fast if you're trying to drive the LLM to do all the coding. It also goes very fast if you don't actively manage your context by clearing it frequently for new tasks and keeping key information in a document to reference each session. Claude will eat through context way too fast if you just let it go. For true vibecoding-style dev where you just prompt the LLM over and over until things are done, I agree that $100 or $200 plans would be necessary though. | |
| ▲ | rhines an hour ago | parent | prev [-] | | How are you using it? I'm curious if you hit the limit so quickly because you're running it with Claude Code and so it's loading your whole project into its context, making tons of iterations, etc., or if you're using the chat and just asking focused questions and having it build out small functions or validate code quality of a file, and still hitting the limit with that. Not because I think either way is better, just because personally I work well with AI in the latter capacity and have been considering subscribing to Claude, but don't know how limiting the usage limits are. |
| |
| ▲ | sodapopcan 3 hours ago | parent | prev | next [-] | | I am. I use Deepseek and free-tier ChatJippity as a sometimes-better search. EDIT: I also wasn't going to say it, but it's not about the money for me; I just don't want to support any of these companies. I'm happy to waste their resources for my benefit, but I don't lean on it too often. | |
| ▲ | PlatoIsADisease 2 hours ago | parent [-] | | Well, that's your problem: you are using Deepseek. It's not even SOTA open source anymore, let alone competitive with GPT/Gemini/Grok. | |
| ▲ | sodapopcan 2 hours ago | parent [-] | | ¯\_(ツ)_/¯ Wasn't my point. | | |
| ▲ | PlatoIsADisease an hour ago | parent [-] | | But this matters for your usage of LLMs. I couldn't use GPT-3 for coding, and Deepseek is at GPT-3 + CoT levels. | |
| ▲ | sodapopcan an hour ago | parent [-] | | You're a little too focused on my dig about it being a "sometimes better search" which is fair. I'm not going to be sending money every month to billion dollars companies who capitulate to a goon threatening to annex my country. I accept whatever consequences that has on my programming career. |
|
|
|
| |
| ▲ | njhnjh 3 hours ago | parent | prev [-] | | [dead] | | |
|
|
|
| ▲ | Insanity 3 hours ago | parent | prev | next [-] |
| I have a really simple app that I asked various models to build, but it requires interacting with an existing website. (“Scrape Kindle highlights from the Kindle webpage, store them in a database, and serve them daily through an email digest.”) No success so far in getting it done without a lot of handholding and manually updating the web scraping logic. It’s become something of a litmus test for me. So, maybe there is some FOMO, but in my experience it’s a lot of snake oil. Also, at work I manage a team of engineers, and like 2 out of 12 clearly submit AI-generated code. The others stopped using it, or just do a lot more wrangling of the output. |
|
| ▲ | kibwen 4 hours ago | parent | prev | next [-] |
| For the past 20 years the population of the internet has been increasingly sorted into filter bubbles, designed by media corporations which are incentivized to use dark patterns and addictive design to hijack the human brain, weaponizing its own emotions against it and creating the illusion of popular consensus. To suggest that someone who has been vibecoding for only a few months is at the tail end of mass adoption is to reveal that one's brain has been pickled by exposure to Twitter. These tools are still extremely undercooked; insert the "meet potential man" meme here. |
|
| ▲ | giancarlostoro 4 hours ago | parent | prev | next [-] |
| Is it FOMO if for $100 a month you can build things that would otherwise take months, and then refine, polish, and test them, and have them more stable than most non-AI code has been for the last decade-plus? I blame Marketing Driven Development for why software has gone downhill. Look at Windows as a great example. "We can fix that later" is a lie, but not with a coding agent. You can fix it now. |
| |
| ▲ | anonymous908213 4 hours ago | parent | next [-] | | > Is it FOMO if for $100 a month you can build things that takes months It is the very definition of FOMO if there is an entire cult of people telling you that for a year, and yet after a year of hearing about how "everything has changed", there is still not a single example of amazing vibe-coded software capable of replacing any of the real-world software people use on a daily basis. Meanwhile Microsoft is shipping more critical bugs and performance regressions in updates than ever while boasting about 40% of their code being LLM-generated. It is especially strange to cite "Windows as a great example" when 2025 was perhaps one of the worst years I can remember for Windows updates despite, or perhaps because of, LLM adoption. | | |
| ▲ | drw85 2 hours ago | parent [-] | | For MS, it's currently eroding every single one of their products. Azure, Office, Visual Studio, VS Code, and Windows are all shipping faster than ever, but so much stuff is unfinished, buggy, incompatible with existing things, etc. |
| |
| ▲ | CodeMage 3 hours ago | parent | prev [-] | | "We can fix it later" is not the staple of Marketing Driven Development. It's not why Windows has been getting more user-hostile and invasive, why its user experience has been getting worse and worse. Enshittification is not primarily caused by "we can fix it later", because "we can fix it later" implies that there's something to fix. The changes we've seen in Windows and Google Search and many other products and services are there because that's what makes profit for Microsoft and Google and such, regardless of whether it's good for their users or not. You won't fix that with AI. Hell, you couldn't even fix Windows with AI. Just because the company is making greedy, user-hostile decisions, it doesn't mean that their software is simple to develop. If you think Windows will somehow get better because of AI, then you're oversimplifying to an astonishing degree. |
|
|
| ▲ | nurettin 3 hours ago | parent | prev | next [-] |
| The bulk of developers are probably on the $20 Claude plan or Cursor, waiting for their company to pay up. |
|
| ▲ | ryanSrich 3 hours ago | parent | prev [-] |
| When Fortune 500, 100, and 50 organizations are buying AI coding tools at scale (I know from personal experience), then I would say you're late. So yes: late-stage adoption for this wave. |