abhaynayar | 4 days ago
At the moment I just don't see AI, in its current state or on its future trajectory, as a threat to jobs. (Not that there can't be other reasons why jobs are getting harder to get.) Predictions are hard and breakthroughs can happen, so this is just my opinion. I'm posting this comment as a record to myself of how I feel about AI, since my opinion on how useful/capable it is has gone up and down and up and down again over the last couple of years.

Most recently it went down because I worked on two separate projects over the last few weeks with the latest models available on GitHub Copilot Pro (GPT-5, Claude Sonnet 4, Gemini 2.5 Pro, and some less capable ones at times), trying the exact same queries for code changes across all three models for a majority of the queries. I found myself using Claude the most, but it still wasn't drastically better than the others, and it still made too many mistakes.

One project was a simple health-tracking app in Dart/Flutter. Completely vibe-coded, just for fun. I got the basic stuff working, but over the days I kept finding bugs as I started using it. Since I truly wanted to use this app in my daily life, at one point I just gave up because fixing the bugs was getting way too annoying. Most "fixes", as I later got into the weeds of it, were wrong and built on wrong assumptions; they seemed to solve the problem at the surface but introduced more bugs and random garbage, despite my giving a ton of context and instructions on why things were supposed to be a certain way. I was constantly fighting with the model. It would've been much easier to do most of it on my own and use the model only a little.

Another project was in TypeScript, where I actually did use my brain rather than just vibe-coding. Here the AI models were helpful because I mostly used them to explain stuff, and I didn't let them make more than a few lines of code changes at a time. There was a portion of the project that I kind of "isolated" and completely vibe-coded; I don't mind if it breaks, since it's not critical. It did save me some time, but I certainly could've done it on my own with a little more time, while ending up with code I fully understand and can edit.

So the way I see it, these models right now are for research/prototyping/throwaway kind of stuff. But even in that, I literally had Claude 4 teach me something wrong about TypeScript just yesterday. It told me a certain thing was deprecated. When I asked a follow-up question about why it was deprecated and what's used instead, it replied with something like "Oops! I misspoke, that is not actually true; that thing is still being used and not deprecated." Like, what? Lmao. For how many things have I not asked a follow-up and learned stuff incorrectly? Or asked and still learned incorrectly? Lmao.

I like how straightforward GPT-5 is, but apart from that style of speech I don't see much other benefit. I do love LLMs for personal random searches like facts/plans/etc. I just ask the LLM to suggest what to do, just to rubber-duck or whatever.

Do all these gains add up to massive job displacement? I don't know. Maybe. If it's saving 10% of the time for me and everyone else, I guess we need 10% fewer people to do the same work? But is the amount of work we can get paid for fixed and finite? Idk. We (individuals) might have to adapt and be more competitive than before, depending on our jobs and how they're affected, but is it a fundamental shift? Are these models or their future capabilities human replacements? Idk.
At the moment, I think they're useful but overhyped. Time will tell though.