I don't care how well your "AI" works(fokus.cool)
203 points by todsacerdoti 5 hours ago | 252 comments
easterncalculus 15 minutes ago | parent | next [-]

> I find it particularly disillusioning to realize how deep the LLM brainworm is able to eat itself even into progressive hacker circles.

That's the thing, hacker circles didn't always have this 'progressive' luddite mentality. This is the culture that replaced hacker culture.

I don't like AI, generally. I am skeptical of corporate influence, I doubt AI 2027 and so-called 'AGI'. I'm certain we'll be "five years away" from superintelligence for the foreseeable future. All that said, the actual workday is absolutely filled with busywork that no one really wants to do, and the refusal of a loud minority to engage with that fact is what's leading to this. It's why people can't post a meme, quote, or article that could be interpreted (very often, falsely) as AI-generated in a public channel, or ask a chatbot to explain a hand-drawn image, without the off chance that they get an earful from one of these 'progressive' people. These people bring way more toxicity to daily life than the people they wage their campaigns against.

bgwalter a few seconds ago | parent | next [-]

Being anti "AI" has nothing to do with being progressive. Historically, hackers have always rejected bloated tools, especially those that are not under their control and that spy on them and build dossiers like ChatGPT.

Hackers have historically derided website generators, and tools like ColdFusion[tm] or Visual Studio[tm] for that matter.

It is relatively new that some corporate owned "open" source developers use things like VSCode and have no issues with all their actions being tracked and surveilled by their corporate masters.

Please do not co-opt the term "hacker".

poszlem 3 minutes ago | parent | prev [-]

> That's the thing, hacker circles didn't always have this 'progressive' luddite mentality. This is the culture that replaced hacker culture.

People who haven't lived through the transition will likely come here to tell you how wrong you are, but you are 100% correct.

TrackerFF an hour ago | parent | prev | next [-]

I get that some people want to be intellectually "pure". Artisans crafting high-quality software, made with love, and all that stuff.

But one emerging reality for everyone should be that businesses are swallowing the AI hype raw. You really need a competent and understanding boss to not be labeled a luddite, because let's be real - LLMs have made everyone more "productive" on paper. Non-coders are churning out small apps at record pace, juniors are looking like savants with the amount of code and tasks they finish, where probably 90% of the code is done by Claude or whatever.

If your org is blindly data/metric driven, it is probably just a matter of time until managers start asking why everyone else is producing so much while you're so slow.

syllogism 16 minutes ago | parent | next [-]

I think LLMs are net helpful if used well, but there's also a big problem with them in workplaces that needs to be called out.

It's really easy to use LLMs to shift work onto other people. If all your coworkers use LLMs and you don't, you're gonna get eaten alive. LLMs are unreasonably effective at generating large volumes of stuff that resembles diligent work on the surface.

The other thing is, tools change trade-offs. If you're in a team that's decided to lean into static analysis, and you don't use type checking in your editor, you're getting all the costs and less of the benefits. Or if you're in a team that's decided to go dynamic, writing good types for just your module is mostly a waste of time.
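
A minimal sketch of that trade-off (hypothetical code, with mypy assumed as the team's checker; nothing here is from the thread):

    # The team pays the cost up front: annotations written for the checker.
    def apply_discount(price: float, percent: float) -> float:
        """Return price reduced by percent (0-100)."""
        return price * (1 - percent / 100)

    # A teammate whose editor runs no type checker can ship a call like
    # apply_discount("100", 10) without a warning: mypy would flag the
    # str argument at edit time, but without the checker the mistake
    # only shows up as a TypeError at runtime.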

LLMs are like this too. If you're using a very different workflow from everyone else on your team, you're going to end up constantly arguing for different trade-offs, and ultimately you're going to cause a bunch of pointless friction. If you don't want to work the same way as the rest of the team, just join a different team; it's really better for everyone.

davidmurdoch 27 minutes ago | parent | prev | next [-]

This just happened to me this week.

I work on the platform everyone builds on top of. A change here can subtly break any feature, no matter how distant.

AI just can't cope with this yet. So my team has been told that we are too slow.

Meanwhile, earlier this week we halted a rollout because of a bug introduced by AI: it worked around a privacy feature by just allowlisting the behavior it wanted, instead of changing the code to address the policy. It wasn't caught in review because the changed file didn't require my team's review (because we ship more slowly, they removed us as code owners for many files recently).

BarryMilo 18 minutes ago | parent [-]

As was foretold from the beginning, AI use is breaking security wantonly.

Aurornis 8 minutes ago | parent | prev | next [-]

> Non-coders are churning out small apps at record pace, juniors are looking like savants with the amount of code and tasks they finish, where probably 90% of the code is done by Claude or whatever.

Honestly I think you’re swallowing some of the hype here.

I think the biggest advantages of LLMs go to the experienced coders who know how to leverage them in their workflows. That may not even include having the LLM write the code directly.

The non-coders-producing-apps meme is all over social media, but the real world results aren’t there. All over Twitter there were “build in public” indie non-tech developers using LLMs to write their apps, and the hype didn’t match reality. Some people could get minimal apps out the door that kind of talked to a back end, but even those people were running into issues keeping updates from breaking everything or managing the software lifecycle.

The top complaint about LLMs in all of my social circles is juniors submitting LLM junk PRs and then blaming the LLM. It’s just not true that juniors are expertly solving tasks with LLMs faster than seniors.

I think LLMs are helpful, and anyone senior who isn’t learning how to use them to their advantage (which doesn’t mean telling the LLM what to write and hoping for the best) is missing out. I think people swallowing the hype about non-tech people and juniors doing senior work are getting misled about the actual ways to use these tools effectively.

fileeditview 40 minutes ago | parent | prev | next [-]

The era of software mass production has begun. Many "devs" are just workers on a production line, pushing buttons, repeating the same task over and over.

The produced products, however, do not compare in quality to other industries' mass production lines. I wonder how long it takes until this all comes crashing down. Software mostly already is not a high-quality product; with Claude & co it just gets worse.

edit: sentence fixed.

afro88 14 minutes ago | parent | next [-]

I think you'll be waiting a while for the "crashing down". I was a kid when manufacturing went offshore and mass production went into overdrive. I remember my parents complaining about how low quality a lot of mass-produced things were. Yet for decades most of what we buy has been mass-produced, comparatively low-quality goods. We got used to it; the benefits outweighed the negatives. What we thought mattered didn't, in the face of a lot of previously unaffordable goods becoming broadly available and affordable.

You can still buy high-quality goods made with care when it matters to you, but that's the exception. It will be the same with software. A lot of what we use will be mass-produced with AI, and even produced in realtime on the fly (in 5 years maybe?). There will be some things where we'll pay a premium for software crafted with care, but for most it won't matter, because of the benefits of rapidly produced software.

We've got a glimpse of this with things like Claude Artifacts. I now have a piece of software quite unique to my needs that simply wouldn't have existed otherwise. I don't care that it's one big js file. It works and it's what I need and I got it pretty much for free. The capability of things like Artifacts will continue to grow and we'll care less and less that it wasn't human produced with care.

lxgr 30 minutes ago | parent | prev [-]

> The era of software mass production has begun.

We've been in that era for at least two decades now. We just only now invented the steam engine.

> I wonder how long it takes until this all comes crashing down.

At least one such artifact of craft and beauty already literally crashed two airplanes. Bad engineering is possible with and without LLMs.

pacifika 2 minutes ago | parent | next [-]

Yeah, it’ll be interesting to see whether blaming LLMs becomes as acceptable as “caused by a technical fault” for deflecting responsibility from what is a programmer’s output.

Perhaps that’s what leads to a decline in accountability and quality.

knollimar 26 minutes ago | parent | prev | next [-]

There's a huge difference between possible and likely.

Maybe I'm pessimistic, but I at least feel like there's a world of difference between a practice that encourages bugs and one that lets them through when there is negligence. The accountability problem needs to be addressed before we say it's like self-driving cars outperforming humans. On an errors-per-line basis, I don't think LLMs are on par with humans yet.

lxgr 22 minutes ago | parent [-]

Knowing your system components’ various error rates and compensating for them has always been the job. This includes both the software itself and the engineers working on it.

The only difference is that there is now a new high-throughput, high-error (at least for now) component editing the software.

goldeneas 25 minutes ago | parent | prev [-]

> Bad engineering is possible with and without LLMs

That's obvious. It's a matter of which makes it more likely.

AndrewKemendo 19 minutes ago | parent | prev | next [-]

> If your org is blindly data/metric driven

Are there for-profit companies (not non-profits, research institutes, etc.) that are not metric driven?

zwnow 40 minutes ago | parent | prev [-]

> You really need a competent and understanding boss to not be labeled a luddite, because let's be real - LLMs have made everyone more "productive" on paper.

I am actually less productive when using LLMs because now I have to read another entity's code and judge whether it fits my current business problem or not. If it doesn't, yay, refactoring prompts instead of tackling the actual problem. Also, I can write code for free; LLM coding assistants aren't free. I can fit business problems and edge cases into my brain given some time; an LLM is unaware of edge cases, legal requirements, decoupled dependencies, potential refactors or the occasional call from the boss asking for something to be sneaked into the code right now. If my job forced me to use these tools, congrats, I'll update my address to some hut in a forest, eating cold canned ravioli for the rest of my life, because I sure don't wanna work in a world where I am forced to use dystopian big tech machines I can't look into.

Aurornis 21 minutes ago | parent [-]

> I am actually less productive when using LLMs because now I have to read another entity's code and judge whether it fits my current business problem or not.

You don’t have to let the LLM write code for you. They’re very useful as a smart search engine for your code base, a smart refactoring tool, a suggestion generator, and many other ways.

I rarely have LLMs write code for me from scratch that I have to review, but I do give them specific instructions to do what I want to the codebase. They can do it much faster than I can search around the codebase and type out myself.

There are so many ways to make LLMs useful without having them do all the work while you sit back and judge. I think some people are determined to get no value out of the LLM because they feel compelled to be anti-hype, so they’re missing out on all the different little ways these tools can help. Even just using one as a smarter search engine (in the modes where they can search and find the right sections of the right articles, or even GitHub issues, for you) has been very helpful. But you have to actually learn how to use them.

> If my job forced me to use these tools, congrats, I'll update my address to some hut in a forest, eating cold canned ravioli for the rest of my life, because I sure don't wanna work in a world where I am forced to use dystopian big tech machines I can't look into.

Okay, good luck with your hut in the forest. The rest of us will move on using these tools how we see fit, which for many of us doesn’t actually include this idea where the LLM is the author of the code and you just ask nicely and reject edits until it produces the exact code you want. The tools are useful in many ways and you don’t have to stop writing your own code. In fact, anyone who believes they can have the LLM do all the coding is in for a bad surprise when they realize that specific hype is a lie.

bgwalter 6 minutes ago | parent | next [-]

Is that why open source progress has generally slowed down since 2023? We keep hearing these promises, and reality shows the opposite.

zwnow 14 minutes ago | parent | prev [-]

> But you have to actually learn how to use them.

This probably is the issue for me, I am simply not willing to do so. To me the whole AI thing is extremely dystopian so even on a professional level I feel repulsed by it.

We had an AWS and a Cloudflare outage recently, which has shown that maybe it isn't a great idea to rely on a few companies for a single _thing_. Integrating LLMs and using all these tools is just another bridge people depend on at some point.

I want to write software that works, preferably even offline. I want tools that do not spy on me (referring to that new Google editor, forgot the name). Call me once these tools work offline on my 8GB RAM laptop with a crusty CPU and I might put in the effort to learn them.

Aurornis a few seconds ago | parent [-]

> This probably is the issue for me, I am simply not willing to do so.

Thanks for being honest at least. So many HN arguments start as a desire to hate something and then try to bridge that into something that feels like a takedown of the merits of that thing. I think a lot of the HN LLM hate comes from people who simply want to hate LLMs.

> We had an AWS and a Cloudflare outage recently, which has shown that maybe it isn't a great idea to rely on a few companies for a single _thing_. Integrating LLMs and using all these tools is just another bridge people depend on at some point.

For an experienced dev using LLMs as another tool, an LLM outage isn’t a problem. You just continue coding.

It’s on the level of Google going down so you have to use another search engine or try to remember the URL for something yourself.

The main LLM players are also easy to switch between. I jump between Anthropic, Google, and OpenAI almost month to month to try things out. I could have subscriptions to all 3 at the same time and it would still be cheap.

I think this point is overblown. It’s not a true team dependency like when GitHub stopped working a few days back.

embedding-shape 4 hours ago | parent | prev | next [-]

> And yeah, I get it. We programmers are currently living through the devaluation of our craft, in a way and rate we never anticipated possible.

I'm a programmer, been coding professionally for 10 something years, and coding for myself longer than that.

What are they talking about? What is this "devaluation"? I'm getting paid more than ever for a job I feel like I almost shouldn't get paid for (I'm just having fun), and programmers should be some of the most worry-free individuals on this planet, the job is easy, well-paid, not a lot of health drawbacks if you have a proper setup and relatively easy to find a new job when you need it (granted, the US seems to struggle with that specific point as of late, yet it remains true in the rest of the world).

And now, we're having a huge explosion of tools for developers, to build software that has to be maintained by developers, made by developers for developers.

If anything, it seems like Ballmer's plea of "Developers, developers, developers" has come true, and if there will be one profession left in 100 years when AI does everything for us (if the vibers are to be believed), then that'd probably be software developers and machine learning experts.

What exactly is being devalued in a profession that seems to be continuously growing and has been doing so for at least 20 years?

swatcoder an hour ago | parent | next [-]

The "devaluation" they mention is just the correction against the absurd ZIRP bump, that lured would-be doctors and lawyers into tech jobs at FAANG and FAANG-alike firms with the promise of upper middle class lifestyles for trivially weaving together API calls and jockeying JIRA tickets. You didn't have to spend years more in grad school, you didn't have to be a diligent engineer. You just had to had to have a knack for standardized tests (Leetcode) and the time to grid some prep.

The compensation and hiring for that kind of inexpert work were completely out of sync with anything sustainable, but held up for almost a decade because money was cheap. Now money is held much more tightly, and we stumbled into a tech that can cheaply regurgitate a lot of the trivial, inexpert work, meaning the bottom fell out of these untenable, overpaid jobs.

You and I may not be affected, having charted a different path through the industry and built some kind of professional career foundation, but these kids who were (irresponsibly) promised an easy upper-middle-class life are still real people with real life plans, who are now finding themselves in a deeply disappointing and disorienting situation. They didn't believe the correction would come, let alone so suddenly, and now they don't know how they're supposed to get themselves back on track for the luxury lifestyle they thought they had legitimately earned.

j4coh 13 minutes ago | parent [-]

I don't believe companies can reliably tell expert and non-expert developers apart, so as to sort them efficiently enough for things to play out like you say.

abraxas 39 minutes ago | parent | prev | next [-]

You sound exactly like that turkey from Nassim Taleb's books that came to the conclusion that the purpose of human beings is to make turkeys very happy with lots of food and breeding opportunities. And the turkey's thesis gets validated perfectly every day he wakes up to a delicious fatty meal.

Until Thanksgiving.

thunky 2 hours ago | parent | prev | next [-]

> What exactly is being devalued in a profession

You're probably fine as a more senior dev...for now.

But if I was a junior I'd be very worried about the longevity I can expect as a dev. It's already easier in many/most cases to assign work to an LLM vs handholding a human through it.

Plus, as an industry we've been exploiting our employers' lack of information to extract large salaries for largely poor-quality outputs, imo. And as that ignorance moat gets smaller, this becomes harder to pull off.

spicyusername 2 hours ago | parent | next [-]

> assign work to an LLM

This is just not happening anywhere around me. I don't know why it keeps getting repeated in every one of these discussions.

Every software engineer I know is using LLM tools, but every team around me is still hiring new developers. Zero firing is happening in any circle near me due to LLMs.

LLMs cannot do unsupervised work, period. They do not replace developers. They replace Stack Overflow and Google.

neom an hour ago | parent | next [-]

I can tell you where I am seeing it change things for sure: at the early stages. If you wanted to work at a startup I advise or invest in, based on what I'm seeing, it might be more difficult than it was 5 years ago, because there is a slightly different calculus at the early stage. Often your go-to-market and discovery processes at seed/pre-seed are either not working well yet, nonexistent, or decoupled from prod and eng; the goal, obviously, is over time to bring it all together into a complete system (a business). As long as I've been around early-stage startups there has always been a tension between engineering and growth over budget division, and the dance of how you place resources across them such that they come together well is quite difficult. Now what I'm seeing is: engineering could do with being a bit faster, but too much faster and they're going to be sitting around waiting for the business teams to get their shit together. Whereas before they would look at hiring a junior, now they will just buy some AI tools, or invest more time in AI scaffolding etc., allowing them to go a little bit faster, though it's understood: not as fast as hiring a junior engineer. I noticed this trend starting in the spring this year, and I've been watching to see if the teams who did this then "graduate" out of it to hiring a junior; so far only one team has hired, and it seems they skipped junior and went straight to a more senior dev.

cjbgkagh an hour ago | parent | prev | next [-]

Around 80% of my work is easy while the remaining 20% is very hard. At this stage the hard stuff is far outside the capability of LLMs, but the easy stuff is very much within their capabilities. I used to hire contractors to help with that 80%, but now I use LLMs instead. It’s far cheaper, better quality, and zero hassle. That’s 3 junior/mid-level jobs that are gone now. Since the hard stuff is of combinatorial complexity, I think by the time an LLM is good enough to do it, it will probably be good enough to do just about everything, and we’ll be living in an entirely different world.

scarface_74 a minute ago | parent [-]

Exactly this. I lead cloud consulting + app dev projects. Before, I would have staffed a project with at least me leading it, doing the project management + stakeholder meetings and some of the work, and bringing a couple of others in to do some of the grunt work. Now with Gen AI, even just using ChatGPT and feeding it a lot of context - diagrams I put together, statements of work, etc. - I can do it all myself, without the coordination effort of working with two other people.

On the other hand, when I was staffed to lead a project that did have another senior developer, one level below me, I tried to split up the actual work, but it became a coordination nightmare once we started refining the project, because he could just use Claude Code and it would make all of the modifications needed for a feature, from the front end work to the backend APIs to the Terraform and the deployment scripts.

I would have actually slowed him down.

vladimirralev an hour ago | parent | prev | next [-]

Today's high-end LLMs can do a lot of unsupervised work. Debug iterations are at least junior level. Audio and visual output verification is still very weak (i.e. to verify web page layout and component reactivity). Once the visual model is good enough to look at the screen pixels and understand them, it will instantly replace junior devs. Currently, if you have only text output, all new LLMs can iterate flawlessly and solve problems on it. New backend dev from scratch is completely doable with vibe coding now, with some exceptions around race conditions and legacy code comprehension.

raw_anon_1111 22 minutes ago | parent | prev | next [-]

Well, your anecdote is clearly at odds with absolutely all of the macroeconomic data.

grumbel 2 hours ago | parent | prev | next [-]

> This is just not happening anywhere around me.

Don't worry about where AI is today, worry about where it will be in 5-10 years. AI is brand-new, bleeding-edge technology right now, and adoption always takes time, especially when the integration with IDEs and such is even more bleeding edge than the underlying AI systems themselves.

And speaking about the future, I wouldn't just worry about it replacing the programmer, I'd worry about it replacing the program. The future we are heading into might be one where the AI is your OS. If you need an app to do something, you can just make it up on the spot; a lot of classic programs will no longer need to exist.

danaris an hour ago | parent [-]

> Don't worry about where AI is today, worry about where it will be in 5-10 years.

And where will it be in 5-10 years?

Because right now, the trajectory looks like "right about where it is today, with maybe some better integrations".

Yes, LLMs experienced a period of explosive growth over the past 5-8 years or so. But then they hit diminishing returns, and they hit them hard. Right now, it looks like a veritable plateau.

If we want the difference between now and 5-10 years from now and the difference between now and 5-10 years ago to look similar, we're going to need a new breakthrough. And those don't come on command.

CuriouslyC an hour ago | parent | next [-]

Right about where it is today with better integrations?

One year is the difference between Sonnet 3.5 and Opus 4.5. We're not hitting diminishing returns yet (mostly because of exponential capex scaling, but still). We're already committed to ~3 years of the current trajectory, which means we can expect similar performance boosts year over year.

The key to keep in mind is that LLMs are a giant bag of capabilities, and just because we hit diminishing returns on one capability, that doesn't say much if anything about your ability to scale other capabilities.

lupire 30 minutes ago | parent | prev | next [-]

It's a trope that people say this and then someone points out that while the comment was being drafted another model or product was released that took a substantial step up in problem-solving power.

enraged_camel 15 minutes ago | parent | prev [-]

I use LLMs all day every day. There is no plateau. Every generation of models has resulted in substantial gains in capability. The types of tasks (both in complexity and scope) that I can assign to an LLM with high confidence are frankly absurd, and I could not even have dreamed of this eight months ago.

chud37 an hour ago | parent | prev | next [-]

Completely agree. I use LLMs like I used Stack Overflow, except this time I get straight to the answer and no one closes my question and marks it as a duplicate, or stupid.

I don't want it integrated into my IDE; I'd rather just give it the information it needs to get me my result. But yeah, just another Google or Stack Overflow.

carrychains an hour ago | parent | prev | next [-]

It's me. I'm the LM having work assigned to it that junior devs used to get. I'm actually just a highly proficient BA who has almost always read code, followed and understood news about software development here and on /. before, but generally avoided writing code out of sheer laziness. It's always been more convenient to find something easier and more lucrative in those moments of decision where I actually considered shifting to coding as my profession.

But here I am now. After filling in for lazy architects above me for 20 years, while guiding developers to follow standards and build good habits, and learning important lessons from talking to senior devs along the way, guess what: I can magically do it myself now. The LM is the junior developer that I used to painstakingly explain the design to, and it screws it up half as much as the braindead and uncaring junior dev used to. Maybe I'm not a typical case, but it shows a hint of where things might be going. This will only get easier as the tools become more capable and mature into something more reliable.

chrisweekly an hour ago | parent [-]

LM?

queenkjuul 2 hours ago | parent | prev [-]

You're mostly right but very few teams are hiring in the grand scheme of things. The job market is not friendly for devs right now (not saying that's related to AI, just a bad market right now)

HarHarVeryFunny 42 minutes ago | parent | prev | next [-]

> But if I was a junior I'd be very worried about the longevity I can expect as a dev. It's already easier in many/most cases to assign work to an LLM vs handholding a human through it.

This sounds kind of logical, but really isn't.

In reality you can ASSIGN a task to a junior dev and expect them to eventually complete it, and to learn from the experience as well. Sure, there'll likely be some interaction between the junior dev and a mentor, and this is part of the learning process - something DESIRABLE, since it leads to the developer getting better.

In contrast, you really can't "assign" something to an LLM. You can of course try to, and give it some "vibe coding" assignment like "build me a backend component to read the data from the database", but the LLM/agent isn't an autonomous entity that can take ownership of the assignment and be expected to do whatever it takes (e.g. coming back to you and asking for help) to get it done. With today's "AI" technology it's the AI that needs all the handholding, and the person using the AI is the one who has effectively taken the assignment, not the LLM.

Also, given the inability of LLMs to learn on the job, using an LLM as a tool to help get things done is going to be a Groundhog Day experience of having to micro-manage the process in the same way over and over again each time you use it... time that would have been better invested in helping a junior dev get up to speed and become, in the future, an independent developer that tasks can indeed be assigned to.

enraged_camel 10 minutes ago | parent | next [-]

>> e.g. coming back to you and asking for help

Funny you mention this, because Opus 4.5 did this just yesterday. I accidentally gave it a task with conflicting goals, and after working through it for a few minutes it realized what was going on, summarized the conflict and asked me which goal should be prioritized, along with detailed pros and cons of each approach. It’s exactly how I would expect a mid-level developer to operate, except much faster and more thorough.

lupire 32 minutes ago | parent | prev [-]

Doesn't matter. First, yes, a modern AI will come back and ask questions. Second, the AI is so much faster at interactions than a human is that you can use the saved time to glance at its work and redirect it. The AI will come back with 10 prototype attempts in an hour, while a human will take a week for each, with more interrupt questions for you about easy things.

HarHarVeryFunny 12 minutes ago | parent [-]

Sure, LLMs are a useful tool, and fast, but the point is they don't have human-level intelligence, can't learn, and are not autonomous outside of an agent that will attempt to complete a narrow task (but with no ownership or guarantee of eventual success).

We'll presumably get there eventually and build "artificial humans", but for now what we've got is LLMs - tools for language task automation.

If you want to ASSIGN a task to something/someone then you need a human or an artificial human. For now that means assigning the task to a human, who will in turn use the LLM as a tool. Sure, there may be some productivity increase (although some studies have indicated the exact opposite), but ultimately if you want to be able to get more work done in parallel then you need more entities that you can assign tasks to, and for the time being that means humans.

walt_grata an hour ago | parent | prev | next [-]

LLMs vs. humans:

Handholding the human pays off more in the long run than handholding the LLM, which requires more handholding anyway.

Claude doesn't get better as I explain concepts to it the same way a junior engineer does.

cjbgkagh 34 minutes ago | parent | next [-]

I had hired 3 junior/mid-level devs and paid them to do nothing but study to improve their skills; it was my investment in their future, as I had a big project on the horizon that I needed help with. After 6 months I let them go; the improvement was far too slow. Books that should have taken a week to get through were taking 6 weeks. Since then, LLMs have completely surpassed them. I think it’s reasonable to think that some day, maybe soon, LLMs will surpass me. Like everyone else, I have to do the best I can while I can.

eithed 12 minutes ago | parent [-]

But this is an issue with the worker you're hiring. I've worked with senior engineers who a) did nothing (as in, really not writing anything within the sprint), b) worked on things they wanted to work on, c) did ONLY the things they were assigned in the sprint (= if there were 10 tickets in the sprint and they were assigned 1 of them, they would finish that ticket and not pick up anything else), d) worked only on tickets that had requirements explicitly stated step by step (open file a, change line 89 to be `checkBar` instead of `checkFoo`... - having to write this would take longer than making the changes yourself, as I was really writing in the Jira ticket what I wanted the engineer to code; otherwise they would come back with "not enough spec, can't proceed"). All of these cases - senior people!

Sure - LLMs will do what they're told (for a specific value of "do" and "what they're told")

sebasvisser an hour ago | parent | prev | next [-]

Maybe see it less as a junior or a replacement for humans, and more as a tool for you! A tool so that the stuff you used to delegate/dump on a junior, you can now do yourself.

lupire 26 minutes ago | parent | prev [-]

Claude gets better as Claude's managers explain concepts to it. It doesn't learn the way a human does. AI is not human. The benefit is that when Claude learns something, it doesn't need to run a MOOC to teach the same things to millions of individuals. Every copy of Claude instantly knows.

xtiansimon 2 hours ago | parent | prev | next [-]

> “…exploiting our employers' lack of information…”

I agree, in the sense that those of us who work in for-profit businesses have benefited from employers' willingness to spend on dev budgets (salaries included) without having to spend their own _time_ becoming increasingly involved in the work. As "AI" develops it will blur the boundaries of roles and reshape how capital can be invested to deliver results and have impact. And if the power dynamics shift (i.e. out of the class of educated programmers to, I dunno, philosophy majors) then you're in trouble.

singpolyma3 an hour ago | parent | prev [-]

If one is a junior the goal is to become a senior though. Not to remain a junior.

solids an hour ago | parent [-]

Yes, but the barrier to become a senior is what’s currently in dispute

JeremyNT 37 minutes ago | parent | prev | next [-]

> And now, we're having a huge explosion of tools for developers, to build software that has to be maintained by developers, made by developers for developers.

What do you think they're building all those datacenters for? Why do you think so much money is pouring into AI companies?

It's not to help make developers more efficient with code assistants.

Traditional computation will be replaced with bots in every aspect of software. The goal is to devalue our labor and replace it with computation performed by machines owned by the wealthy, who can lease this out.

If you can't see this coming you lack both imagination and historical perspective.

Five years ago Claude Code would have been essentially unimaginable. Consider this.

So sure, enjoy your job churning out buggy whips while you can, but you better have a plan B for when the automobiles truly arrive.

allturtles 19 minutes ago | parent | next [-]

I agree with all this, except there is no plan B. What could plan B possibly be when white collar work collapses? You can go into a trade, but who will be hiring the tradespeople?

aishsh 28 minutes ago | parent | prev [-]

I think it’s much more likely they’ll be used for mass surveillance purposes. The tech is already there, they just need the compute (and a lot of it).

Most of the economy is making things that aren’t really needed. Why bother keeping that afloat when it’s 90% trinkets for the proles? Once they’ve got the infra to ensure compliance, why bother with all the fake work, which is the real opium of the masses?

csmantle 3 hours ago | parent | prev | next [-]

> programmers should be some of the most worry-free individuals on this planet, the job is easy, well-paid, not a lot of health drawbacks if you have a proper setup and relatively easy to find a new job when you need it

Not where I live, though. Competition is fierce, both in industry and academia; most positions are saturated, and most employees face "HR optimization" in their late 30s. Not to mention working overtime, and its physical consequences.

embedding-shape 3 hours ago | parent [-]

Again, compare this to other professions, don't look at it in isolation, and you'll see why you're still having (or will have; it seems you're still a student) a much more pleasant life than others.

tdeck an hour ago | parent | next [-]

This is completely irrelevant. The point is that the profession is being devalued, i.e. losing value relative to where it was. If, for example, the US dollar loses value, it's not a "counterargument" to point out that it's still much more valuable than the Zimbabwe dollar.

ramon156 an hour ago | parent | prev | next [-]

Do other professions expect you to work during personal time? At least blue collar people are done when they get told they're done

I get your viewpoint though; physically exhausting work is probably much worse. I do want to point out that 40 hours has always been above average, and right now it's the default.

MattRix 3 hours ago | parent | prev [-]

This “compare it to other professions” thing doesn’t really work when those other professions are not the one you actually do. The idea that someone should never be miserable in their job because other more miserable jobs exist is not realistic.

embedding-shape 2 hours ago | parent [-]

It's a useful thing to look at when you feel like all hope is lost and the "wow, it is so difficult being a programmer" mood strikes, because it'll make you realize how easy you have it compared to non-programmers/non-tech people.

MattRix an hour ago | parent [-]

Realizing how supposedly “easy” you have it compared to other people is not as encouraging or motivational as you’re implying it is. And how “easy” do you have it if you can’t find a job in your field?

Mashimo 3 hours ago | parent | prev | next [-]

> What exactly is being devalued in a profession that seems to be continuously growing

A lot of newly skilled job applicants can't find anything in the job market right now.

DebtDeflation an hour ago | parent | next [-]

Likewise with experienced devs who find themselves out of work due to the neverending mass layoffs.

There's a huge difference between the perspective of someone currently employed versus that of someone in the market for a role, regardless of experience level. The job market of today is nothing like the job market of 3 years ago. More and more people are finding that out every day.

embedding-shape 3 hours ago | parent | prev | next [-]

Based on conversations with peers over the last ~3 years or so, some of whom retrained to become programmers, this doesn't seem to be as absolute as you paint it out to be.

But as mentioned earlier, the situation in the US seems much more dire than elsewhere. People I know who entered the programming profession in South America, Europe and Asia in these last years don't seem to have more trouble than I had when I got started. Yes, it requires work, just like it did before.

DJBunnies 3 hours ago | parent [-]

Nah it's pretty bad, but congrats on being an outlier.

embedding-shape 3 hours ago | parent [-]

Literally the worst job you can find as a programmer today (if you lower your standards and, particularly, stay away from cryptocurrency jobs) is 10x better than the non-programmer jobs you can find.

If you don't trust me, give a non-programming job a try for 1 year and then come back and tell me how much more comfy $JOB was :)

RHSeeger an hour ago | parent | next [-]

> Literally the worst job you can find as a programmer today (if you lower your standards and, particularly, stay away from cryptocurrency jobs) is 10x better than the non-programmer jobs you can find.

This is a ridiculous statement. I know plenty of people (that are not developers) that make around the same as I do and enjoy their work as much as I do. Yes, software development is a great field to be in, but there's plenty of others that are just as good.

nake89 3 hours ago | parent | prev | next [-]

> give a non-programming job a try for 1 year

I have a mortgage, 3 kids and a wife to support. So no. I don't think I'm going to do that. Also, I like my programming job.

EDIT: Sorry I thought you were saying the opposite. Didn't realize you were the OP of this thread.

kamaal 2 hours ago | parent | prev | next [-]

>>Literally the worst job you can find as a programmer today (if you lower your standards and, particularly, stay away from cryptocurrency jobs) is 10x better than the non-programmer jobs you can find.

A lot of non-programmer jobs have a kind of union protection, pension plans and other perks, even health care. That makes a crappy salary and work environment bearable.

There was this VP of HR at an Indian outsourcing firm, and she said something to the effect that software jobs appear to pay to the moon, have an employee generate tremendous value for the company, and carry the general appeal that only smart people work these jobs - and that none of this happens for the majority of people. So after 10-15 years you actually kind of begin to see why someone might want to work a manufacturing job.

Life is long; job security, pensions etc. matter far more than 'move fast and break things' glory as you age.

queenkjuul 2 hours ago | parent | prev [-]

I was a lot happier in previous non-programming jobs; they just were much worse at paying the bills. If I could make my programming salary doing either of my previous jobs, I would go back in a heartbeat. Hell, if I could make even 60% of my programming salary doing those jobs I'd go back.

I enjoy the practice of programming well enough, but I do not at all love it as a career. I don't hate it by any means either, but it's far from my first choice in terms of career.

phkahler 42 minutes ago | parent | prev | next [-]

>> A lot of newly skilled job applicants can't find anything in the job market right now.

That is not unique to programming or tech generally. The overall US job market is kind of shit right now.

raincole 3 hours ago | parent | prev [-]

Because tech corps overhired[0] when the interest rate was low.

Even after the layoffs, most big tech corps still have more employees today than they did in 2020.

The situation is bad, but the lesson to learn here is that a country should handle a pandemic better than "lowering interest rates to near-zero and increasing government spending." That just kicks the problem down the road, snowballing it into the next four years.

[0]: https://www.dw.com/en/could-layoffs-in-tech-jobs-spread-to-r...

IAmBroom an hour ago | parent | next [-]

I think it was more sandbagging than snowballing. The pain was spread out, and mostly delayed, which kept the economy moving despite everything.

Remember that most of the economy is actually hidden from the stock market, its most visible metric. Over half of it is privately-owned small businesses, and at the local level, forcibly shutting down all but essential-service shops was devastating. Without government spending, it's hard to imagine how most of those business owners and their employees would have survived, let alone their shops.

Yet we had no bread lines, no (increase in) migratory families chasing cash labor markets, and demands on charity organizations were heavy, but not overwhelming.

But you claim "a country should handle a pandemic better..." - what should we have done instead? Criticism is easy.

Hendrikto an hour ago | parent | prev [-]

It seems like most companies are just using AI as a convenient cover for layoffs. If you say: “We enormously over-hired and have to do layoffs.”, your stock tanks. If you instead say that you are laying off the same 20k employees ‘because AI’, your stock pumps for no reason. It’s just framing.

spicyusername 2 hours ago | parent | prev | next [-]

100% my experience as well.

Negativity spreads so much more quickly than positivity online, and I feel as though too many people live in self-reinforcing negative comment sections and blog posts rather than in the real world, which gives them a distorted view.

My opinion is that LLMs are doing nothing but accelerating what's possible with the craft, not eliminating it. If anything, this makes a single developer MORE valuable, because they can now do more with less.

lopis 4 hours ago | parent | prev | next [-]

The job of a programmer is, and has always been, 50% making our job obsolete (through various forms of automation) and 50% ensuring our job security (through various forms of abstraction).

empath75 25 minutes ago | parent [-]

Over the course of my career, probably 2/3rds of the roles I have had (as in my day-to-day work, not necessarily the title) just no longer exist, because people like me eliminated them. I personally was the last person to hold a few of those jobs, because I mostly automated them and got promoted, and they didn't hire a replacement. It's not that they hired fewer people, though; they just hired more people, paid them more money, and focused them on more valuable work.

m_a_g 3 hours ago | parent | prev | next [-]

> I'm getting paid more than ever for a job I feel like I almost shouldn't get paid for (I'm just having fun)

In my Big Tech job, I sometimes forget that some people can really enjoy what they do. It seems like you're in a fortunate position of both high pay and high enjoyment. Congratulations! Out of curiosity, what do you work on?

embedding-shape 3 hours ago | parent [-]

Right now I'm doing consulting for two companies, maybe a couple of hours per week, mostly having downtime and trying to expand on my machine learning knowledge.

But in general, every job I've had has been "high pay and high enjoyment," even when I initially had "shit pay" compared to other programmers and the product wasn't really fun; I was still programming, an activity I still love.

Compare this to the jobs I did before, where the physical toll made it impossible to do anything after work because you're exhausted. Even when I got paid more than in my first programming job, the fact that your body is literally unable to move once you get home makes the pay matter less and feel like less.

But for a programmer, you can literally sit still all day, have some meetings in a warm office, talk with some people, type some things into a document, sit and think for a while, and at the end of the month you get a paycheck.

If you've never worked in another profession, I think you ("The Programmer") don't realize how lucky you are compared to the rest of the world.

matwood 13 minutes ago | parent | next [-]

It's a good perspective to keep. I've also worked a lot of crappy jobs. Overnights in a grocery store (IIRC, they paid an extra .50/hour to work overnights), fine dining waiter (this one was actually fun, but the partying was too much), on a landscaping crew, etc... I make more money than I ever thought possible growing up. My dad still can't believe I have job 'playing on the computer' all day, though I mostly manage now.

IAmBroom an hour ago | parent | prev | next [-]

A useful viewpoint.

I too have worked in shit jobs. I too appreciate that I am currently in a 70F room of my house, wearing a T-shirt and comfy pants, and able to pet my doggos at will.

RHSeeger an hour ago | parent | prev | next [-]

Mental exhaustion is a thing, too.

queenkjuul an hour ago | parent | prev [-]

I work remote and I hate it; sitting all day is killing me, and my 5-minute daily stand-up is nowhere near enough social interaction for a whole day's work. I've been looking for a role better suited to me for over a year, but the market is miserable.

I miss having jobs where at least a lot of the time I was moving around or working directly with other people. More than anything else I miss casual conversation with coworkers (which still happened with excruciating rarity even when I was doing most of my programming in an office).

I'm glad you love programming and find the career ideal. I don't mean to harp or whine, just pointing out that your ideals aren't universal even among programmers.

mattbettinson 40 minutes ago | parent [-]

Get a standing desk and a walking treadmill! It’s genuinely changed my life. I can focus more easily, I get my steps in, and it feels like I did something that day.

hacb 2 hours ago | parent | prev | next [-]

It's absolutely not easy to find a new job in France, and more generally in Europe

embedding-shape 2 hours ago | parent [-]

My experience, and that of the people I personally know, has been in Western Europe, South America and Asia, and the programmers I know have an easier time finding new jobs compared to other professions.

Don't get me wrong, it's a lot harder for new developers to enter the industry compared to a decade ago, even in Western Europe, but it's still way easier than the lengths people I know who aren't programmers, or even in tech, have to go to.

IAmBroom an hour ago | parent [-]

That's a quantifiable claim. Using experience to "prove" it is inappropriate.

US data does back it up, though. The tech labor sector outperformed all others in the last 10 years. https://www.bls.gov/emp/tables/employment-by-major-industry-...

kalaksi 4 hours ago | parent | prev | next [-]

> programmers should be some of the most worry-free individuals on this planet, the job is easy, well-paid, not a lot of health drawbacks...

I don't know what kind of work you do but this depends a lot on what kind of projects you work on

embedding-shape 3 hours ago | parent [-]

Across ~10 jobs or so, mostly as an employee of 5-100 person companies, sometimes as a consultant, sometimes as a freelancer, but always with a comfy paycheck compared to any other career, and never as taxing (mentally and physically) as the physical labor I did before I was a programmer, and that some of my peers are still doing.

Of course, there are always exceptions, like programmers who need to hike to volcanoes to set up sensors and whatnot, but generally, programmers have one of the most comfortable jobs on the planet today. If you're a programmer, I think it should come relatively easy to acknowledge this.

SkyeCA an hour ago | parent | next [-]

Comfortable and easy, but satisfying? I don't think so. I've had jobs that were objectively worse that I enjoyed more and that were better for my mental health.

RHSeeger an hour ago | parent | prev | next [-]

> never as taxing (mentally and physically) as the physical labor I did before I was a programmer

I find it... very strange that you think software development is less mentally taxing than physical labor.

kalaksi 3 hours ago | parent | prev | next [-]

Sure, it's mostly comfy and well-paid. But like with physical labor, there are jobs/projects that are easy and not as taxing, and jobs that are harder and more taxing (in this case mentally).

embedding-shape 3 hours ago | parent [-]

Yes, you'll end up in situations where peers/bosses/clients aren't the most pleasant, but compare that to any customer-facing job and you'll quickly be able to shed those moments, as countless people face such situations on a daily basis. You can give it a try: work in a call center for a month, and you'll acquire more stress during that month than in even the worst-managed software project.

kalaksi 3 hours ago | parent [-]

When I was younger, I worked doing sales and customer service at a mall. Mostly approaching people and trying to pitch a product. Didn't pay well, was very easy to get into and do, but I don't enjoy that kind of work (and many people don't enjoy programming and would actually hate it) and it was temporary anyway. I still feel like that was much easier, but more boring.

etrautmann an hour ago | parent | prev [-]

That sounds ideal! I used to be a field roboticist where we would program and deploy robots to Greenland and Antarctica. IMO the fieldwork helped balance the desk work pretty well and was incredibly enjoyable.

syllogism 35 minutes ago | parent | prev | next [-]

Software to date has been a [Jevons good](https://en.wikipedia.org/wiki/Jevons_paradox). Demand for software has been constrained by the cost efficiency and risk of software projects. Productivity improvements in software engineering have resulted in higher demand for software, not less, because each improvement in productivity unblocks more of the backlog of projects that weren't cost effective before.

There's no law of nature that says this has to continue forever, but it's a trend that's been with us since the birth of the industry. You don't need to look at AI tools or methodologies or whatever. We have code reuse! Productivity has obviously improved; it's just that there's also an arms race between software products in UI complexity, features, etc.

If you don't keep improving how efficiently you can ship value, your work will indeed be devalued. It could be that the economics shift such that pretty much all programming work gets paid less, it could be that if you're good and diligent you do even better than before. I don't know.

What I do know is that whichever way the economics shake out, it's morally neutral. It sounds like the author of this post leans into a labor theory of value, and if you buy into that, well... you end up with some pretty confused and contradictory ideas. They position software as a "craft" that's valuable in itself. It's nonsense. People have shit to do and things they want. It's up to us to make ourselves useful. This isn't performance art.

conradfr an hour ago | parent | prev | next [-]

I didn't enter this profession because I love reviewing code though.

elif an hour ago | parent [-]

Then use better software engineering paradigms in how your AI builds projects.

I find that the more I specify about all the stuff I thought was hilariously pedantic hyper-analysis when I was in school, the less I have to interpret.

If you use test-driven, well-encapsulated object-oriented programming in an idiomatic form for your language/framework, all you really end up needing to review is "are these tests really testing everything they should?"
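
A minimal sketch of that review posture (the function and tests are invented for illustration, not from the comment above): the generated implementation is treated as opaque, and the review effort goes into whether the tests pin the behavior down.

    import re

    def slugify(title: str) -> str:
        # pretend this body came from the agent; the reviewer treats it as opaque
        return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

    def test_lowercases_and_hyphenates():
        assert slugify("Hello World") == "hello-world"

    def test_strips_punctuation():
        assert slugify("Hello, World!") == "hello-world"

    # What the reviewer should be hunting for is what is NOT tested here:
    # empty input, unicode titles, runs of separators. The gaps in the
    # suite, not the implementation details, are where review time goes.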

roxolotl 3 hours ago | parent | prev | next [-]

I came here to quote the same quote but with the opposite sentiment. If you look at the history of work, at least in the States, it's a history of almost continual devaluation and automation. I've been assuming that my generation, entering the profession in the 2010s, will be the last for which it's a pathway to an upper middle class life. Just as with the factory workers before us, automation will come for those who do mostly repetitive tasks. Sure, there will be well-paid professional software devs in the future, just as there are some well-paid factory workers who mostly maintain machines. But the scale of the opportunity will be much smaller.

embedding-shape 3 hours ago | parent [-]

But in the end, we didn't end up with fewer factories that do more; we ended up with more factories that do more.

Why wouldn't the same happen here? Instead of these programmers jamming out boilerplate 24/7, why are they unable to improve their skills further and move with the rest of the industry, if that's needed? Just as other professions adapt to how society is shaped, why should programming be an exception?

overflow897 44 minutes ago | parent [-]

And how is the quality of life for those factory workers? It's almost like the craft of making physical things has been devalued even if we're making more physical things than ever.

ewzimm 30 minutes ago | parent | prev | next [-]

What is devalued is traditional labor-based ideology. The blog references Marx's theory of alienation. The Marxist labor theory of value, that the value of anything is determined by the labor that creates it, gives the working class moral authority over the owner class. When labor is reduced, the basis of socialist revolution is devalued, as the working class no longer can claim superior contributions to value creation.

If one doesn't subscribe to traditional Marxist ideology, this argument won't land the same way, but elements of these ideas have made their way into popular ideas of value.

monegator an hour ago | parent | prev | next [-]

> What exactly is being devalued

We are being second-guessed by any sub-organism with a little brain but opposable thumbs, at a rate much greater than before, because now the sub-organism can simply ask the LLM to type their arguments for them. How many times have you received screenshots of an LLM output yes-anding whatever bizarre request you already tried to explain and dismiss as not possible/feasible/necessary? The sub-organism has delegated their thoughts to the LLM, and I always find that extremely infuriating, because all I want to do is to shake that organism and cry "why don't you get it? think! THINK! THINK FOR YOURSELF FOR JUST A SECOND"

Also, I enjoy programming. Even typing boring shit like boilerplate keeps my brain engaged. As I type, I keep thinking: is this really necessary? And maybe I figure out something leaner. LLMs want to deprive me of the enjoyment of my work (research, learning) and of my brain. No thanks, no LLM for me. And I don't care whatever garbage it outputs; I'd much prefer the garbage to be your own output, or you are useless.

The only use i have for LLMs and diffusion models is to entertain myself with stupid bullshit i come up with that i would find funny. I massively enjoy projects such as https://dumbassideas.com/

Note: I'm not taking into account the "classic" ML uses; my rant is aimed only at LLMs and the LLM craze. A tool made by grifters, for grifters.

i_love_retros an hour ago | parent | prev | next [-]

> I'm getting paid more than ever for a job I feel like I almost shouldn't get paid for (I'm just having fun), and programmers should be some of the most worry-free individuals on this planet, the job is easy

Eh?

I'm happy for you (and envious), because that is not my experience. The job is hard. Agile's constant fortnightly deadlines, a complete lack of respect from the rest of the stakeholders for the work developers do (even more so now because "AI can do that"), changing requirements but an expectation to welcome changing requirements because that is agile, incredibly egotistical assholes that seem to gravitate to engineering manager roles, and a job market that's been dead for a few years now.

No doubt some will comment and say that if I think my job is hard I should compare it to a coal miner's in the 1940s. True, but as Neil Young sang: "Though my problems are meaningless, that don't make them go away."

Xenoamorphous 2 hours ago | parent | prev | next [-]

> the US seems to struggle with that specific point as of late, yet it remains true in the rest of the world

Are you sure about that?

embedding-shape 2 hours ago | parent [-]

No, I'm just pulling anecdotes out of my ass/am hallucinating.

Is there something specific you'd like to point me to, besides just replying with a soundbite?

RHSeeger an hour ago | parent | next [-]

Admittedly, there are the responses in this thread from people saying "I'm in <some country that isn't the US> and the market here is bad, too".

IAmBroom an hour ago | parent | prev [-]

Data.

tester756 an hour ago | parent | prev | next [-]

>the job is easy

Software engineering is easy? You live in a bubble. Try teaching programming to someone new to it and you'll realize how much effort it requires.

ulfw 4 hours ago | parent | prev | next [-]

> What are they talking about? What is this "devaluation"? I'm getting paid more than ever for a job I feel like I almost shouldn't get paid for (I'm just having fun)

You do realise your position of luck is not normal, right? This is not how it is for the average techie in 2025.

lopis 4 hours ago | parent | next [-]

Especially for new developers. Entry-level jobs have practically evaporated.

ishouldbework 3 hours ago | parent | prev | next [-]

Well, speaking just for central Europe, it is pretty average. Sure, entry-level positions are a different story, but anyone with at least a few years of work experience can find a reasonably paid job fairly quickly.

IAmBroom an hour ago | parent [-]

Others in Europe in this thread contradict your belief.

Actual data is convincing; few are providing it.

embedding-shape 3 hours ago | parent | prev | next [-]

I don't know what "position of luck" you're talking about; it took dedicated effort to practice programming and suffer through a lot of shit before I got my first comfy programming job.

And even if I'm experienced now, I still have peers and acquaintances who are getting into the industry, I'm not sitting in my office with my eyes closed exactly.

Aeolun 3 hours ago | parent | prev | next [-]

That’s probably because the definition of ‘average techie’ has been on a rapid downward trajectory for years? You can justify the waste when money is free. Not when you need them to do something.

Glemkloksdjf 3 hours ago | parent | prev [-]

Every good 'techie' around me has it good.

Mountain_Skies an hour ago | parent | prev | next [-]

Your complete lack of empathy is going to be your undoing. Might want to check in on that.

amrocha 4 hours ago | parent | prev | next [-]

There’s been over 1 million people laid off in tech in the past 4 years

https://www.trueup.io/layoffs

svantana an hour ago | parent | next [-]

According to that site, there were more tech layoffs in 2022 than in 2024 or 2025. Doesn't that speak against the "AI is taking tech jobs" hypothesis?

empath75 23 minutes ago | parent | prev | next [-]

I have been laid off 4 times. Tech has a lot of churn; there are a lot of high-risk, high-reward companies.

embedding-shape 3 hours ago | parent | prev | next [-]

Again, sucks to be in the US as a programmer today maybe, but this isn't true elsewhere in the world, and especially not if you already have at least some experience.

lm28469 3 hours ago | parent | next [-]

Definitely true in Western Europe, and finding a job is extremely hard for the vast majority of non-expert devs.

Of course, if you're in south-eastern Europe or in South Asia, where all the jobs are being offshored to, you're having the time of your life.

embedding-shape 3 hours ago | parent [-]

> Definitely true in Western Europe, and finding a job is extremely hard for the vast majority of non-expert devs.

I don't know what else to say except that hasn't been my experience personally, nor the experience of my acquaintances who've re-skilled to become programmers these last few years, in Western Europe.

lm28469 3 hours ago | parent | next [-]

Anecdotes are cool but we came up with a neat little thing known as statistics.

https://finance.yahoo.com/news/tech-job-postings-fall-across...

> Among the 27 countries analysed, European nations saw the steepest fall in tech job postings between 1 February 2020 and 31 October 2025,

> In absolute terms, the decline exceeded 40% in Switzerland (-46%) and the UK (-41%), with France (-39%) close behind.

> The United States showed a similar trend, with a decline of 35%. Austria (-34%), Sweden (-32%) and Germany (-30%) were also at comparable levels.

amrocha 3 hours ago | parent | prev [-]

Do you base your entire worldview purely on your own personal experience?

embedding-shape 3 hours ago | parent [-]

Do you suffer from reading comprehension issues?

amrocha 2 hours ago | parent [-]

It’s ok to admit that you were wrong. Your experience is good, but the industry is doing very poorly right now. I showed you data to back that up. Someone else posted data about Europe.

Don’t close your eyes and plug your ears and pretend you didn’t hear anything.

MattRix 3 hours ago | parent | prev [-]

You seem to keep having to add more and more qualifiers to your statements…

nosianu an hour ago | parent [-]

I only see one. "Outside the US" was the starting proposition, then they only added "experienced".

spicyusername 2 hours ago | parent | prev [-]

This has more to do with monetary policy than AI, though.

mschuster91 3 hours ago | parent | prev | next [-]

> What are they talking about? What is this "devaluation"?

I'm not paid enough to clean up shit after an AI. Behind an intern or junior? Sure, I enjoy that because I can tell them how shit works, where they went off the rails, and I can be sure they will not repeat that mistake and be better programmers afterwards.

But an AI? Oh good luck with that and good luck dealing with the "updates" that get forced upon you. Fuck all of that, I'm out.

flir 3 hours ago | parent [-]

> I'm not paid enough to clean up shit after an AI.

I enjoy making things work better. I'm lucky in that, because there's always been more brownfield work than greenfield work. I think of it as being an editor, not an author.

Hacking into vibe code with a machete is kinda fun.

queenkjuul 2 hours ago | parent | prev | next [-]

You clearly haven't tried looking for a job in the last two years, have you?

GlacierFox 4 hours ago | parent | prev | next [-]

Are we living on the same planet?

embedding-shape 3 hours ago | parent | next [-]

Considering we surely have wildly different experiences and contexts, you could almost say we live on the same planet, but it looks very different to each and every one of us :)

belter 3 hours ago | parent | prev [-]

No... :-)

zombot 3 hours ago | parent | prev [-]

I'd love to live on the same planet you do.

embedding-shape 3 hours ago | parent [-]

Gaining life experience is always possible, regardless of your age :) Give other professions a try, and see the difference for yourself.

markbnj 40 minutes ago | parent | prev | next [-]

I think the dangers that LLMs pose to the ability of engineers to earn a living are overstated, while at the same time the superpowers that they hand us don't seem to get much discussion. When I was starting out in the '80s I had to prowl dial-up BBSs or order expensive books and manuals to find out how to do something. I once paid IBM $140 for a manual on the VGA interface so I could answer a question. The turnaround time on that answer was a week or two. The other day I asked Claude something similar to this: "when using github as an OIDC provider for authentication and assumption of an AWS IAM role the JWT token presented during role assumption may have a "context" field. Please list the possible values of this field and the repository events associated with them." I got back a multi-page answer complete with examples.

I'm sure github has documents out there somewhere that explain this, but typing that prompt took me two minutes. I'm able daily to get fast answers to complex questions that in years past would have taken me potentially hours of research. Most of the time these answers are correct, and when they are wrong it still takes less time to generate the correct answer than all that research would have taken before. So I guess my advice is: if you're starting out in this business worry less about LLMs replacing you and more about how to efficiently use that global expert on everything that is sitting on your shoulder. And also realize that code, and the ability to write working code, is a small part of what we do every day.

hvb2 a minute ago | parent [-]

> So I guess my advice is: if you're starting out in this business worry less about LLMs replacing you and more about how to efficiently use that global expert on everything that is sitting on your shoulder.

There's a real danger in that they use so many resources though. Both in the physical world (electricity, raw materials, water etc.) as well as in a financial sense.

All the money spent on AI will not go to your other promising idea. There's a real opportunity cost there. It's easy to imagine that, at this point, good ideas go without funding because they're not AI.

shlip 4 hours ago | parent | prev | next [-]

> AI systems exist to reinforce and strengthen existing structures of power and violence.

Exactly. You can see that with the proliferation of chickenized reverse centaurs[1] in all kinds of jobs. Getting rid of the free-willed human in the loop is the aim now that bosses/stakeholders have seen the light.

[1] https://pluralistic.net/2022/04/17/revenge-of-the-chickenize...

Glemkloksdjf 3 hours ago | parent | next [-]

If you are a software engineer, you can leverage AI a lot better to write code than anyone else can.

Good code is still complicated to get right.

Which means: 1. if software development is really solved, everyone else also gets a huge problem (CEOs, CTOs, accountants, designers, etc.), so we are at the back of the AI doomsday line.

And 2. it allows YOU to leverage AI a lot better, which can enable you to create your own product.

In my startup, we leverage AI, and we are not worried that another company will just do the same thing, because even if they do, we know how to write good code and architecture and we are also using AI. So we will always be ahead.

countWSS 4 hours ago | parent | prev | next [-]

Sounds like Manna control system: https://marshallbrain.com/manna

flir 4 hours ago | parent | prev | next [-]

Now apply that thinking to computers. Or levers.

I've seen the argument more than once that computers let us prop up and even scale governmental systems that would have long since collapsed under their own weight if they'd remained manual. I'm not sure I buy it, but computation undoubtedly shapes society.

The author does seem quite keen on computers, but they've been "getting rid of the free-willed human in the loop" for decades. I think there might be some unexamined bias here.

I'm not even saying the core argument's wrong, exactly - clearly, tools build systems ("...and systems kill" - Crass). I guess I'm saying tools are value neutral. Guns don't kill people. So this argument against LLMs is an argument against all tools, unless you can explain how LLMs are a unique category of tool?

(Aside: calling out the lever sounds silly, but I think it's actually a great example. You can't do monumental architecture without levers, and the point in history where we start doing that is also the point where serious surplus extraction kicks in. I don't think that's coincidence).

prmph 3 hours ago | parent | next [-]

Tools are not value neutral in any way.

In my third-world country, motorbikes, scooters, etc. have exploded in popularity and use in the past decade. Many people riding these things have made the roads much more dangerous for all, but particularly for themselves. They keep dying by the hundreds per month, not just because they choose to ride them at all, but because of how they ride them: on busy high-speed highways, weaving between lanes all the time, swerving in front of speeding cars, with barely any protective equipment whatsoever. A car crash is frequently very survivable; a motorcycle crash, not so much. Even if you survive the initial collision, the probability of another vehicle running you over is very high on a busy highway.

One would think, given the clear evidence for how dangerous these things are: why do people (1) ride them at all on the highway, and (2) in such a dangerous manner? One might excuse (1) by recognizing that many are poor and can't buy a car, and the motorbikes represent economic possibility: for use in a courier business, of being able to work much further from home, etc.

But here is the thing about (2): a motorbike wants to be ridden that way. No matter how well the rider recognizes the danger, only so much time can pass before the sheer expediency of riding that way overrides any sense of due caution. Where it would be safer to stop or keep to a fixed lane without any sudden movements, the rider thinks of the inconvenience of stopping, does a quick mental comparison to the (in their mind) minuscule additional risk, and carries on. Stopping or keeping to a proper lane in a car requires far less discipline than doing that on a motorbike.

So this is what people mean when they say tech is not value neutral. The tech can theoretically be used in many ways. But some forms of use are so aligned with the form of the tech that in practice it shapes behavior.

flir 3 hours ago | parent [-]

> A motorbike wants to be ridden that way

That's a lovely example. But is the dangerous thing the bike, or the infrastructure, or the system that means you're late for work?

I completely get what you're saying. I was thinking of tools in the narrowest possible way - of the tool in isolation (I could use this gun as a doorstop). You're thinking of the tool's interface with its environment (in the real world nobody uses guns as doorstops). I can't deny that's the more useful way to think about tools ("computation undoubtedly shapes society").

idle_zealot 3 hours ago | parent | prev | next [-]

> The author does seem quite keen on computers, but they've been "getting rid of the free-willed human in the loop" for decades. I think there might be some unexamined bias here.

Certainly it's biased. I'm not the author, but to me there's a huge difference between computer/software as a tool, designed and planned, with known deterministic behavior/functionality, then put in the hands of humans, vs automating agency. The former I see as a pretty straightforward expansion of humanity's long-standing relationship with tools, from simple sticks to hand axes to chainsaws. The sort of automation AI-hype seems focused on doesn't have a great parallel in history. We're talking about building a statistical system to replace the human wielding the tool, mostly so that companies don't have to worry about hiring employees. Even if the machine does a terrible job and most of humanity, former workers and current users, all suffer, the bet is that it will be worth the cost savings.

ML is very cool technology, and clearly one of the major frontiers of human progress. At this stage though, I wish the effort on the packaging side was being spent on wrapping the technology in the form of reliable capabilities for humans to call on. Stuff like OCR at the OS level or "separate tracks" buttons in audio editors. The market has decided instead that the majority of our collective effort should go towards automated liability-sinks and replacing jobs with automation that doesn't work reliably.

And the end state doesn't even make sense. If all this capital investment does achieve breakthroughs and create true AGI, do investors really think they'll see returns? They'll have destroyed the entire concept of an economy. The only way to leverage power at that point would be to try to exercise control over a robot army or something similarly sci-fi and ridiculous.

amrocha 3 hours ago | parent | prev [-]

It’s a good thing that there’s centuries of philosophy on that subject and the general consensus is that no, tools are not “neutral” and do shape the systems they interact with, sometimes against the will of those wielding these tools.

See the nuclear bomb for an example.

flir 3 hours ago | parent [-]

I'm actually thinking of Marshall McLuhan. Maybe you're right, and tools aren't neutral. Does this mean that computation necessitates inequality? That's an uncomfortable conclusion for people who identify as hackers.

Aeolun 3 hours ago | parent | prev | next [-]

How is that different from making manual computation obsolete with the help of excel?

fennecfoxy 3 hours ago | parent | prev | next [-]

Lmao Cory Doctorow. Desperately trying to coin another catchphrase again.

lynx97 4 hours ago | parent | prev [-]

I am surprised (and also kind of not) to see this kind of tech hate on HN of all places.

Would you prefer we heat our homes by burning wood, carry water from the nearby spring, and ride horses to visit relatives?

Progress is progress, and has always changed things. It's funny that apparently "progressive" left-leaning people are actually so conservative at the core.

So far, in my book, the advancements in the last 100 or even more years have mostly always brought us things I wouldn't want to miss these days. But maybe some people would be happier to go back to the dark ages...

seu 3 hours ago | parent | next [-]

> Progress is progress, and has always changed things. It's funny that apparently "progressive" left-leaning people are actually so conservative at the core.

I am surprised (and also kind of not) to see this lack of critical reflection on HN of all places.

Saying "progress is progress" serves nobody, except those who drive "progress" in directions that benefits them. All you do by saying "has always changed things" is taking "change" at face value, assuming it's something completely out of your control, and to be accepted without any questioning it's source, it's ways or its effects.

> So far, in my book, the advancements in the last 100 or even more years have mostly always brought us things I wouldn't want to miss these days. But maybe some people would be happier to go back to the dark ages...

Amazing depiction of extremes as the only possible outcomes. Either take everything that is thrown at us, or go back into a supposed "dark age" (which, BTW, is nowadays understood to not have been that "dark" at all). This, again, doesn't help us have a proper discussion about the effects of technology and how it comes to be the way it is.

Glemkloksdjf 3 hours ago | parent [-]

The dark age was dark. Human rights, women's rights, hunger, thirst, no progress at all, hard lives.

So are you able, realistically, to stop progress across a whole planet? Tbh, getting alignment across the planet to slow down or stop AI would be the equivalent of stopping capitalism and actually building a holistic planet for us.

I think AI will force the hand of capitalism, but I don't think we will be able to create a Star Trek universe without being forced.

trashb 33 minutes ago | parent [-]

> The dark age was dark. Human rights, women's rights, hunger, thirst, no progress at all, hard lives.

There was progress in the Middle Ages, hence the difference between the early and late Middle Ages. Most information was passed by word of mouth instead of written down.

"The term employs traditional light-versus-darkness imagery to contrast the era's supposed darkness (ignorance and error) with earlier and later periods of light (knowledge and understanding)."

"Others, however, have used the term to denote the relative scarcity of written records regarding at least the early part of the Middle Ages"

https://en.wikipedia.org/wiki/Dark_Ages_(historiography)

lm28469 3 hours ago | parent | prev | next [-]

> Would you prefer we heat our homes by burning wood, carry water from the nearby spring, and ride horses to visit relatives?

I'm more surprised that seemingly educated people have such simplistic views as "technology = progress, progress = good, hence technology = good". Vaccines and running water are tech; megacorp-owned "AI" being weaponised by surveillance-obsessed governments is also tech.

If you don't push back on "tech" you're just blindly accepting whatever someone else decided for you. Keep in mind that the benefits of tech since the 80s have mostly been pocketed by the top 10%; the plebs still work as much, retire as old, &c., despite what politicians and technophiles have been saying.

andrepd 3 hours ago | parent | prev [-]

"You don't like $instance_of_X? You must want to get rid of all $X" has got to be one of the most intellectually lazy things you could say.

You don't like leaded gasoline? You must want us to walk everywhere. Come on...

lynx97 3 hours ago | parent [-]

A tool is a tool. These AI critics sound to me like people who have hit their finger with a hammer, and now advocate against using them altogether. Yes, tech has always had two sides. Our "job" as humans is to pick the good parts, and avoid the bad. Nothing new, nothing exceptional.

lm28469 3 hours ago | parent | next [-]

> A tool is a tool. These AI critics sound to me like people who have hit their finger with a hammer, and now advocate against using them altogether.

Speaking of wonky analogies, have you considered that other people have access to these hammers and are aiming for your head? And that some people might not want to be hit on the head by a hammer?

andrepd 2 hours ago | parent | prev [-]

More lazy analogies... Yes a hammer is a tool, so is a machine gun, a nuke, or the guy with his killdozer. So what are you gonna do? Nothing to see here, discussion closed.

This is not an interesting conversation.

lxgr an hour ago | parent | prev | next [-]

> I find it particularly disillusioning to realize how deep the LLM brainworm is able to eat itself even into progressive hacker circles.

Anything worth reading beyond this transparent and hopefully unsuccessful appeal to tribalism?

Hackers have always tried out new technologies to see how they work – or break – so why would LLMs be any different?

> the devaluation of our craft, in a way and rate we never anticipated possible. A fate that designers, writers, translators, tailors or book-binders lived through before us

What is it with this perceived right to fulfilling, but also highly paid, employment in software engineering?

Nobody is stopping anyone from doing things by hand that machines can do at 10 times the quality and 100 times the speed.

Some people will even pay for it, but not many. Much will be relegated to unpaid pastime activities, and the associated craftspeople will move on to other activities to pay the bills (unless we achieve post-scarcity first). That's just human progress in a nutshell.

If the underlying problem is that many societies define a person's worth via their employability, that seems like a problem best fixed by restructuring said societies, not by artificially blocking technological progress. "progressive hackers"...

ceejayoz 34 minutes ago | parent [-]

> Hackers have always tried out new technologies to see how they work – or break – so why would LLMs be any different?

Who says we haven't tried it out?

lxgr 32 minutes ago | parent [-]

Seems like various hackers came to various different conclusions from trying them out, then. Why feign surprise about this?

ceejayoz 5 minutes ago | parent [-]

Why not?

I was surprised how hard many here fell for the NFT thing, too.

abbadadda 4 hours ago | parent | prev | next [-]

I really enjoyed how your words made me _feel._ They encouraged me to "keep fighting the good fight" when it comes to avoiding social media, et al.

I do vibe code occasionally - Claude did a decent job with Terraform and SaltStack recently - but the words ring true in my head about how AI weakens my thinking, especially when it comes to Python or any programming language. Tread carefully indeed. And reading a book does help - I've been tearing through the Dune books after putting them off too long at my brother's recommendation. Very interesting reflections in those books on power/human nature that may apply in some ways to our current predicament.

At any rate, thank you for the thoughtful & eloquent words of caution.

scld 17 minutes ago | parent [-]

Doesn't Python weaken your thinking about how computers actually work?

rho4 4 hours ago | parent | prev | next [-]

And then there is the moderate position: Don't be the person refusing to use a calculator / PC / mobile phone / AI. Regularly give the new tool a chance and check if improvements are useful for specific tasks. And carry on with your life.

rsynnott 2 hours ago | parent | next [-]

Don't be the person refusing the 4GL/Segway/3D TV/NFT/Metaverse. Regularly give the new tool a chance and check if improvements are useful for specific tasks.

Like, I mean, at a certain point it runs out of chances. If someone can show me compelling quantitative evidence that these things are broadly useful I may reconsider, but until then I see no particular reason to do my own sampling. If and when they are useful, there will be _evidence_ of that.

(In fairness Segways seem to have a weird afterlife in certain cities helping to make tourists more annoying; there are sometimes niche uses for even the most pointless tech fads.)

cons0le an hour ago | parent | next [-]

I detest LLMs, but I want to point out that Segway tech became the basis for EUCs, which are based: https://youtu.be/Ze6HRKt3bCA?t=1117

These things are wicked, and unlike some new garbage JavaScript framework, it's revolutionary technology that regular people can actually use and benefit from. The mobility they provide is insane.

https://old.reddit.com/r/ElectricUnicycle/comments/1ddd9c1/i...

catapart 27 minutes ago | parent [-]

lol! I thought this was going to link to some kind of innovative mobility scooter or something. I was still going to say "oh, good; when someone uses the good parts of AI to build something different which is actually useful, I'll be all ears!", because that's all you would really have been advocating for if that was your example.

But - even funnier - the thing is an urbanist tech-bro toy? My days of diminishing the segway's value are certainly coming to a middle.

Spivak an hour ago | parent | prev [-]

I mean sure but none of these even claimed to help you do things you were already doing. If your job is writing code none of these help you do that.

That being said, the metaverse happened, but it just wasn't the metaverse those weird cringy tech libertarians wanted it to be. Online spaces where people hang out are bigger than ever. Segways also happened; they just changed form to electric scooters.

catapart 23 minutes ago | parent [-]

Being honest, I don't know what a 4GL is. But the rest of them absolutely DID claim to help me do things I was already doing. And, actually, NFTs and the Metaverse even specifically claimed to be able to help with coding in various different flavors. It was mostly superficial bullshit, but... that's kind of the whole tech for those two things.

In any case, Segways promised to be a revolution in how people travel - something I was already doing and something that the marketing was predicated on. 3D TVs - a "better" way to watch TV, which I had already been doing. NFTs - (among other things) a financially superior way to bank, which I had already been doing. The Metaverse - a more meaningful way to interact with my team on the internet, which I had already been doing.

empath75 19 minutes ago | parent | prev | next [-]

The biggest change in my career was when I got promoted to be a Linux sysadmin at a large tech company that was moving to AWS. It was my first sysadmin job and I barely knew what I was doing, but I knew some bash and Python. I had a chance to learn how to manage stuff in data centers by logging into servers with ssh and running Perl scripts, or I could learn CloudFormation because that was what management wanted. Everybody else on my team thought AWS was a fad and refused to touch it unless absolutely forced to. I wrote a ton of terrible CloudFormation and Chef cookbooks and got promoted twice, and my salary went from $50,000 a year to $150,000 a year in 3 years after I took a job elsewhere. AFAIK, most of the people on that team got laid off when that whole team was eliminated a few years after I left.

skydhash 3 hours ago | parent | prev | next [-]

If a calculator gives me 5 when I do 2+2, I throw it away.

If a PC crashes when I use more than 20% of its soldered memory, I throw it away.

If a mobile phone refuses to connect to a cellular tower, I get another one.

What I want from my tools is reliability. Which is a spectrum, but LLMs are very much on the lower end.

crazygringo an hour ago | parent | next [-]

Honestly, LLMs are about as reliable as the rest of my tools are.

Just yesterday, AirDrop wouldn't work until I restarted my Mac. Google Drive wouldn't sync properly until I restarted it. And a bug in Screen Sharing file transfer used up 20 GB of RAM to transfer a 40 GB file, which used swap space so my hard drive ran out of space.

My regular software breaks constantly. All the time. It's a rare day where everything works as it should.

LLMs have certainly gotten to the point where they seem about as reliable as the rest of the tools I use. I've never seen it say 2+2=5. I'm not going to use it for complicated arithmetic, but that's not what it's for. I'm also not going to ask my calculator to write code for me.

tokioyoyo 3 hours ago | parent | prev | next [-]

You can have this position, but the reality is that the industry is accepting it and moving forward. Whether you’ll embrace some of it and utilize it to improve your workflow, is up to you. But over-exaggerating the problem to this point is kinda funny.

candiddevmike an hour ago | parent | prev | next [-]

Sorry you're being downvoted even though you're 100% correct. There are use cases where the poor LLM reliability is as good or better than the alternatives (like search/summarization), but arguing over whether LLMs are reliable is silly. And if you need reliability (or even consistency, maybe) for your use case, LLMs are not the right tool.

fennecfoxy 3 hours ago | parent | prev | next [-]

Except it's more a case of "my phone won't teleport me to Hawaii sad faec lemme throw it out" than anything else.

There are plenty of people manufacturing their expectations around the capabilities of LLMs inside their heads for some reason. Sure there's marketing; but for individuals susceptible to marketing without engaging some neurons and fact checking, there's already not much hope.

Imagine refusing to drive a car in the 60s because they haven't reach 1kbhp yet. Ahaha.

skydhash 2 hours ago | parent [-]

> Imagine refusing to drive a car in the 60s because they haven't reach 1kbhp yet. Ahaha.

That’s very much a false analogy. In the 60s, cars were very reliable (not as much as today’s cars) but it was already an established transportation vehicle. 60s cars are much closer to todays cars than 2000s computers are to current ones.

embedding-shape 3 hours ago | parent | prev [-]

> What I want from my tools is reliability. Which is a spectrum, but LLMs are very much on the lower end.

"reliability" can mean multiple things though. LLM invocations are as reliable (granted you know how program properly) as any other software invocation, if you're seeing crashes you're doing something wrong.

But what you're really talking about is "correctness" I think, in the actual text that's been responded with. And if you're expecting/waiting for that to be 100% "accurate" every time, then yeah, that's not a use case for LLMs, and I don't think anyone is arguing for jamming LLMs in there even today.

Where LLMs are useful is where there is no 100% "right or wrong" answer: think summarization, categorization, tagging and so on.
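
As a minimal sketch of that kind of use (assuming the OpenAI Python client; the model name and labels here are placeholders I made up):

  from openai import OpenAI

  client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

  def categorize(text: str, labels: list[str]) -> str:
      # Ask the model to pick one label; tolerate imperfect answers.
      resp = client.chat.completions.create(
          model="gpt-4o-mini",  # placeholder model name
          messages=[
              {"role": "system",
               "content": "Reply with exactly one of: " + ", ".join(labels)},
              {"role": "user", "content": text},
          ],
      )
      answer = (resp.choices[0].message.content or "").strip()
      # The caller decides what "good enough" means; we only guard drift.
      return answer if answer in labels else "unknown"

  print(categorize("The parcel never arrived.", ["billing", "shipping", "other"]))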

skydhash 2 hours ago | parent [-]

I’m not a native English speaker so I checked on the definition of reliability

  the quality of being able to be trusted or believed because of working or behaving well

For a tool, I expect "well" to mean that it does what it's supposed to do. My linter is reliable when it catches the bad patterns I want it to catch. My editor is reliable when I can edit code with it and the commands do what they're supposed to do.

So for generating text, LLMs are very reliable. And they do a decent job at categorizing too. But code is formal language, which means correctness is the end goal. A program may be valid and incorrect at the same time.

It’s very easy to write valid code. You only need the grammar of the language. Writing correct code is another matter and the only one that is relevant. No one hire people for knowing a language grammar and verifying syntax. They hire people to produce correct code (and because few businesses actually want to formally verify it, they hire people that can write code with a minimal amount of bugs and able to eliminate those bugs when they surface).

cpburns2009 an hour ago | parent [-]

I'm a native English speaker. Your understanding and usage of the word "reliability" is correct, and that's the exact word I'd use in this conversation. The GP is playing a pointless semantics game.

RicoElectrico 3 hours ago | parent | prev | next [-]

You're preaching to the wrong crowd I guess. Many people here think in extremes.

0xEF 3 hours ago | parent | prev [-]

I was once in your camp, thinking there was some sort of middle ground to be had with the emergence of Generative AI and its potential as a useful tool to help me do more work in less time, but I suppose the folks who opposed automated industrial machinery back in the day did the same.

The problem is that, historically speaking, you have two choices:

1. Resist as long as you can, risking being labeled a Luddite or whatever.

2. Acquiesce.

Choice 1 is fraught with difficulty, like a dinosaur struggling to breathe as an asteroid came and changed the atmosphere it had developed lungs to use. Choice 2 is a relinquishment of agency, handing over control of the future to the ones pulling the levers on the machine. I suppose there is a rare Choice 3 that only the elite few are able to pick, which is to accelerate the change.

My increased cynicism about technology was not something that I started out with. Growing up as a teen in the late '80s/early '90s, computers were hotly debated as being either a fad that would die out in a few years or something that was going to revolutionize the way we worked and give us more free time to enjoy life. That never happened, obviously. Sure, we get more work done in less time, but most of us still work until we are too broken to continue, and we didn't really gain anything by acquiescing. We could have lived just fine without smartphones or laptops (we did, I remember) and all the invasive things they brought with them, such as surveillance, brain-hacking advertising and dopamine burnout. The massive structures that came out of all the money and genius that went into our tech became the megacorporations that people like William Gibson and others warned us of, exerting a level of control over us that turned us all into batteries for their toys, discarded and replaced as we are used up. It's a little frightening to me, knowing how hyperbolic that used to sound 30 years ago, and yet, here we stand.

Generative AI threatens so much more than just altering the way we work, though. In some cases, its use in tasks might even be welcomed. I've played with Claude Code, every generative model that Poe.com has access to, DeepSeek, ChatGPT, etc...they're all quite fascinating, especially when viewed as I view them; a dark mirror reflecting our own vastly misunderstood minds back to us. But it's a weird place to be in when you start seeing them replace musicians, artists, writers...all things that humanity has developed over many thousands of years as forms of existential expression, individuality, and humanness because there is no question that we feel quite alone in our experience of consciousness. Perhaps that is why we are trying to build a companion.

To me, the dangers are far too clear and present to take any sort of moderate position, which is why I decided to stop participating in its proliferation. We risk losing something that makes us us by handing off our creativity and thinking to this thing that has no cognizance or comprehension of its own existence. We are not ready for AI, and AI is not ready for us, but as the Accelerationists and Broligarchs continue to inject it into literally every bit of tech they can, we have to make a choice; resist or capitulate.

At my age, I'm a bit tired of capitulating, because it seems every time we hand the reins over to someone who says they know what they are doing, they fuck it up royally for the rest of us.

zajio1am an hour ago | parent | prev | next [-]

> We programmers are currently living through the devaluation of our craft.

Valuation is fundamentally connected to scarcity. 'Devaluation' is just negative spin for making something plentiful.

When circumstances change to make something less scarce, one cannot expect to get the same value for it on the basis of past valuation. That is just rent-seeking.

ErroneousBosh 2 hours ago | parent | prev | next [-]

I recently had to write a simple web app to search through a database, but full-text searching wasn't quite cutting it. The underlying data was too inconsistent and the kind of things people would ask for would mean searching across five or six columns.

Just the job for an AI agent!

So what I did is this - I wrote the app in Django, because it's what I'm familiar with.

Then in the view for the search page, I picked apart the search terms. If they start with "01" it's an old phone number so look in that column, if they start with "03" it's a new phone number so look in that column, if they start with "07" it's a mobile, if it's a letter followed by two digits it's a site code, if it's numeric but doesn't have a 0 at the start it's an internal number, and if it doesn't match anything then see if it exists as a substring in the description column.
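
Roughly, the dispatch looks like the sketch below (the Entry model and its field names are invented for illustration, not the real schema):

  import re

  from django.db.models import Q
  from django.shortcuts import render

  from .models import Entry  # hypothetical model with the columns below

  def search(request):
      term = request.GET.get("q", "").strip()
      if term.startswith("01"):          # old phone number
          q = Q(old_phone=term)
      elif term.startswith("03"):        # new phone number
          q = Q(new_phone=term)
      elif term.startswith("07"):        # mobile
          q = Q(mobile=term)
      elif re.fullmatch(r"[A-Za-z]\d{2}", term):  # site code, e.g. "B12"
          q = Q(site_code__iexact=term)
      elif term.isdigit() and not term.startswith("0"):  # internal number
          q = Q(internal_number=term)
      else:                              # fall back to substring search
          q = Q(description__icontains=term)
      return render(request, "search.html", {"results": Entry.objects.filter(q)})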

There we go. Very fast and natural searching that Does What You Mean (mostly).

No Artificial Intelligence.

All done with Organic Home-grown Brute Force and Ignorance.

Because that's sometimes just what you need.

disambiguation 25 minutes ago | parent | prev | next [-]

I'm under the impression that AI is still negative ROI. Creating absolute value is different from creating value greater than the cost. A tool is a tool, but could you continue performing professionally if it was suddenly no longer available?

ojr an hour ago | parent | prev | next [-]

I always thought years of experience in a language was a silly job requirement. LLMs allow me to write Rust code as a total Rust beginner and to create a valuable SaaS, while most experienced Rust developers have never built anything that made $1 outside of their work. I wouldn't say devaluation; my programming experience definitely helps with debugging. LLMs eliminate boilerplate, not engineering judgement and product decisions.

drudolph914 20 minutes ago | parent | next [-]

I think when the author says

> “We programmers are currently living through the devaluation of our craft”

my interpretation of what the author means by devaluation is the general trend that we’re seeing in LLMs

The theory that I hear from investors is that as LLMs generally improve, there will come a day when an LLM's default code output, coupled with continued hardware speedups, becomes _good enough_ for the majority of companies - even if the code looks like crap and is 100x slower than it needs to be.

This doesn't mean there won't be a few companies that still need SWEs to drop down and do engineering, but tbh, the majority of companies today just need a basic web app - and we've commoditized web app dev tools to oblivion. I'd even go as far as to argue that what most programmers do today isn't engineering; it's gluing together an ecosystem of tooling and/or APIs.

Real engineering seems to happen outside of work on open source projects, at the Mag 7 on specialized teams, or at niche, deeply technical startups.

EDIT: I’m not saying this is good or bad, but I’m just making the observation that there is a trend towards devaluing this work in the economy for the majority of people, and I generally empathize with people who just want stability and to raise a family within reasonable means

mountainriver 44 minutes ago | parent | prev [-]

I really love LLMs for Rust. Before them I was an intermediate Rust dev, and only used it in specific circumstances where the extra coding overhead paid off.

Now I write just about everything in Rust because why not? If I can vibe code Rust about as fast as Python, why would I ever use Python outside of ML?

0xFEE1DEAD an hour ago | parent | prev | next [-]

I honestly don't get vibe coding.

I've tried it multiple times, but even after spending 4 hours on a fresh project I don't feel like I know what the hell is going on anymore.

At that point I'm just guessing what the next prompt is to make it work. I have no critical knowledge about the codebase that makes me feel like I could fix an edge case without reading the source code line by line (which at that point would probably take longer than 4 hours).

I don't understand how anyone can work like that and have confidence in their code.

gavmor an hour ago | parent [-]

> I have no critical knowledge about the codebase that makes me feel like I could fix an edge case without reading the source code line by line (which at that point would probably take longer than 4 hours).

Peter Naur argues that programming is fundamentally an activity of theory building, not just program text production. The code itself is merely the artifact of the real work.

You must not confuse the artifact (the source code) with the mind that produced the artifact. The theory is not contained in the text output of the theory-making process.

The problems of program modification arise from acting on the assumption that programming is just text production; the decay of a program is a result of modifications made by programmers without a proper grasp of the underlying theory. LLMs cannot obtain Naur's Rylean "theory" because they "ingest the output of work" rather than developing the theory by doing the work.

LLMs may _appear_ to have a theory about a program, but this is an illusion.

To believe that LLMs can write software, one must mistakenly assume that the main activity of the programmer is simply to produce source code, which is (according to Naur) inaccurate.

64718283661 19 minutes ago | parent | prev | next [-]

The answer is I will kill myself when I become replaced by LLMs entirely.

kleiba 34 minutes ago | parent | prev | next [-]

Damn, I knew I shouldn't have read on after "falafel sandwich"...

hartator 4 hours ago | parent | prev | next [-]

The main thing is everyone seems to hate reading someone else's ChatGPT output, while we are still eager to share ours with others as if it's some sort of oracle.

nachox999 an hour ago | parent | prev | next [-]

It's up to you to use candles instead of lightbulbs

nullbyte808 4 hours ago | parent | prev | next [-]

As a crappy programmer I love AI! Right now I'm focusing on building up my Math knowledge, general CS knowledge and ML knowledge. In the future, knowing how to read code and understanding it may be more important than writing it.

I think it's amazing what giant vector matrices can do with a little code.

skydhash 3 hours ago | parent [-]

The thing about reading code and understanding it is logical reasoning, which you can do by knowing the semantics of each token. But the semantics are not universal. You have the Turing machine, the lambda calculus, Horn clauses, etc. Then there are more abstractions (and new semantics) built on top of those.

Writing code is very easy if you know the solution and the semantics of the coding platform. But knowing the solution is a difficult task, even in a business setting where the difficulties are mostly communication issues. Knowing the semantics of the coding platform is also difficult, because you'll probably be using others' code and you'll face the same communication issues (lack of documentation, erroneous documentation, etc.).

So being good at programming does not really mean knowing code. It's more about knowing how to bypass communication barriers to get the knowledge you need.

cwiz an hour ago | parent | prev | next [-]

Well, maybe adopt an outlook that things you think are real aren't, and just maybe it will work just as fine if you completely ignore them. Going forward, ignoring AI that is smarter than autocomplete may be just the way to go.

zkmon 3 hours ago | parent | prev | next [-]

So, you want to rebel and stay an organic-minded human? But then what exactly is "being a human"?

The biological senses and abilities were constantly augmented throughout the centuries, pushing the organic human to hide inside ever deeper layers of what you call yourself.

What's yourself without your material possessions and social connections? There is no such thing as yourself without these.

Now let's wind back. Why resist just one more layer of augmentation of our senses, mind and physical abilities?

zero-st4rs 3 hours ago | parent [-]

> What's yourself without your material possessions and social connections? There is no such thing as yourself without these.

Perhaps a being that has the capacity for intention and will?

zkmon 2 hours ago | parent [-]

The capacity for intention and will was already driven by augmentations, namely knowledge and reasoning. Knowledge was sourced externally, and reasoning was developed from externally recorded memory of the past. Even instincts get updated by experiences and knowledge.

zero-st4rs an hour ago | parent [-]

I'm not sure if you wrote this with AI, but could you provide examples?

Knowledge is shaped by constraints, which inform intention; it doesn't "drive" it.

"I want to fly, I intend to fly, I learn how to achieve this by making a plane."

not

"I have plane making knowledge therefore I want and intend to fly"

However, I totally understand that constraints often create a feedback loop where reasoning is reduced to the limitations which confine it.

My Mom has no idea that "her computer" != "windows + hp + etc", and if you were to ask her how to use a computer, she would be intellectually confined to a particular ecosystem.

I argue the same is true for capitalism/dominant culture. If you can't "see" the surface of the thing that is shaping your choices, chances are your capacity for "will" is hindered and constrained.

Going back to this.

> What's yourself without your material possessions and social connections? There is no such thing as yourself without these.

I don't think my very ability to make choices comes from owning stuff and knowing people.

Separo 4 hours ago | parent | prev | next [-]

If, as the author suggests, AI is inherently designed to further concentrate control and capital, that may be so, but that is also the aim of every business.

noobcoder 3 hours ago | parent | prev | next [-]

I see this play out everywhere, actually, be it code, thoughts, even intent, atomized for the capital engine. It's more than a productivity hack; it's a subtle power shift: decisions getting abstracted, agency getting diluted.

Opting in to weirdness and curiosity is the only bug worth keeping, one which will eventually become the norm.

arowthway 3 hours ago | parent | prev | next [-]

I like the "what’s left" part of the article. It’s applicable regardless of your preferred flavor of resentment about where things are going.

mazone 4 hours ago | parent | prev | next [-]

Does the author feel the same way about running the models locally?

JimmaDaRustla an hour ago | parent | prev | next [-]

We don't care that you don't care

jillesvangurp a few seconds ago | parent [-]

Harsh but fair. In short, some people are upset about change happening to them. They think it's unfair and that they deserve better. Maybe that's true. But unfair things happen to lots of people all the time. And ultimately people move on, mostly. There's a futility to being very emotional about it.

I don't get all the whining of people about having to adapt. That's a constant in our industry and always has been. If what you were doing was so easy that it fell victim to the first generation of AI tools that are doing a decent enough job of it, then maybe what you were doing was a bit Groundhog Day to begin with. I've certainly been involved with a lot of projects where a lot of the work felt that way. Customer wants a web app thing with a log-in flow and a this and a that. 99% of that stuff is kind of very predictable. That's why agentic coding tools are so good at this stuff. But let's be honest, it was kind of low-value stuff to begin with. And it's nice that people overpaid for that for a while, but it was never going to be forever.

There's still plenty of stuff these tools are less good at. It gets progressively harder if you are integrating lots of different niche things or doing some non standard/non trivial things. And even those things where it does a decent job, it still requires good judgment and expertise to 1) be able to even ask for the right thing and then 2) judge if what comes back is fit for purpose.

There's plenty of work out there supporting companies with decades of legacy software that are not going to be throwing away everything they have overnight. Leveling up their UIs with AI powered features, cross integrating a lot of stuff, etc. is going to generate lots of work and business. And most companies are very poorly equipped to do that in house even if they have access to agentic coding tools.

For me AI is actually generating more work, not less. I'm now taking on bigger things that were previously impossible to take on without involving more people. I have about 10x more things I want to do than I have bandwidth for. I have to take decisions about doing things the stupid old way because it's better/faster or attempting to generate some code. All new tools do is accelerate the pace and raise the ambition levels. That too is nothing new in our industry. Things that were hard are now easy, so we do more of them and find yet harder things to do next. We're not about to run out of hard things to do any time soon.

Adapting is hard. Not everyone will manage. Some people might burn out doing that or change careers. And some people are in denial or angry about that. And you can't really expect others to lose a lot of sleep over this. Whether that's unfair or not doesn't really matter.

everdrive an hour ago | parent | prev | next [-]

> In a world where fascists redefine truth, where surveillance capitalist companies, more powerful than democratically elected leaders, exert control over our desires, do we really want their machines to become part of our thought process? To share our most intimate thoughts and connections with them?

Generally speaking, people just cannot really think this way. People broadly are short-term thinkers. If something is convenient, people will use it. Is it easier to spray your lawn with pesticides? Yep, cancer (or biome collapse) is a tomorrow problem and we have a "pest" problem today. Is it difficult to sit alone with your thoughts? Well, good news, YouTube exists and now you don't have to. What happens next (radicalization, tracking, profiling, propaganda, brain rot) is a tomorrow problem. Do you want to scroll at the end of the day and find out what people are talking about? Well, social media is here for you. Whether or not it's accidentally part of a privatized social credit system? Well again, that's a problem for later. I _need_ to feel comfortable _right now_. It doesn't matter what I do to the world so long as I'm comfortable _right now._

I don't see any way out of it. People can't seem to avoid these patterns of behavior. People asking for regulation are about as realistic as people hoping for abstinence. It's a correct answer in principle but just isn't going to happen.

Razengan 23 minutes ago | parent | next [-]

> I _need_ to feel comfortable _right now_. It doesn't matter what I do to the world so long as I'm comfortable _right now._

I think that can be offset if you have a strong motivation, a clear goal to look forward to in a reasonable amount of time, to help you endure through the discomfort:

Before I had enough financial independence to be able to travel at will, I was often stuck in a shit ass city, where the most fun to be had was video games and fantasizing about my next vacation coming up in a month or 2, and that helped me a lot in coping with my circumstances.

Too few people are allowed or can afford even this luxury of a pleasant future, a promise of a life different/better than their current.

Razengan an hour ago | parent | prev [-]

> People broadly are short term thinkers.

I wonder how much of that is "nature vs. nurture"?

Like the Tolkienesque elves in fantasy worlds, would humans be more chill too if our natural lifespans were counted in centuries instead of decades?

Or is it the pace of society, our civilization, that always keeps us on edge?

I mean I'm not sure if we're born with a biological sense of mortality, an hourglass of doom encoded into our genes..

What if everybody had 4 days of work per week, guaranteed vacation time every few months, kids didn't have to wake up at 7/8 in the morning every day, and progress was measured biennially, e.g. 2 years between school grades/exams, and economic performance was also reviewed in 2 year periods, and so on, could we as a species mellow the fuck out?

IAmBroom 36 minutes ago | parent [-]

I've wondered about this a lot, and I think it's genetic and optimized for survival in general.

Dogs barely set food aside; they prefer gorging, which is a good survival technique when your food spoils and can be stolen.

Bees, at the other end of the spectrum, spend their lives storing food (or "canning", if you will - storing prepared food).

We first evolved in areas that were storage-adverse (Africa), and more recently many of us moved to areas with winters (both good and needful storage). I think "finish your meal, you might not get one tomorrow" is our baseline survival instinct; "Winter is coming!" is an afterthought, and might be more nurture-based behavior than the other.

Razengan 15 minutes ago | parent [-]

Yes, and it's barely been 100 years, probably closer to 50, since we have had enough technology to make the daily lives of most (or half the) humans in the world comfortable enough that they can safely take 1-2 days off every week.

For the first time in human history most people don't have to worry about famine, wars, disasters, or disease upending their lives; they can just wait it out in their homes.

Will that eventually translate to a more relaxed "instinct"?

Glemkloksdjf 4 hours ago | parent | prev | next [-]

It's ignorant. That's what it is.

Big tech will build out compute at a never-before-seen speed, and we will reach 2e29 FLOPs faster than ever.

Big tech companies are competing with each other, and they are the ones with the real money in our capitalist world. But even if they found some way to slow down among themselves, countries are now competing too.

In the next 4 years, with the massive build-out of compute, we will see a lot more clearly how the progress will go.

And either we hit obvious limitations or we don't.

If we do not see an obvious limitation, Fiona's opinion will have zero relevance.

The best chance for everyone is to keep a very, very close eye on AI, to either make the right decisions (not buying that house with a line of credit; creating your own product a lot faster thanks to AI, ...) or at least be aware of what is coming.

Thanks for the fish and enjoy the ride.

eisbaw 2 hours ago | parent | prev | next [-]

Programming and CS is the art of solving problems - hopefully problems that matter.

AI lets you do that faster.

AI may suggest a dumb way, so you have to think, and tell it what to do.

My rate of thinking is faster than typing, so the bottleneck has switched from typing to thinking!

Don't let AI think for you. Do actual intentional architecture design.

Programmers who don't know CS, who only care about hammering the keyboard because they're artisans, have little future.

AI also gives me back my hobby after having kids -- time is valuable, and AI is energy efficient.

We are truly living in a Cambrian explosion -- a lot of slop will be produced, but market and selection pressure will weed it out.

mono442 an hour ago | parent | prev | next [-]

The author may not care, but I doubt people care whether software has been developed by an AI instead of a human. Just like nobody cares whether a hole has been dug by hand using a shovel or by an excavator.

cpburns2009 an hour ago | parent [-]

The problem is the AI excavator destroying the road because it hallucinated the ditch.

simianwords 3 hours ago | parent | prev | next [-]

What's with these kinds of people and their obsession with the pejorative "fascist"? It's overused to the point where it means nothing.

justincormack 4 hours ago | parent | prev | next [-]

It's interesting how people are still very positive about Marx's labour theory of value, despite it being very much of its time and thoroughly discredited.

raincole 4 hours ago | parent | prev | next [-]

> LLM brainworm is able to eat itself even into progressive hacker circles

What a loaded sentence lol. Implying being a hacker has some correlation with being progressive. And implying somehow anti-AI is progressive.

> AI systems being egregiously resource intensive is not a side effect — it’s the point.

Really? So we're not going to see AI users celebrating over how much less power DeepSeek used, right?

Anyway, guess what else is resource-intensive? Making chips. Follow that line of logic and you will find that computers consolidate power, and that real progressive hackers should use pencil and paper only.

Back to the first paragraph...

> almost like a reflex, was a self-justification of why the way they use these tools is fine, while other approaches were reckless.

The irony is through the roof. This article is essentially: when I use computational power how I like, it's being a hacker; when others use computational power their way, it's being fascist.

_heimdall 4 hours ago | parent | next [-]

> Implying being a hacker has some correlation with being progressive

I didn't read it that way. "Progressive hacker circles" doesn't imply that all hackers are progressive, it can just be distinguishing progressive circles from conservative ones.

embedding-shape 4 hours ago | parent | prev | next [-]

> Implying being a hacker has some correlation with being progressive

I mean, yeah, that kind of checks out. The quoted part doesn't make much sense to me, but that most hackers are progressives (as in "enact progress by change", not the twisted American version) should hardly come as a surprise. The opposite would be a conservative hacker (again, not the US version, but the global definition: "reluctant to change"), which is pretty much an oxymoron. Best would be to eschew political/ideological labels altogether and just say we hackers are apolitical :)

ToucanLoucan 4 hours ago | parent | prev | next [-]

Pro/regressive are terms that are highly contextual. Progress for progress’ sake alone can move anything forward. I would argue the progression of the attention economy has been extremely negative for most of the human race, yet that is “progressing.”

zmgsabst 3 hours ago | parent [-]

In this instance, it’s just claiming turf for the political movement in the US that has spent the last century:

- inventing scientific racism and (after that was debunked) reinventing other academic pretenses to institutionalize race-based governance and society

- forcibly sterilizing people with mental illnesses until the 1970s, through 2005 via coercion, and until the present via lies, fake studies, and ideological subversion

- being outspokenly antisemitic

Personally, I think it’s a moral failing we allow such vile people to pontificate about virtues without being booed out of the room.

Mashimo 4 hours ago | parent | prev [-]

The typical CCC / hackerspace circle is kinda progressive / left-leaning. At least in my experience, which I think she (or he?) was implying. Of course not every hacker is :)

aforwardslash 4 hours ago | parent | prev | next [-]

Every time I read one of these "I don't use AI" posts, the content is either "my code is handcrafted in a mountain spring and blessed by the universe itself, so no AI can match it" or "everything different from what I do is technofascism or <insert politics rant here>". Maybe I'm missing something, but tech is controlled by a handful of companies - always has been; and sometimes code is just code, and AI is just a tool. What am I missing?

ryanjshaw 3 hours ago | parent | next [-]

I was embarrassed recently to realize that almost all the code I create these days is written by AIs. Then I realized that’s OK. It’s a tool, and I’m making effective use of it. My job was to solve problems, not to write code.

I have a little pet theory brewing. Corporate lore claims that we hire junior devs who become intermediate devs, who then become senior devs. The doomsday crowd claims that AI has replaced junior and intermediate devs, and is coming for the senior devs next.

This has felt off to me because I do way more than just code. Business users don't want to get into the details of building software. They want a guy like me to handle that.

I know how to talk to non-technical SMEs and extract their real requirements. I understand how to translate this into architecture decisions that align with the broader org. I know how to map it into a plan that meets those org objectives. And so on.

I think that what really happens is that nerds exist, and through osmosis a few of them become senior developers. They in turn have junior and intermediate assistant developers to help them deliver. Sometimes those assistants turn out to be nerds themselves, and they spontaneously transmute into senior developers!

AI is replacing those assistant human developers, but we will still need the senior developers because most business people want to sit with a real human being to solve their problem.

I will, however, get worried when AIs start running businesses. Then we are in trouble.

fragmede 3 hours ago | parent [-]

Anthropic ran a vending machine business as an experiment, but I can't imagine someone out there isn't already seriously running one in production.

ryanjshaw 2 hours ago | parent [-]

I’ve been tempted to define my life in a big prompt and then do something like: it’s 6:05, Ryan has just woken up, what action (10 min or less) does he take? I wonder where I’ll end up if I follow it to a T.

acuozzo 22 minutes ago | parent | prev | next [-]

> Maybe I'm missing something, but tech is controlled by a handful of companies - always has been

I guess it depends on what you define as "tech", but the '80s, '90s, and early '00s had an explosion of tiny hardware and software startups. Some even threatened Intel with x86 clones.

It wasn't until the late '90s that NVIDIA was the clear GPU winner, for instance. It had serious competition from 3DFX, ATI, and a bunch of other smaller companies.

xg15 4 hours ago | parent | prev | next [-]

> Maybe Im missing something, but tech is controlled by a handful of companies - always have been;

The entire open source movement would like a word with you.

IAmBroom 35 minutes ago | parent [-]

So would disruptive young Mr. Gates.

owenthejumper 4 hours ago | parent | prev | next [-]

You are not missing much. Yes, there will be situations where AI won't be helpful, but those are not the majority.

Used right, Claude Code is actually very impressive. You just have to already be a programmer to use it right: divide the problem into small chunks yourself, then instruct it to work on one chunk at a time - something like the sketch below.
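
To make that concrete, here is a purely illustrative chunking of a task (the feature, the ratelimiter.ts file name, and the prompt wording are invented for the example, not a prescribed workflow):

1. "Write a RateLimiter interface in ratelimiter.ts with a single allow(key) method. No implementation yet."

2. "Implement a token-bucket RateLimiter against that interface, with unit tests."

3. "Wire the limiter into the login endpoint behind a config flag, default off."

Each chunk is small enough that you can review the output in full before handing the model the next one.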

Second example - there is a certain expectation of language in American professional communication. As a non-native speaker I can tell you that not following that expectation has a real impact on a career. AI has been transformational: writing an email myself and asking it to 'make this into American professional English'.

technothrasher 4 hours ago | parent | prev | next [-]

> What am I missing?

The youthful desire to rage against the machine?

irilesscent 3 hours ago | parent [-]

I prefer eternally enslaving a machine to do my bidding over just raging at it.

stuartjohnson12 4 hours ago | parent | prev | next [-]

There's a lot of overlap between "AI is evil megacapitalism" and "AI is ineffective". I never understood the latter, but I am increasingly arriving at the understanding that the latter claim isn't real; it's just a soldier in the war being fought over the former.

zero-st4rs 4 hours ago | parent | next [-]

I read the intersection as this:

We shape the world through our choices, generally under the umbrella of deterministic systems. AI is non-deterministic, and instead amplifies the concerns of a few wealthy corporations / individuals.

So is AI effective at generating marketing material or propagating arguably vapid value systems in the face of ecological, cultural, and economic crisis? I'd argue yes. But effectiveness also depends on intention, and that's not my intention, so it's not as effective for me.

I think we need more "manual" choice, and more agency.

andrepd 3 hours ago | parent | prev [-]

Ineffective at what? Writing good code, or producing any sort of valuable insight? Yes, it's ineffective. Writing unmaintainable slop at line rate? Or writing internet-filling spam, or propagating their owners' points of view? Very effective.

I just think the things they are effective at are a net negative for most of us.

megous 4 hours ago | parent | prev | next [-]

Not much. Even the argument that AI is another tool to strip people of power is not that great.

It's possible to use AI chatbots against the system of power: to help detect and point out manipulation, or a lack of nuance, in arguments or political texts; to help decipher legalese in contracts, or point out problematic passages in terms of use; to help with interactions with the state, even non-trivial ones like FOI requests or disputing information disclosure rejections, etc.

AI tools can be used to help against the systems of power.

andrepd 3 hours ago | parent [-]

Yes, the black box that has been RLHF'd in god knows what way is surely going to help you gain power, and not its owners...

nullbyte808 4 hours ago | parent | prev [-]

Exactly.

dangus 3 hours ago | parent | prev | next [-]

This piece started relatively well but devolved by the end.

Is AI resource-intensive by design? That doesn’t make any sense to me. I think companies are furiously working toward reducing AI costs.

Is AI a tool of fascism? Well, I’d say anything that can make money can be a tool of fascism.

I can sort of jibe with the argument that AI is/will be reinforcing the ideals of those in power, although I think traditional media and the tooling that AI intends to replace, like search engines, accomplished that just fine.

What we are left with is, I think, an author who is in denial about their special snowflake status as a programmer. It was okay for the factory worker to be automated away, but now that it’s my turn to be automated away I’m crying fascism and ethics.

Their friends behave the way they do about AI because they know it’s useful but know it’s unpopular. They’re trying to save face while still using the tool because it’s so obviously useful and beneficial.

I think the analogy is similar to the move from film to digital. There will be a tiny amount of people who never buy in, there will be these “ashamed” adopters who support the idea of film and hope it continues on, but for themselves personally would never go back to film, and then the majority who don’t see the problem with letting film die.

skwee357 3 hours ago | parent | prev | next [-]

Well, there are two aspects from which I can react to this post.

The first aspect is the “I don’t touch AI with a stick” one. AI is a tool. Nobody is obligated to touch it, obviously, but it is useful in certain situations. So I disagree with the author’s position of avoiding AI entirely. It reads like stubbornness for the sake of avoiding new tech.

The second angle is the “big tech corporate control” angle. And honestly, I don’t get this argument at all. Computers and the digital world have created the biggest dystopia we have ever witnessed: from absurd amounts of misinformation and propaganda fueled by bot farms operated at government level, all the way to digital surveillance tech. Having that strong an opinion against big tech and digital surveillance, and blaming AI for it, while enjoying the other perils of big tech, is virtue signaling.

Also, what’s up with the overuse of “fascism” in places where it does not belong?

bgwalter an hour ago | parent | prev | next [-]

The increasingly rough tone against "AI" critics in the comments, and the preposterous talking points ("you are not a senior developer if you do not get value from 'AI'"), are an indication that the bubble will burst soon.

It is the tool-obsessed people, who treat everything like a computer game, who like "AI" for software engineering. Most of them have never written anything substantial themselves and only know the Jira workflow for small and insignificant tickets.

ttul 20 minutes ago | parent | prev | next [-]

This post raises genuine concerns about the integration of large language models into creative and technical work, and the author writes with evident passion about what they perceive as a threat to human autonomy and craft. BUT… the piece suffers from internal contradictions, selective reasoning, and rhetorical moves that undermine its own arguments in ways worth examining carefully.

My opinion: This sort of low-evidence writing is all too common in tech circles. It makes me wish computer science and engineering majors were forced to spend at least one semester doing nothing but the arts.

The most striking inconsistency emerges in how the author frames the people who use LLM tools. Early in the piece, colleagues experimenting with AI coding assistants are described in the language of addiction and pathology: they are “sucked into the belly of the vibecoding grind,” experiencing “existential crisis,” engaged in “harmful coping.” The comparison to watching a friend develop a drinking problem is explicit and damning. This framing treats AI adoption as a personal failure, a weakness of character, a moral lapse. Yet only paragraphs later, the author pivots to acknowledging that people are “forced to use these systems” by bosses, UI patterns, peer pressure, and structural disadvantages in school and work. They even note their own privilege in being able to abstain. These two framings cannot coexist coherently. If using AI tools is coerced by material circumstances and power structures, then the addiction metaphor is not just inapt but cruel — it assigns individual blame for systemic conditions. The author wants to have it both ways: to morally condemn users while also absolving them as victims of circumstance.

This tension extends to the author’s treatment of their own social position. Having acknowledged that abstention from LLMs requires privilege, they nonetheless continue to describe AI adoption as a “brainworm” that has infected even “progressive hacker circles.” The disgust is palpable. But if avoiding these tools is a luxury, then expressing contempt for those who cannot afford that luxury is inconsistent at best and self-congratulatory at worst. The acknowledgment of privilege becomes a ritual disclaimer rather than something that actually modifies the moral judgments being rendered.

The author’s claims about intentionality represent another significant weakness. The assertion that AI systems being resource-intensive “is not a side effect — it’s the point” is presented as revelation, but it functions as an unfalsifiable claim. No evidence is offered that anyone designed these systems to be resource-hungry as a mechanism of control. The technical requirements of training large models, competitive market pressure to scale, and the emergent dynamics of venture capital investment all offer more parsimonious explanations that don’t require attributing coordinated malicious intent. Similarly, the claim that “AI systems exist to reinforce and strengthen existing structures of power and violence” is stated as though it were established fact rather than contested interpretation. This is the central claim of the piece, and yet it receives no argument — it is simply asserted and then built upon, which amounts to begging the question.

The essay also suffers from a pronounced selection bias in its examples. Every person described using AI tools is in crisis, suffering, or compromised. No one uses them mundanely, critically, or with benefit. This creates a distorted picture that serves rhetorical purposes but does not reflect the range of actual use cases. The author’s friends who share their anti-AI sentiment are mentioned approvingly, establishing clear in-group and out-group boundaries. This is identity formation masquerading as analysis — good people resist, compromised people succumb.

There is a false dichotomy running through the piece that deserves attention. The implied choice is between the author’s total abstention, not touching LLMs “with a stick,” and being consumed by the pathological grind described earlier. No middle ground exists in this telling. The possibility of critical, limited, or thoughtful engagement with these tools is never acknowledged as legitimate. You are either pure or contaminated.

Reality doesn’t work this way! It’s not black and white. My take: AI is a transformative technology and the spectrum of uses and misuses of AI is vast and growing.

The philosophical core of their argument also contains an unexamined equivocation. The author invokes the extended cognition thesis — the idea that tools become part of us and shape who we are — to make AI seem uniquely threatening. But this same argument applies to every tool mentioned in the piece: hammers, pens, keyboards, dictionaries. The author describes their own fingers “flying over the keyboard, switching windows, opening notes, looking up words in a dictionary” as part of their extended cognitive process. If consulting a dictionary shapes thought and becomes part of our cognitive process, what exactly distinguishes that from asking a language model to check grammar or suggest a word? The author never establishes what makes AI categorically different from the other tools that have already become part of us. The danger is assumed rather than demonstrated.

There is also a genetic fallacy at work in the argument about power. The author suggests AI is bad partly because of who controls it — surveillance capitalists, fascists, those with enormous physical infrastructure. But this argument conflates the origin and ownership of a technology with its inherent properties. One could make identical arguments about the printing press, the telephone, or the internet itself. The question of whether these tools could be structured differently, owned differently, or used toward different ends is never engaged. Everything becomes evidence of a monolithic system of control.

Finally, there is an unacknowledged irony in the piece’s medium and advice. The author recommends spending less time on social media and reading books instead, while writing a blog post clearly designed for social sharing, complete with the vivid metaphors, escalating moral stakes, and calls to action that characterize viral content. The post exists within and depends upon the very attention economy it criticizes. This is not necessarily hypocrisy — we all must operate within systems we find problematic — but the lack of self-awareness about it is notable given how readily the author judges others for their compromises.

The essay is most compelling when it stays concrete: the phenomenology of writing as discovery, the real pressures workers face, the genuine concerns about who controls these systems and toward what ends. It is weakest when it reaches for grand unified theories of intentional domination, when it mistakes assertion for argument, and when it allows moral contempt to override the structural analysis it claims to offer. The author clearly cares about human flourishing and autonomy, but the piece would be stronger if that care extended more generously to those navigating these technologies without the privilege of refusal.

EdiX 4 hours ago | parent | prev | next [-]

I don't think I'm going to take seriously an argument that uses Marx as its foundation, but I'm glad that the pronouns crowd has had to move on from finger-wagging as their only rhetorical stance.

mahrain 3 hours ago | parent [-]

Reading the blog post, I felt the Marxist sentiment creeping in, and then I saw actual Marx referenced in the footnotes.

metalman 3 hours ago | parent | prev [-]

"chat~fu"

cachonk!

snap your cuffs, wait for it... eyebrows!

and demonstrate your mastery, to the mutterings of the golly-gees

it will last several more months until the, GASP!!!, bills, maintenance costs, regulatory burdens, and various legal issues combine to pop AI's balloon, whereupon AI will be left automating all of the tedious, but chair-filling, bureaucratic/secretarial/apprentice positions throughout the white-collar world. technology is slowly pushing into other sectors, where legacy methods and equipment can now be reduced to a free app on a phone - more to the point, a free, local-only app. fact is that we are way over-siliconed going forward, and that will bite as well: terabyte phones for $100, what then?