I Quit. The Clankers Won(dbushell.com)
299 points by domysee 7 hours ago | 306 comments
Waterluvian 3 hours ago | parent | next [-]

Improving developer skills is not valuable to your company. They don't tell a customer how many person-hours of engineering talent improvement their contract is responsible for. They just want a solved problem. Some companies comprehend how short-sighted this is and invest in professional development in one way or another. They want better engineers so that their operations run better. It's an investment and arguably a smart one.

Adoption of AI at a FOMO corporate pace doesn't seem to include this consideration. They largely want your skills to atrophy as you instead beep boop the AI machine to do the job (arguably) faster. I think they're wrong and silly and any time they try to justify it, the words don't reconcile into a rational series of statements. But they're the boss and they can do the thing if they want to. At work I either do what they want in exchange for money or I say no thank you and walk away.

Which led me to the conclusion I'm currently at: I think I'm mostly just mourning the fact that I got to do my hobby as a career for the past 15 years, but that’s ending. I can still code at home.

pfisherman 2 hours ago | parent | next [-]

This is going to catch some heat, but what if the most important professional “developer skill” to learn or improve is how to effectively use coding agents?

I saw something similar in ML when neural nets came around. The whole “stack moar layerz” thing is a meme, but it was a real sentiment about newer entrants into the field not learning anything about ML theory or best practices. As it turns out, neural nets “won” and using them effectively required development and acquisition of some new domain knowledge and best practices. And the kids are ok. The people who scoffed at neural nets and never got up to speed not so much.

Edit: as an aside, I have learned plenty from reviewing coding agent generated implementations of various algorithms or methods.

MetaWhirledPeas 2 hours ago | parent | next [-]

> what if the most important professional “developer skill” to learn or improve is how to effectively use coding agents?

Well, it's not. There's a small moat around that right now because the UX is still being ironed out, but in a short while, being able to use coding agents will be the new being able to use Excel.

What will remain are the things that already differentiate a good developer from a bad one:

- Able to review the output of coding agents

- Able to guide the architecture of an application

- Able to guide the architecture of a system

- Able to minimize vulnerabilities

- Able to ensure test quality

- Able to interpret business needs

- Able to communicate with stakeholders

jonas21 a few seconds ago | parent | next [-]

So, in other words, the skills needed to effectively use coding agents.

rkapsoro an hour ago | parent | prev | next [-]

I think you're agreeing with him. All of the things you just listed are key senior developer skills.

jnovek an hour ago | parent | prev [-]

> Able to review the code output of coding agents

That probably won’t be necessary in a few years.

circlefavshape an hour ago | parent | next [-]

It's necessary for devs right now, no matter how good they are, and it's those devs' code the models are trained on

prewett 23 minutes ago | parent [-]

Even worse, the training set probably includes a lot of code that needed review but didn't get it...

rafterydj an hour ago | parent | prev | next [-]

I've seen this line of thought put out there many times, and I've been thinking: why do people do anything at all? What's the point? If no one at all is even reviewing the output of coding agents, genuinely, what are we doing as a society?

I fail to see how we transition society into a positive future without supplying means of verifying systemic integrity. There is a reason that Upton Sinclair became famous: wayward incentives behind closed doors generally cause subpar standards, which cause subpar results. If the FDA didn't exist, or they didn't "review the output", society would be materially worse off. If the whole pitch for AI ends with "and no one will even need to check anything" I find that highly convenient for the AI industry.

ndriscoll an hour ago | parent [-]

You could e.g. write specs and only review high level types plus have deterministic validation that no type escapes/"unsafe" hatches were used, or instruct another agent to create adversarial blackbox attempts to break functionality of the primary artifact (which is really just to say "perform QA").

As a simple use-case, I've found LLMs to be much better than me at macro programming, and I don't really need to care about what it does because ultimately the constraint is just that it bends the syntax I have into the syntax I want, and things compile. The details are basically irrelevant.
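The "deterministic validation that no type escapes/'unsafe' hatches were used" idea can be made concrete. A minimal sketch (the banned-construct list and function names here are hypothetical, not from the comment): a CI gate that walks the AST of generated Python and rejects it if it reaches for `eval`/`exec`, `typing.Any`, or `# type: ignore`, so the reviewer only needs to trust the spec and the high-level types.

```python
# Hypothetical sketch of a deterministic gate for generated code:
# reject any module that uses common type-escape hatches, so human
# review can stay at the level of specs and signatures.
import ast

BANNED_CALLS = {"eval", "exec"}

def uses_escape_hatch(source: str) -> bool:
    """Return True if the source uses any banned escape hatch."""
    # Comment-level escape hatch: silencing the type checker.
    if "type: ignore" in source:
        return True
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # Direct calls to eval()/exec().
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in BANNED_CALLS:
                return True
        # Any annotation or expression referencing typing.Any by name.
        if isinstance(node, ast.Name) and node.id == "Any":
            return True
    return False

assert uses_escape_hatch("x = eval('1+1')")
assert not uses_escape_hatch("def f(n: int) -> int:\n    return n * 2")
```

The adversarial-agent half of the comment is then just QA against the same spec: a second process tries to break the artifact through its public interface, never by reading its internals.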

surajrmal 11 minutes ago | parent | next [-]

Code quality will impact the effectiveness of ai. Less code to read and change in subsequent changes is still useful. There was a while where I became more of a paper architect and stopped coding for a while and I realized I wasn't able to do sufficient code reviews anymore because I lacked context. I went back into the code at some point and realized the mess my team was making and spent a long while cleaning it up. This improved the productivity of everyone involved. I expect AI to fall into a similar predicament. Without first hand knowledge of the implementation details we won't know about the problems we need to tell the AI to address. There are also many systems which are constrained in terms of memory and compute and more code likely puts you up against those limits.

rafterydj 27 minutes ago | parent | prev [-]

I mean, sure, for programming macros. Or programming quick scripts, or type-safe or memory-safe programs. Or web frontends, or a11y, or whatever tasks for which people are using AI.

But if you peel back that layer to the point where you are no longer discussing the code, and just saying "code X that does Y"... how big is X going to get without verifying it? This is a basic, fundamental question that gets deflected by evaluating each case where AI is useful.

When you stop being specific about what the AI is doing and switch to the general case, there is a massive and obvious gap that nobody is adequately addressing. I don't think anyone would say that details are irrelevant in life-threatening scenarios, and yet no one is acknowledging where the logical end of this line of thinking lies.

falkensmaize an hour ago | parent | prev [-]

They will still be turning out the same problematic code in a few years that they do now, because they aren’t intelligent and won’t be intelligent unless there is a fundamental paradigm shift in how an LLM works.

I use LLMs with best practices to program professionally in an enterprise every day, and even Opus 4.6 still consistently makes some of the dumbest architectural decisions, even with full context, complete access to the codebase and me asking very specific questions that should point it in the right direction.

stevepotter 21 minutes ago | parent [-]

I keep hearing that they "aren't intelligent" and spit out "crap code". That's not been my experience. LLMs have prevented, and also caught, intricate concurrency issues that would have taken me a long time.

I just went “hmmm, nice” and went on. The problem there is that I didn’t get that sense of accomplishment I crave and I really didn’t learn anything. Those are “me” problems but I think programmers are collectively grappling with this.

dspillett 2 hours ago | parent | prev | next [-]

> This is going to catch some heat, but what if the most important professional “developer skill” to learn or improve is how to effectively use coding agents?

If it does go as far that way as many seem to expect (or, indeed, want), then most people will be able to do it. There will be a dearth of jobs and many people wanting them, so it'll be a race to the bottom for all but the lucky few: development will become a minimum wage job, or so close to it that it'll make no odds. If I'm earning minimum wage, it isn't going to be sat on my own doing someone else's prompting; I'll find a job that doesn't involve sitting alone in front of a screen and reclaim programming for hobby time (or just stop doing it at all; I have other hobbies to divide my time between). I dislike (effectively) being a remote worker already, but put up with it for the salary. If the salary goes because "AI" turns it into a race-to-the-bottom job, then I'm off.

Conversely: if that doesn't happen then I can continue to do what I want, which is program and not instruct someone else (be it a person I manage or an artificial construct) to program. I'm happy to accept the aid of tools for automation and such, I've written a few of my own, but there is a line past which my interest will just vanish.

falkensmaize an hour ago | parent [-]

What the people excited about the race to the bottom scenario don’t seem to understand is that it doesn’t mean low skill people will suddenly be more employable, it means fewer high skill people will be employable.

No one will be eager to employ “ai-natives” who don’t understand what the llm is pumping out, they’ll just keep the seasoned engineers who can manage and tame the output properly. Similarly, no one is going to hire a bunch of prompt engineers to replace their accountants, they’ll hire fewer seasoned accountants who can confidently review llm output.

rekrsiv an hour ago | parent | prev | next [-]

The endgame in programming is reducing complexity before the codebase becomes impossible to reason about. This is not a solved problem, and most codebases the LLMs were trained on are either just before that phase transition or well past it.

Complexity is not just a matter of reducing the complexity of the code, it's also a matter of reducing the complexity of the problem. A programmer can do the former alone with the code, but the latter can only be done during a frank discussion with stakeholders.

A vibe coder using an LLM to generate complexity will not be able to tell which complexity to get rid of, and we don't have enough training data of well-curated complexity for LLMs to figure it out yet.

trollbridge an hour ago | parent [-]

No kidding. So far the complexity introduced by LLM-generated code in my current codebase has taken far more time to deal with than the hand-written code.

Overall, we are trying to "silo" LLM-generated code into its own services with a well-defined interface so that the code can just be thrown away and regenerated (or rewritten by hand) because maintaining it is so difficult.
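The silo approach described here can be sketched in a few lines (the interface and class names below are hypothetical illustrations, not the commenter's actual codebase): hand-written callers depend only on a narrow, well-defined interface, so the generated implementation behind it can be thrown away and regenerated without touching anything else.

```python
# Hypothetical sketch of siloing LLM-generated code behind a
# well-defined interface, so the implementation is disposable.
from typing import Protocol

class ReceiptRenderer(Protocol):
    """The stable, hand-written contract callers depend on."""
    def render(self, order_id: str, total_cents: int) -> str: ...

class GeneratedRenderer:
    """LLM-generated module: regenerate or rewrite freely, as long
    as it keeps satisfying the Protocol above."""
    def render(self, order_id: str, total_cents: int) -> str:
        return f"Order {order_id}: ${total_cents / 100:.2f}"

def print_receipt(renderer: ReceiptRenderer, order_id: str, total_cents: int) -> str:
    # Hand-written caller: only the interface matters here.
    return renderer.render(order_id, total_cents)

assert print_receipt(GeneratedRenderer(), "A17", 1999) == "Order A17: $19.99"
```

The design choice is the same one used for any untrusted dependency: keep the boundary small and typed, and treat everything behind it as replaceable.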

bonoboTP an hour ago | parent | prev | next [-]

I also remember a similar wave around 10-15 years ago when ML tooling and libraries became more accessible, with more open source releases etc. People whose value-add was knowing MATLAB toolboxes and keeping their code private got very afraid when Python, numpy, scikit-learn, Theano etc. came to the forefront, and people started releasing the code with research papers on GitHub. Anyone could just get that working code, start tweaking the equations, and put different tools and techniques together, even if you didn't work at one of those few companies or do an internship at a lab in the know.

Or other people who just kept their research dataset private and milked it for years training incrementally better ML models on the same data. Then similar datasets appeared openly and they threw a hissy fit.

Usually there are a million little tricks and oral culture around how to use various datasets, configurations, hyperparameters etc and papers often only gave the high level ideas and math away. But when the code started to become open it freaked out many who felt they won't be able to keep up and just wanted to keep on until retirement by simply guarding their knowledge and skill from getting too known. Many of them were convinced it's going to go away. "Python is just a silly, free language. Serious engineers use Matlab, after all, that's a serious paid product. All the kiddies stacking layers in Theano will just go away, it's just a fad and we will all go back to SVM which has real math backing it up from VC theory." (The Vapnik-Chervonenkis kind, not the venture capital kind.)

I don't want to be too dismissive though. People build up an identity, like the blacksmith of the village back in the day, and just want to keep doing it: build a life on a skill they learned in their youth and then just do it 9 to 5 and focus on family etc. I get it. But wishing it away won't make it so.

Talented, skilled people with good intuition and judgement will be needed for a long time, but that will still require adapting to changing tools and workflows. The bulk of the workforce, though, is not that.

poody 15 minutes ago | parent [-]

This is so true... I am having issues with the change right now.. being older and trying to incorporate agentic workflow into MY workflow is difficult as I have trust issues with the new codebase.. I do have good people skills with my clients, but my secret sauce was my coding skilz.. and I built my identity around that..

tonyedgecombe 2 hours ago | parent | prev | next [-]

Using a coding agent seems quite low skill to me. It’s hard to see it becoming a differentiator. Just look at the number of people who couldn’t code before and are suddenly churning out work to confirm that.

bachmeier an hour ago | parent [-]

> Using a coding agent seems quite low skill to me.

I agree if that's all you can do. Using a coding agent to complement a valuable domain-specific skill is gold.

mcdeltat 2 hours ago | parent | prev | next [-]

I think your argument is predicated on LLM coding tools providing significant benefit when used effectively. Personally I still think the answer is "not really" if you're doing any kind of interesting work that's not mostly boilerplate code writing all day.

dasil003 2 hours ago | parent | next [-]

Define interesting. In my experience most business logic is not innovative or difficult, but there are ways to do it well or ways to do it terribly. At the senior levels I feel 90% of the job is deciding the shape of what to build and what NOT to build. I find AI very useful in exploring and trying more things but it doesn’t really change the judgment part of the job.

xeromal 42 minutes ago | parent | prev [-]

How much of software programming work is interesting? A fraction of a percent? I'd argue most of us, including most startups, work on things that help businesses make money, and that's pretty "boring" work.

windward 2 hours ago | parent | prev | next [-]

Many of those skills have temporary value before they're incorporated into the models/harnesses

ozozozd an hour ago | parent | prev | next [-]

There was a moment we thought JS had won. And then crypto. I personally believed low-level development was done.

underlipton an hour ago | parent [-]

Crypto did win, just not where you're looking.

MrDarcy 2 hours ago | parent | prev | next [-]

Not sure why this would catch heat, rationally speaking. It is quite clear that, in a professional setting, effective use of coding agents is the most important skill for an individual developer to develop.

It’s also the most important capability engineering orgs can be working on developing right now.

Software Engineering itself is being disrupted.

anticorporate an hour ago | parent | prev | next [-]

I'd offer an edit that the most important skill may be knowing when the agent is wrong.

There's so much hand wringing about people not understanding how LLMs work and not nearly enough hand wringing about people not understanding how computer systems work.

shmerl 14 minutes ago | parent | prev | next [-]

I'd say viewing it as most important is pretty unprofessional. But isn't it the point of this extreme AI push? To replace professional skills with dummy parrots.

mxkopy 2 hours ago | parent | prev [-]

I don’t think it could be the most important skill to have. The most common, and the most standardized one for sure, but if coding agents are doing fundamental R&D or running ops then nobody needs skills anyway.

> As it turns out, neural nets “won”

> The people who scoffed at neural nets and never got up to speed not so much.

I get the feeling you don’t know what you’re talking about. LLMs are impressive but what have they “won” exactly? They require millions of dollars of infrastructure to run coming around a decade after their debut, and we’re really having trouble using them for anything all that serious. Now I’m sure in a few decades’ time this comment will read like a silly cynic but I bet that will only be after those old school machine learning losers come back around and start making improvements again.

KronisLV an hour ago | parent | prev | next [-]

> Improving developer skills is not valuable to your company. They don't tell a customer how many person-hours of engineering talent improvement their contract is responsible for. They just want a solved problem.

Doesn't credentialism kinda throw a spanner in that - where it's not enough to have people with a good track record of solving issues, but then someone along the way says "Yeah, we'd also like the devs who'll work on the project to have Java certs." (I've done those certs, they're orthogonal to one's ability to produce good software)

Might just be govt. projects or particular orgs where such requirements are drawn up by dinosaurs, go figure (as much as I'd love software development to be "real" engineering with best practices spanning decades, it's still the Wild West in many respects). Then again, the same thing more or less applies to security, a lot of it seems like posturing and checklists (how some years back the status quo was that you'll change your password every 30-90 days because IT said so) instead of the stuff that actually matters.

Not to detract from the point too much, but I've very much seen people care about stuff like that more than about solving problems and shipping fast, or care about covering their own asses by paying for Oracle support or whatever (even when it gets in the way of actually shipping, like ADF and WebLogic and the horror that is JDeveloper).

But yeah, I think many companies out there don't care that much about the individual growth of their employees, unless they have the ability to actually look further into the future, which most don't, given how they prefer not to train junior devs into mid/senior ones over years.

simonw 3 hours ago | parent | prev | next [-]

> Improving developer skills is not valuable to your company

Every company I've ever worked at has genuinely believed in and invested in improving developer skills.

Supermancho 2 hours ago | parent | next [-]

I've worked for 35ish companies (contract and full-time), largely on the west coast of the US. I have experienced lip service from the vast majority, and maybe 2 or 3 earnest attempts at growing engineers' skills through subsidized admission/travel to talks, tools, or invited instructors.

tasuki 2 hours ago | parent | next [-]

> I've worked for 35ish companies

It seems they were correct not to invest in your skills.

I've worked for six companies over almost 20 years. The majority invested in my skills, and I hope that investment has paid off for them!

dspillett 2 hours ago | parent | next [-]

I've worked for five companies, on the same products (well, variations there-of over time), for 25 years, due to take-overs (I nearly left ~10 years ago due to management numskullery, but a timely buy-out of the bit I worked for fixed my problems while the rest of the company died off).

Hanging around for a while (a long while) doesn't necessarily mean dedication worth investing in, it could just be that I have a shocking lack of ambition :)

ojbyrne an hour ago | parent | prev | next [-]

Perhaps the lack of investment in their skills was the cause for the commenter’s job hopping, not the effect.

kjksf 23 minutes ago | parent | next [-]

It's all so vague. "lack of investment in their skill".

You just spent $250k and 5 years in college learning stuff.

You get hired to do a job for money.

What "investment" do you expect the company to make?

Give me a number of weeks and an amount of dollars per year, and tell me how it stacks up against the $250k and 5 years you just spent.

If you want to learn on the job, shouldn't YOU be paying the company for teaching you, like you pay college to teach you?

shagie an hour ago | parent | prev [-]

Consider the rate of job hopping that would be evident on that resume. I'm not sure how many companies would be willing to invest in sending an FTE who will likely stay less than a year to a conference, or to say "Ok, you can spend 20% of your time improving your skills."

What is more likely with the 35 number is that these are multiple simultaneous contracts. When working as a contractor you're fixing that problem or that project. The company isn't going to have you around for longer than a month after it's been fixed and documented.

There's no reason to spend company resources on training a person any more than there's reason for you to pay a plumber to be reading "learn to be an electrician in 10 days" while they're supposed to be working on fixing the sink or doing the plumbing for new construction.

oblio 2 hours ago | parent | prev [-]

If you include consulting that could easily be 10 companies a year...

lsaferite an hour ago | parent | next [-]

Why would a company you are consulting for invest in training you up exactly? They are paying a consultant with the expectation that they are bringing the knowledge.

21asdffdsa12 an hour ago | parent [-]

Eh, consultants aren't brought in for the knowledge or advice! Management already knows what to do and where to go; they just want somebody external to sanctify the decision!

tasuki 2 hours ago | parent | prev [-]

Could easily be, yes. And they'd be right not to invest in OP's skills.

(To explicitly state the obvious: I'm not saying OP's a bad person for doing this, just saying the employers were right not to invest in them...)

kjksf 27 minutes ago | parent | prev | next [-]

What is your expectation, exactly?

In the US you go to college for 4-5 years and pay $50k per year. Or more.

You pay to learn. A lot of money, a lot of time.

Then you get a job, where the idea is that you get paid for doing work and you expect the employer to do what?

You seem to expect that not only will you not be doing the things you're being paid to do, but that the employer will pay for your education on company time.

How many weeks per year of time off do you expect to get from a company?

You'll either say a reasonable number, like 1 or 2 weeks, which is insignificant compared to the time you supposedly spent learning (5 years). You just spent 250 weeks supposedly learning, but 1 or 2 weeks a year is supposed to make a difference?

Or you'll say unreasonable number (anything above 2 weeks) because employment is not free education.

ndriscoll 2 hours ago | parent | prev | next [-]

What exactly do you have in mind? The large companies I've worked at had book subscriptions, internal training courses, and would pay for school. Personally I don't see the point of any of it. For software engineering, the info you need is all online for free. You can go download e.g. graduate level CS courses on youtube. MIT OCW has been around for almost a quarter century now. IME no one's going to stop you from spending a couple hours a week of work time watching lectures (at least if you're fulltime). Now at least at my company, we have unlimited use of codex, which you can ask for help explaining things to you. I also don't really see how attending conferences relates to skill improvement. Meanwhile, I've been explicitly told by managers that spending half my time mentoring people sounds reasonable.

I can't understand what people are looking for when they talk about lack of investment into training for engineers. It's not the kind of job where someone can train you. It's like an executive complaining they aren't trained. You're the one who's supposed to be coming up with answers and making decisions. You need to spend time on self-motivated learning/discovering how to better do your work. Every company I've been at big or small assumes that's part of the job.

PurpleRamen an hour ago | parent [-]

> For software engineering, the info you need is all online for free.

Guided learning with instant feedback can be much more efficient than just consuming and tinkering on your own. Depends on the topic, the teacher and situation of course. The quality of available material is also all over the place, and not every topic has enough material, or anything at all.

ndriscoll 24 minutes ago | parent [-]

For foundational knowledge, there's been high quality information for free from MIT, Harvard, Stanford, Yale, etc. out there for years. Just look there first. If you're beyond that, you're beyond the canon that you can "learn" and closer to needing to follow/participate in SOTA R&D. And if you need a more structured environment, that's why people go to school. Engineering jobs expect you're at the level of someone who's completed undergrad. Part of an undergrad degree is getting used to seeking out resources yourself and learning from them instead of having a teacher spoon-feed it.

Again I just don't have any idea of what training people expect. The entire job is basically "we might have some idea of what we want to do, but no one here knows the details. Go figure it out."

What kind of guided learning would you want? How to solve problems? That's what 16 years of school was for!

PurpleRamen an hour ago | parent | prev | next [-]

Care to explain a bit more?

With 35 companies, that would be around 1-2 years per company on average, even over a full career. I doubt any company would seriously invest in a worker who will likely be gone the next year. Getting lip service already seems like a good deal at that point.

pc86 an hour ago | parent [-]

I mean the comment says "contract" right there; you can easily be on a contract with multiple companies simultaneously. When I was freelancing full-time ca. 2010-2013 or so I often had 5-6 active contracts running simultaneously. I probably worked for 15-20 different companies total in that 3-4 year span.

PurpleRamen an hour ago | parent [-]

Yes, likely, but that makes even less sense, as you can't expect support for education as a freelancer. A freelancer's whole purpose is to sell a skill and be gone when the job is finished. From the beginning you are just an expendable tool they don't want to polish outside the scope of the job.

threetonesun 2 hours ago | parent | prev | next [-]

These two statements go hand in hand though. While I do believe companies could take the altruistic take of training people whether or not they stay, and some places do, they're certainly not going to make the effort for someone who has clear markers of being someone who will leave anyway.

bdangubic 2 hours ago | parent | prev | next [-]

This percentage is probably right on the money!

aduwah 2 hours ago | parent | prev [-]

Hard same over 20 years

tonyedgecombe 3 hours ago | parent | prev | next [-]

Every company I worked for didn’t give a shit about my skills. They just wanted to solve the problem in front of them and if they couldn’t then they would hire someone in with the right skills. Improving my skills was seen as a risk as I might leave.

catlifeonmars 2 hours ago | parent [-]

I’ve had both experiences, sometimes at the exact same company.

Waterluvian 3 hours ago | parent | prev | next [-]

That’s been my experience, too. But now I get a sort of, “I dunno. Maybe don’t use AI on Fridays?”

There doesn’t seem to be a plan for maintaining that culture.

jasomill 2 hours ago | parent | prev | next [-]

Given the rest of the paragraph, I believe the parent is trying to say that merely improving developer skills is not valuable to the company, not that improving developer skills cannot provide value in terms of improved work product, morale, retention, etc.

kajaktum 2 hours ago | parent | prev | next [-]

You must be lucky then.

simonw 6 minutes ago | parent [-]

Realizing now that I've been both lucky and selective - I've always picked the kind of employers where this culture is baked in.

01284a7e 2 hours ago | parent | prev [-]

The opposite is true in my case, though one organization did have a small budget for things like AWS certs. I remember that almost everyone who got those certificates never really learned anything from them either. They would just take the exams.

v3xro an hour ago | parent | prev | next [-]

> I think I'm mostly just mourning the fact that I got to do my hobby as a career for the past 15 years, but that’s ending. I can still code at home.

It could hardly have been a hobby if people were willing to pay you for it (and good rates too)?

I will rephrase it like this - the market has shifted away from providing value to the customers of said companies to pumping itself instead and it does not need to employ people for that. Simple as.

coldtea 2 hours ago | parent | prev | next [-]

>Improving developer skills is not valuable to your company

What's valuable to a company is not necessarily what's valuable to the customers or even more so, to a civilization at large.

clvx an hour ago | parent | prev | next [-]

There's a catch. Do not break customer trust. Many people are just tinkering with solving the problem but the indirect effects have not been tackled either by the tool, processes or just some human thinking.

catlifeonmars 2 hours ago | parent | prev | next [-]

Maybe I’m just getting extremely lucky, but I don’t use AI to code at work and I’m still keeping up with my peers who are all Clauded up. I do a lot of green field network appliance design and implementation and have not really felt the pressure in that space.

I do use Claude code at home maybe a couple hours a week, mostly for code base exploration. Still haven’t figured out how to fully vibe code: the generated code just annoys me and the agents are too chatty. (Insert old man shaking fist at cloud).

lioeters an hour ago | parent | next [-]

We're witnessing a divergence between Coders and Clauders, with the latter dominating the market at a lower cost of labor plus a subscription fee to the almighty AI providers. Coders may be called in, hopefully with better remuneration, to review and debug the massive amount of code being generated. Either that or they too will be replaced by specially trained/prompted language models doing the review.

Bridged7756 an hour ago | parent | prev | next [-]

In the future Claude will keep a tight ship on dissenters. If your monthly quota doesn't exceed the 10k worth of tokens your employer will be notified and you will be flagged as a "dissenter". Your lease will be cancelled, because who would trust someone ignorant enough to not use LLMs in their daily life, and you'll be vetoed from the field for life, for clanker companies will proclaim that anyone who doesn't use LLM-assisted coding should be culled and so they'll run a tight ship.

And executives will get millions in bonuses for figuring it out, and the remaining programmers, probably one or two, will raise their necks over who's the best prompter and how everyone else was dumber than them for not figuring it out.

ej88 18 minutes ago | parent [-]

ai skeptic fanfic evolves in fascinating ways every day

jmmv 2 hours ago | parent | prev [-]

> the generated code just annoys me and the agents are too chatty

I’ve eyerolled way less with Codex CLI and the GPT models than with Claude.

bluecheese452 an hour ago | parent | prev | next [-]

What about a company with high security requirements that doesn't allow LLMs? Like gov-type work.

stingraycharles 3 hours ago | parent | prev | next [-]

> Improving developer skills is not valuable to your company.

Yet every company does it, except the worst sweatshops.

titzer 3 hours ago | parent | prev | next [-]

The irony is that the vast deskilling that's happening because of this means that most "software engineers" will become incapable of understanding, let alone fixing or even building new versions of the systems that they are utterly dependent on.

There should be thousands or tens of thousands of people worldwide who can build the operating systems, virtual machines, libraries, containers, and applications that AI is built on. But the number will dwindle and we'll ironically be unable to build what our ancestors did, utterly dependent on the AI artifacts to do it for us.

God I hope it doesn't all crash at once.

tuvang 2 hours ago | parent | next [-]

There is a deadly game of chicken going on. Junior recruiting has already stopped for the most part. The only way this doesn't end in catastrophe is if AI becomes genuinely as good as the most skilled developers before we run out of them. Which I doubt very much but don't find completely impossible.

theshrike79 2 hours ago | parent | next [-]

And the irony is that AI usage should make onboarding juniors easier.

Before it was "hey $senior_programmer where's the $thing defined in this project?", which either required a dedicated person onboarding or someone's flow was interrupted - an expected cost of bringing up juniors.

Now a properly configured AI Agent can answer that question in 60 seconds, unblocking the Junior to work on something.

And no, it doesn't mean Juniors or anyone else get to make 10k line PRs of code they haven't read nor understand. That's a very different issue that can be solved by slapping people over the head.

bragr an hour ago | parent [-]

The problem is that juniors given access to AI don't seem to learn as much. AI just gives them fish over and over instead of teaching them how to fish.

theshrike79 7 minutes ago | parent | next [-]

Yea, giving people a blank Claude with no setup will get you that.

What you could do is encourage them (or force them, with IT's assistance) to use a prompt (or hook or whatever) that refuses to do the work for them, instead telling them where to change things and what to change without actually doing the work.

andrekandre 40 minutes ago | parent | prev [-]

  > The problem is that juniors given access to AI don't seem to learn as much.

i see this first-hand; they don't even know what they don't know, so they circle over and over with AI leading them down rabbit holes and into code that breaks in weird ways they can't even guess how to fix... stuff that a real programmer would have written in a few minutes, let alone hours or days...
flir 2 hours ago | parent | prev [-]

Or if code quality stops mattering, in a kind of "OK, the old codebase is irretrievably spaghettified. Let's just have the chatbot extract all the requirements from it and build a clean-room version" kind of way. It's also not impossible we go that route.

turlockmike 2 hours ago | parent | prev | next [-]

How many kernel devs does the world need? A dozen or two?

It will be the same with software. AI will be writing and consuming most software. We will be utilizing experiences built on top of that, probably generated in real time for hyper personalization. Every app on your phone will be replaced by one app. (Except maybe games, at least for a short while longer).

Everyone's treating writing code as this reverent thing. No one wrote code 100 years ago. Very few today write assembly. It will become lost because the economic necessity is gone.

It's the end of an era, but also the beginning of a new one. Building agentic systems is really hard, a hard enough problem that we need a ton of people building those systems. AI hardware devices have barely registered yet; we need engineers who can build and integrate all sorts of systems.

Engineering as a discipline will be the last job to be automated, since who do you think is going to build all the worlds automation?

vdqtp3 16 minutes ago | parent [-]

> How many kernel devs does the world need? A dozen or two?

You're low by several orders of magnitude. "The 2025 development cycle saw 2,134 developers contribute to [Linux] kernel 6.18" [1]

[1] https://commandlinux.com/statistics/linux-kernel-contributor...

qsera 2 hours ago | parent | prev | next [-]

Trust me. All those people do it for the love of doing it, so I don't think they will outsource the jobs to some automation....

I have been coding since long before the internet and before there was huge demand for software devs... and I would keep coding even after there is no demand for it.

nicksergeant 3 hours ago | parent | prev | next [-]

I feel I've upskilled in so many directions (not just "ability to prompt LLMs") since going all in on LLM coding. So many tools, techniques, systems, and new areas of research I'd never have had the time to fully learn in the past.

I have a hard time believing any tenured developer is not actually learning things when using LLMs to build. They make interesting choices that are repeatable (new CLIs I didn't even know existed, writing scripts to churn through tricky data, using specific languages for specific tasks like Go for working through large batches of tasks concurrently, etc.)

Anyone not learning things via LLM coding right now either doesn't care at all about the underlying code/systems, or they had no foundational knowledge or interest in programming to begin with (which is also a valid way to use these tools, but they don't work very well without guidance for too long [yet]).

titzer an hour ago | parent | next [-]

Learning calculus by watching the professor solve integrals on the board for an hour doesn't result in the same level and depth of understanding as working through homework every week for a semester. If you ran off to your TA to solve every problem in your homework, you just wouldn't learn calculus.

I've vibe coded plenty. I mostly don't look at the crap coming out. Don't want to. When I do I absorb a tiny bit, but not enough to recreate the thing from scratch. I might have a modicum more surface-level knowledge, but I don't have deep understanding and I don't have skills. To the extent that I've fixed or tweaked AI-generated code, it's not been to restructure, rearchitect, or refactor. If this is all I did day in and day out, my entire skillset would atrophy.

nicksergeant an hour ago | parent [-]

"I mostly don't look at the crap coming out."

This is pretty much my point. I use LLMs to code _and_ to learn. I read everything that comes out. Half of it is wrong or incomplete. The other half saved me a bunch of time and taught me things.

Waterluvian 3 hours ago | parent | prev | next [-]

I think there's a considerable difference in its ability to help with breadth vs. depth of expertise.

tripledry 2 hours ago | parent | prev | next [-]

For me both are true at the same time.

I vividly remember understanding how calculus works after watching some 3blue1brown videos on youtube, but once I looked at some exercises I quickly realized I was not able to solve them.

Similar thing happens with LLMs and programming. Sure I understand the code but I'm not intimately familiar with it like if I programmed it "old school".

So yes, I do learn more, but I can't shake the feeling that there is some Dunning-Kruger effect going on. In essence I think that "banging my head against the wall" while learning is a key part of the learning process. Or maybe it's just me :D

mwigdahl 2 hours ago | parent [-]

It's not just you. I feel the same thing, and I saw it in practice helping my son study for a chemistry test just last night. He had worked through a bunch of problems by following the steps in his notes and got the right answers, but couldn't solve them without the notes because his comprehension of why he was taking all the steps wasn't solid.

Once we addressed that, he did great solo. Working the mechanics of the problems with the notes helped, but it was getting independent understanding of the reason for each step that put everything together for him.

agentultra an hour ago | parent | prev | next [-]

> Anyone not learning things via LLM coding right now either doesn't care at all about the underlying code/systems

How many bytes is a pointer in C? How many bytes is a shared pointer in C++? What does sysctl do? What about fsync?

What is a mutex lock? How is it different from a spin lock?

You want to find the n nearest points to a given point on a 2-D Cartesian plane. Could you write the code to solve that on your own?

Can you answer any of these questions without searching for the answer?
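
For what it's worth, the last question has a compact answer using only the standard library. A minimal sketch in Python (the function name and sample points are mine, purely illustrative):

```python
import heapq
import math

def n_nearest(points, target, n):
    """Return the n points closest to target by Euclidean distance."""
    return heapq.nsmallest(n, points, key=lambda p: math.dist(p, target))

# Example: the three closest points to the origin
pts = [(1, 1), (5, 5), (0, 2), (3, 0), (4, 4)]
print(n_nearest(pts, (0, 0), 3))  # [(1, 1), (0, 2), (3, 0)]
```

(For repeated queries over a large static point set, a k-d tree would be the better tool; the heap version is O(m log n) per query over m points.)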

I don't use LLMs and I learn things fine. Always have. For several decades. I care deeply about the underlying code and systems. It annoys me when people say they do and they cannot even understand how the computer works. I'm fine with people having domain-specific knowledge of programming: maybe you've only been interested in web development and scripting DOM elements. But don't pretend that your expertise in that area means you understand how to write an operating system.

Or worse: that it prevents you from learning how to write an operating system.

You can do that without an LLM. There's no royal road. You have to understand the theory, read the books, read the code, write the code, make mistakes, fix mistakes, read papers, talk to other people with more experience than you... and just write code. And rewrite it. And do it all again.

I find the opposite is true: those who use LLM coding exclusively never enjoyed programming to begin with, only learned as much as they needed to, and want the end results.

nicksergeant an hour ago | parent [-]

Agree with pretty much everything you wrote here, I guess with the addendum that LLMs can be a part of the learning experience you're describing. It's as easy as telling the LLM "don't write a single line of code nor command, I want to do everything, your goal is to help me understand what we're doing here."

There are always going to be people who just want the end result. The only difference now is that LLM tools allow them to get much closer to the end result than they previously were able to. And on the other side, there are always going to be people who want to _understand_ what's happening, and LLMs can help accelerate that. I use LLMs as a personalized guide to learning new things.

zozbot234 2 hours ago | parent | prev | next [-]

What do you mean by "LLM coding"? That's not a very meaningful term, it covers everything from 100% vibe coded projects, to using the LLM to gradually flesh out a careful initial design and then verifying that the implementation is done correctly at every step with meticulous human review and checking.

nicksergeant an hour ago | parent [-]

The latter.

anovikov 2 hours ago | parent | prev [-]

This. I never had the patience to figure out how to build a from-scratch iOS app because it required too much boilerplate work. Now I do, and I got to enjoy Swift as a language and learned a lot of iOS (and Mac) APIs.

JustResign an hour ago | parent [-]

But it isn't "from scratch", is it? It's "from Claude".

nicksergeant an hour ago | parent [-]

If you build a house from scratch but you didn't mill the lumber, did you build it from scratch?

If you make a pizza from scratch but you used canned sauce was it from scratch? What if you used store bought dough? What if you made the sauce and the dough but you didn't grow the tomato?

hnthrow0287345 2 hours ago | parent | prev | next [-]

>But the number will dwindle and we'll ironically be unable to build what our ancestors did, utterly dependent on the AI artifacts to do it for us.

That's only a brief moment in time. We learned it once, we can learn it again if we have to. People will tinker with those things as hobbies and they'll broadcast that out too. Worst case we hobble along until we get better at it. And if we have to hobble along and it's important, someone's going to be paying well for learning all of that stuff from zero, so the motivation will be there.

Why do people worry about a potential, temporary loss of skill?

doctorwho42 2 hours ago | parent | next [-]

Because they may have studied history... There are countless examples of eras of lost technology due to a stumble in society, where those societies were never able to recover the lost "secrets" of the past. Ultimately, yes, humans can rediscover/reinvent how to do things we know are possible. But it is a very real and understandable concern that we could build a society that slowly crumbles, unable to relearn how to maintain the systems it relies upon fast enough to stop the continued degradation.

Like, yeah, you have the resources right now to bootstrap your knowledge of most coding languages. But that is predicated on so many previous skills learned throughout your life, adulthood and childhood, many of which we take for granted. And ultimately AI/LLMs aren't just affecting developers; they are infecting all strata of education. So it is quite possible that we build a society that is entirely dependent on these LLMs to function, because we have offloaded the knowledge from society's collective mind... And getting it back is not as simple as sitting down with a book.

hnthrow0287345 2 hours ago | parent [-]

And we're still here right? We have more books and knowledge and capabilities than ever. Despite theoretically losing knowledge along the way, we're okay (mostly).

Society can replace the systems it relies on. The replacement might not be the best, but it'll probably handle things until we can reinvent a newer, better system. It probably won't be easy, but you can't convince me that humanity suddenly cannot adapt and fix problems right in front of them. How long does history have us doing that?

These are extraordinary claims that all of society will just become dumb and not be able to do any of this. History is also littered with people fretting about the next generation not being smart enough or whatever, and those fears rhyme pretty closely with what we're talking about here.

Tomis02 37 minutes ago | parent [-]

You could have lived 200 years. But instead, people decided they'd rather invest in crypto or LLMs.

Maybe humans will still be here in a century. But you won't be. It didn't have to be this way.

bit-anarchist 11 minutes ago | parent [-]

I don't see how they are actually exclusive in the long term. Crypto investment isn't that big, and LLMs, or AI in general, may support better treatments, possibly allowing people to reliably live to 200 years.

Waterluvian 2 hours ago | parent | prev | next [-]

I imagine it being a "does anybody know COBOL?!" moment, but much sooner than sixty years from now.

RhysU 2 hours ago | parent [-]

COBOL also came to mind.

The COBOL thing seems to be working out just fine last I heard. Today a small number of people get paid well to know COBOL's depths and legacy platforms/software. The world moved on, where possible, to lower cost labor and tools.

Arguably, that outcome was the right kind of creative destruction. Market economics doesn't incentivize any other outcome long term. We'll see the arc of COBOL play out again with LLM coding.

jerf an hour ago | parent [-]

I've been waiting for the article talking about how AI is affecting COBOL. Preferably with quotes from actual COBOL programmers since I can already theorize as well as the next guy but I'm interested in the reports from the field.

While LLMs have become pretty good at generating code, I think some of their other capabilities are still undersold and poorly understood, and one of them is that they are very good at porting. AI may offer the way out for porting COBOL finally.

You definitely can't just blindly point it at one code base and tell it to convert to another. The LLMs do "blur" the code, I find, just sort of deciding that maybe this little clause wasn't important and dropping it. (Though in some of the cases I've encountered, I understand where that comes from: when the old code is twisty and full of indirection, I often have a hard time as a human being sure what is and isn't used just by reading the code, too.) But the process is still way, way faster than the old days of typing in the new code one line at a time while staring at the old code. It's definitely way cheaper to port a code base into a new language in 2026 than it was in 2020; back then it was so expensive it was almost always not even an option. I think a lot of people have not caught up with the cost reductions in such porting, and are not correctly factoring them into their plans.

It is easier than ever to get out of a language that has some fundamental issue that is hard to overcome (performance, general lack of capability like COBOL) and into something more modern that doesn't have that flaw.

FpUser 2 hours ago | parent | prev [-]

>"That's only a brief moment in time. We learned it once, we can learn it again if we have to. "

Yes we can, but there is a big problem here. We will "learn it again" only after something breaks, and the way the world currently functions there might not be time to react. It is like growing food on an industrial scale: we slowly learned it over time. If it breaks now, with the knowledge gone, having to learn it again would end civilization as we know it.

hnthrow0287345 2 hours ago | parent [-]

>It is like growing food on industrial scale.

How many people do you think know how to do that today? It's in the millions (probably tens to hundreds of millions), scattered all across the globe, because we all need to eat. Not to mention all of the publications on the topic in many different languages. The only credible case for everyone forgetting how to farm is nuclear doomsday, and at that point we'll all be dead anyway.

>If it breaks now with the knowledge gone and we have to learn it again it will end the civilization as we know it.

I don't think there is a single piece of technology so critical to civilization that everyone alive could forget how to do it and there would also be zero documentation on how it works.

These vague doomsday scenarios around losing knowledge and crashing civilization just have zero plausibility to me.

kingkawn 2 hours ago | parent | prev | next [-]

If a catastrophic failure occurs we will have to return to first principles and re-derive the solutions. Not so bad, probably enlivening even to get to spin up the mind again after a break.

cdetrio an hour ago | parent [-]

We found 500 zero-days in ten year old widely used open-source projects. Was that not a demonstration of the catastrophic failure of human debugging capability?

anon291 an hour ago | parent | prev [-]

I mean, there should be. But there's not. Despite the millions of CS grads produced, many people could not reasonably be expected to produce many "standard" parts of a software stack.

qsera 2 hours ago | parent | prev [-]

> I got to do my hobby as a career for the past 15 years, but that’s ending.

Frankly I don't think so. AI built on LLMs is the perpetual motion machine scam of our time, but it is cloaked in unimaginable complexity, and thus it is the perfect scam. Yet even the most elaborately hidden power source in a perpetual motion machine cannot fool nature, and the machine will come to a complete stop as that source runs out.

Waterluvian 2 hours ago | parent | next [-]

I love the perpetual motion machine / thermodynamics analogy.

It kind of feels like companies are being fooled into outsourcing/offshoring their jr. developer level work. Then the companies depend on it because operational inertia is powerful, and will pay as the price keeps going up to cover the perpetual motion lie. Then they look back and realize they're just paying Microsoft for 20 jr. developers but are getting zero benefit from in-house skill development.

colechristensen 2 hours ago | parent | prev [-]

This is silly. I can build products in a weekend that would take me a year by myself. I am still necessary 1% of the time, for debugging, design, and direction, and those are not at all shallow skills. I have some graduate algebra texts on the way; my math friend is guiding me through them because I have found a publishable result and need to shore up my background before writing the paper...

It's not perpetual motion, it's very real capability, you just have to be able to learn how to use it.

qsera 2 hours ago | parent | next [-]

No one is saying that it cannot do what you say now.

What I am saying is that once the high-quality training data runs out, its capabilities will drop pretty fast. That is how I compare it to perpetual motion machine scams. A perpetual motion machine appears as if it will continue to run indefinitely; that is analogous to the impression you have now. You feel this will go on and on forever, and that is the scam you are falling for.

WarmWash an hour ago | parent | next [-]

>What I am saying is that once the high quality training data runs out, it will drop in its capabilities pretty fast.

That's more a misunderstood study that over time turned into a confidently stated fact. Yes, the model collapses if you loop the output to the input. But no, that's not how it's done.

The reality is that all the labs are already using synthetic training data, and have been for at least a year now. It basically turned out to be a non-issue if you have robust monitoring and curation in place for the generated data.

qsera an hour ago | parent [-]

>using synthetic training data

yea, look up how it is done.

It is exactly how a perpetual motion machine scam would project an appearance of working: using a generator to drive a motor, and the motor driving the generator... something that would obscure the fact that there is energy loss happening along the way...

WarmWash 14 minutes ago | parent [-]

I'm confused by the point you are trying to make, because they are using synthetic data and the models are getting stronger.

There is no "conservation of fallacy" law (bad data must conserve its level of badness), so I'm struggling to connect the dots on the analogy, unless I ignore the fact that training on synthetic data works, is being used, and the models are getting better.

_aavaa_ 2 hours ago | parent | prev [-]

Why would the capabilities drop instead of stagnate?

qsera 2 hours ago | parent [-]

Because technologies, programming languages, and best practices won't stay frozen. If LLMs cannot catch up with them, I think that can be considered a drop in capability. No?

coldtea 2 hours ago | parent [-]

Close, but no. What will happen is that "technologies, programming languages, best practices" will stay frozen because human innovation will drop, and the whole field will stagnate.

californical 9 minutes ago | parent [-]

This is the biggest fear! I don’t see an easy fix.

Will the developer of a new programming language be able to reach out to model companies to provide a huge amount of training data, ensuring the models are good at that new language? I don't think a small team can write enough code; the models already struggle in medium-popularity languages that have years of history. They hallucinate Lua functionality sometimes, for example, even though I'm sure there is lots of Lua code out there.

So if most people use coding agents, we’re stuck with the current most popular languages because no new language will get past the barrier of having enough code that models can write it well, meaning nobody adopts the new language, etc.

Same thing with libraries and frameworks: technical decisions are already being made based on "is this popular enough that the agents can use it well?" rather than on a newer library that meets our needs perfectly but isn't in the training data.

askafriend 2 hours ago | parent | prev | next [-]

You can see their ego trying to protect itself.

coldtea 2 hours ago | parent | prev | next [-]

>This is silly. I can build products in a weekend that would take me a year by myself

Is the world any better for them existing? Is the decline of coding and software engineering skills in humans, from outsourcing the practice to AI, worth it and sustainable long term?

colechristensen 18 minutes ago | parent [-]

> Is the world any better for them existing? Is the decline of coding and software engineering skills in humans, from outsourcing the practice to AI, worth it and sustainable long term?

The world is going to be no worse than it was when humans transitioned from writing assembly to writing compilers for high-level languages. Assembly is still necessary, but not that often. In the same way, writing code is going to become less necessary: tools will mostly be specified in higher-level language, in standards and requirements documents, instead of code, with specific, exact coding needed only occasionally.

Programmers were mostly solving the same plumbing problems over and over, in secret because of "proprietary" needs to hide the code, but one million separate integrations of your billing backend with Stripe didn't really add to humanity. We're cutting out the boring middle drudgery, and human effort is going to be freed up to work on the edges of human knowledge instead of tromping around in the middle.

tpdly 2 hours ago | parent | prev [-]

You're fooling yourself.

People yeeting out a (shitty) GitHub clone with Claude in a week apparently can't imagine it, but if you know the shit out of Rails, start with good boilerplate, and have a good git library, a solo dev can also build a (shitty) GitHub clone in a week. And they'll be able to take it somewhere, unlike the LLM ratsnest that will require increasingly expensive tokens to (frustratingly) modify.

mikkupikku 2 hours ago | parent | next [-]

You're fooling yourself. It's very easy to get demonstrably working results in an afternoon that would take weeks at least without coding agents. Demonstrably working, as in you can prove the code actually works by then putting it to use.

I had a coding agent write an entire declarative GUI library for mpv userscripts, rendering all widgets with ASS subtitles, then proceeded to prove to my satisfaction that it does in fact work by using it to make a node editor for constructing ffmpeg filter graphs and an in-mpv nonlinear video editor. All of this is stuff I already knew how to do in practice and had intended to do one day for years now, but I never bit the bullet because I knew it would turn into weeks of me poring over auto-generated ASS doing things it was never intended to do, figuring out why something is rendering subtly wrong. Fairly straightforward but a ton of bitch work. The LLM blasted through it like it was nothing.

Fooling myself? The code works, I'm using it. You're fooling yourself.

bachmeier an hour ago | parent | next [-]

> Fairly straightforward but a ton of bitch work. The LLM blasted through it like it was nothing.

One might argue that this is a substitute for metaprogramming, not software developers.

trollbridge 42 minutes ago | parent [-]

It's interesting that more people haven't talked about this. A lot of so-called agentic development is really just a very roundabout way to perform metaprogramming.

At my own firm, we generally have a rule we do almost everything through metaprogramming.
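
To make the comparison concrete, here is a toy illustration (names and structure are my own, purely hypothetical) of the kind of repetitive class definitions that either an agent or a few lines of plain metaprogramming can stamp out:

```python
def make_record(name, fields):
    """Generate a simple record class with the given field names,
    instead of hand-writing (or agent-generating) each one."""
    def __init__(self, **kwargs):
        for f in fields:
            setattr(self, f, kwargs.get(f))
    # type() builds the class dynamically: name, bases, attributes
    return type(name, (), {"__init__": __init__, "fields": fields})

Point = make_record("Point", ["x", "y"])
p = Point(x=1, y=2)
print(p.x, p.y)  # prints: 1 2
```

The difference is that the generator is a dozen auditable lines rather than a pile of emitted source you have to review after the fact.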

zozbot234 2 hours ago | parent | prev [-]

> Demonstrably working, as in you can prove the code actually works by then putting it to use.

That's not how you prove that code works properly and isn't going to fail due to some obscure or unforeseen corner case. You need actual proof that's driven by the code's overall structure. Humans do this at least informally when they code; AIs can't do it with any reliability, especially not for non-trivial projects (for reasons that are quite structural and hard to change), so most coding agents simply work iteratively to get their test results to pass. That's not a robust methodology.

coldtea 2 hours ago | parent [-]

> That's not how you prove that code works properly and isn't going to fail due to some obscure or unforeseen corner case.

So? We didn't prove human code "isn't going to fail due to some obscure or unforeseen corner case" either (aside from the tiny niche of formal verification).

So from that aspect it's quite similar.

>so most coding agents simply work their way iteratively to get their test results to pass. That's not a robust methodology.

You seem to imply they do some sort of random iteration until the tests pass, which is not the case. Usually they can see the test failing, and describe the issue exactly in the way a human programmer would, then fix it.

zozbot234 an hour ago | parent [-]

> describe the issue exactly in the way a human programmer would

Human programmers don't usually hallucinate things out of thin air, AIs like to do that a whole lot. So no, they aren't working the exact same way.

coldtea an hour ago | parent [-]

>Human programmers don't usually hallucinate things out of thin air

Oh, you wouldn't believe how much they do that too, or are unreliable in similar ways. Bullshitting, thinking they tested X when they didn't, misremembering things, confidently declaring that X is the bottleneck and spending weeks refactoring without measuring (only for it to turn out not to be), the list goes on.

>So no, they aren't working the exact same way.

However they work internally, most of the time, current agents (of say, last year and above) "describe the issue exactly in the way a human programmer would".

qsera an hour ago | parent [-]

That is not hallucinating...

LLM hallucination is not an edge case. It is how they generate output 100% of the time. Mainstream media only calls it "hallucination" when the output is wrong, but from the point of view of an LLM, it is working exactly as it is supposed to...

coldtea 31 minutes ago | parent [-]

>LLM hallucinating is not an edge case. It is how they generate output 100% time

If it matches reality enough of the time, which it does, it doesn't matter. Especially in a coding setup, where you can verify the results, have tests you wrote yourself, and the end goal is well defined.

And conversely, if a human is a bullshitter, or ignorant, or a liar, or stupid, it doesn't matter that they end up with useless stuff "in a different way" than an LLM hallucinating. The end result, the low utility of their output, is the same.

Besides, one theory of cognition (pre-LLM, even) treats the human brain as a prediction machine. In which case it's not that different from an LLM in principle, even if the scope and design are better.

colechristensen 24 minutes ago | parent | prev [-]

I also did a native implementation of git so I could use an S3-compatible data store; your Rails guru can't do that.

Objectively, my GitHub clone is still shitty, BUT it got several of the ways GitHub is shitty out of my way and allowed me to add several features I wanted, no small one of which was GitHub not owning my data.

I don't know the shit out of Rails and I don't want to, I know the shit out of other things and I want the tools I'm using to be better and Claude is making that happen.

It's a little odd that the skepticism rises to the level of people telling me I'm delusional for being satisfied that I've created something useful for myself. The opposition to AI/LLMs seems to be growing into a weird morality cult trying to convince everybody else that they're leading unhappy, immoral lives. I'm exaggerating, but it looks like things are going in that direction... and in my house, so to speak, here on HN, there are factions. Like programming language zealots but worse.

bitmasher9 3 hours ago | parent | prev | next [-]

Picking out my favorite idea out of many: we do need ways to stay mentally sharp in the age of AI. Writing and publishing is a good one. I also recommend stimulating human conversations and long-form reading.

More and more, the bar is being lowered. Don’t fall to brain rot. Don’t quiet quit. Stay active and engaged, and you’ll begin to stand out among your peers.

cfiggers 3 hours ago | parent | next [-]

> we do need ways to stay mentally sharp in the age of AI.

Here's my advice: if there's someone around you who can teach you, learn from them. But if there isn't anyone around you who can teach you, find someone around you who can learn from you and mentor them. You'll actually grow more from the latter than from the former, if you can believe that.

I think there's a broad blindness in industry to the benefits of mentorship for the mentors. Mentoring has sharpened my thinking and pushed me to articulate why things are true in a way I never would have gone to the effort of otherwise.

If there are no juniors around to teach, seniors will forever be less senior than they might have been had they been getting reps at mentorship along the way.

theshrike79 2 hours ago | parent | next [-]

A long-standing truth in martial arts circles has been that you can't advance beyond a certain belt before you teach classes.

It's purely because if you can't teach something, you don't really understand it.

And the act of having to simplify and break down a skill to explain it to others improves your knowledge of it.

efromvt 2 hours ago | parent | prev [-]

I haven't heard this benefit for mentors clearly articulated before (probably just missed it), but I've definitely felt it. I guess it's a deeper version of how writing and other communication force clarity and organization of thought, because mentorship conversations are so focused on extracting the why as well as the what.

ramon156 3 hours ago | parent | prev | next [-]

I can confidently say that, yes, reading helps a lot. My mental model has shifted a bit: words are cheap (printing -> writing -> typing -> generating), and we should accept that there is such a thing as high-quality text.

I haven't really been a reader, but I can definitely notice when a book/text is "hard". I'm currently reading the Old Testament, and I understand very little (even the Oxford one, which has a lot of annotations, is hard for me). I like this, because it's a measurement of what I don't know (if that makes sense).

CoastalCoder 3 hours ago | parent | next [-]

For the first time in quite a while, I've started reading a challenging, non-computer book ("The New Testament in its World").

I'm trying to decide if my attention span has atrophied, or if I'm just more aware now of my ADD.

Either way, I'm hopeful that my attention span for this kind of reading will grow with practice.

AnimalMuppet 3 hours ago | parent | next [-]

I too have noticed my attention span having atrophied. It was pre-AI, at least for me. Post-internet, though.

rkomorn 2 hours ago | parent [-]

I think browser tabs and screen (the terminal multiplexer) did it for me.

tayo42 2 hours ago | parent | prev [-]

If you haven't read a book in a while, you notice that reading is a thing you need to practice.

haspok 2 hours ago | parent | prev [-]

I tried reading Proust's In Search Of Lost Time some time ago, in which the first 10-20 pages are about a guy lying in his bed at night and observing his own thoughts (roughly). And I quickly realised how I was reading the words and even sentences, but couldn't grasp the meaning of them - I couldn't produce a "mental model" or image of what it was about. It was a very humbling experience.

I used to be an avid reader as a child, even as a teenager. That was a long time ago. I'm looking forward to that time when I will have the mental capacity to read long prose again.

DiscourseFan 2 hours ago | parent | prev | next [-]

There are many things the AI can't do.

cyanydeez 3 hours ago | parent | prev | next [-]

I'm pretty sure all this AI is built on top of Silicon Valley's technobabble of a "permanent underclass", which seems to include zero introspection as to why we're just going to accept the feudal overlords of technology.

But besides that, it's interesting so many people are willing to tailor their entire workflow and product to indeterminate machines and business culture.

I recommend everyone stop using these infernal cloud devices and start with a nice local model that doesn't instantly give you everything, but is quite capable of removing a select amount of drudgery, which is rather relaxing. And as soon as you get too lazy to do enough specifying or real coding, it fucks up your dev environment and you slap yourself a hundred times wondering why you ever trusted someone else to properly build your artifacts.

There's definitely some philosophy being edged into our spaces that needs to be combated.

flir 3 hours ago | parent | next [-]

I'm pretty sure the -as-a-service stage is only temporary.

The local models are only going to get better, and the improvement curve has to top out eventually. Maybe the cloud models will still give you a few extra percentage points of performance, especially if they're based on data sets that aren't available to the public, but it won't make much difference on most tasks and the local models will have a lot of advantages too.

cyanydeez 2 hours ago | parent [-]

It's definitely not temporary from POV the billionaires trying to carve out a worker-free lifestyle.

EdgeNRoots 3 hours ago | parent | prev | next [-]

I agree on the over-reliance part, but I don’t think it’s AI itself. It’s how people choose to use it.

Most people are outsourcing thinking instead of using it to go deeper. The tools aren’t the problem, the default behavior is.

mday-edamame 3 hours ago | parent [-]

True, but the tools make the default behavior so tempting.

I have a friend who uses Google Maps to find places, then memorizes the route there and closes the app to navigate because he wants to build a better mental map of our city. Meanwhile, I just check the app every five seconds like a dummy, and my hippocampus stays small.

draxil 2 hours ago | parent | next [-]

This is a good parallel. In the 90s when I learned to drive I was quite good at navigating. Now google maps is on a screen in my car telling me where to go whenever I drive beyond my most common routes.

Really, all the research telling us about AI skill atrophy... we should have guessed it from previous experience.

guzfip 2 hours ago | parent [-]

Old people my entire life have made fun of younger people for “not being able to read maps” or something.

But I’ve never seen anyone follow a GPS as religiously, into as many obvious dead ends, as elderly Uber drivers.

qsera 2 hours ago | parent | prev [-]

Your friend uses Google Maps, while Google Maps uses you.

guzfip 3 hours ago | parent | prev [-]

> which seems to have zero introspection as to why we're just going to accept the feudal overlords of technology.

You’ve let them in and given them power in many aspects of your life without even a whimper of resistance. Of course you’ll accept them as your lords.

sodapopcan 2 hours ago | parent | prev | next [-]

Or, you know, writing some code every day.

keybored 3 hours ago | parent | prev [-]

Do you want a Stairmaster with that elevator? Life is for living, ostensibly. This Inevitabilism drone choir[1] may be correct that it will take my current job, and that after that maybe there will be nothing fruitful left in that department. But I can’t imagine a life situation where I’m both surviving and using thinking-with-my-brain as some retirement-home pastime / “brainrot” preventer.

> Stay active and engaged, and you’ll begin to stand out among your peers.

Here’s how the rat race looks in the age of AI and how you can stay ahead.

[1] https://news.ycombinator.com/item?id=47487774

jpfromlondon 3 hours ago | parent [-]

hoped for something useful in your link, found drivel.

keybored 2 hours ago | parent | next [-]

Given your shattered hope and the fact that you came to it from the same author must have meant that something in this latest comment appealed to you. Sorry to disappoint! Can I interest you in some of my other musings instead? To salvage that hope of yours.

jpfromlondon 2 hours ago | parent [-]

Oh absolutely, I'll have a poke around.

For the record I'm not an ai doomer, but I am pragmatic, and the lack of hope is merely a foundation.

RALaBarge 2 hours ago | parent | prev [-]

it's drivel all the way down, act accordingly

Thanemate 3 hours ago | parent | prev | next [-]

Funnily enough, I saw this post as I was placing my HN account on hiatus, because I'm tired of pretending that the quality of discourse is on par with what I've been used to reading and participating in.

We're obviously in an era where "good enough" is taken so far that what used to be the middle of the (fictional) line is no longer the middle point but a new extreme. You're either someone who cares about the output or someone who cares how readable and easy to extend the code is.

I can only assume this is done on hopeful purpose: the hope that LLMs will "only keep improving linearly" to the point where readability and extensibility are not my problem but "tomorrow's LLM's" problem.

inanutshellus 2 hours ago | parent | next [-]

Ok, but if you're a person who likes HN discourse and thinks "eternal September" has happened... what's your plan?

You'll still come here, read the comments, see something engaging, want to reply, and... feel sad because, shakes fist at [datacenter] clouds, it's all just bots talking to each other anyway.

Seems lame. Keep talking anyway.

latexr 2 hours ago | parent | next [-]

You’re making a lot of assumptions. They could just stop visiting HN. They don’t even need a “plan” or an alternative, they can just stop.

7777332215 2 hours ago | parent | prev [-]

I thought the same as the person you replied to. For me, the solution is to stop coming here as often and instead read traditional literature.

Soon to remove my access entirely to this website.

moron4hire 2 hours ago | parent | prev [-]

There is a lot more "yngmi" and "have fun being poor"-style attitude around here regarding LLM boosterism.

trollbridge 28 minutes ago | parent [-]

That attitude is particularly galling. Along with the "lock in now or become part of the permanent underclass".

malwrar 3 hours ago | parent | prev | next [-]

I do find it hard to tolerate the feeling of being watched online. The second-most trending dataset on huggingface right now is a snapshot of HN updating at a 5 minute interval. It makes me not want to really comment at all, just like how I don’t really publish any software I write anymore.

Turns out it sucks to produce original works when you know that, whereas previously a few people at best might see your work, now it’s a bunch of omniscient robots and maybe half of those original people are using the robots instead.

simianwords 8 minutes ago | parent | next [-]

HN has always offered its data to anyone; what changes now? Why does it matter if it's LLMs consuming your data? What a strange attitude.

niek_pas an hour ago | parent | prev | next [-]

This is really interesting to me, because it never occurred to me to feel this way. Why would I care whether my comments are ending up in some dataset somewhere that's being used to train some model? My comments are boring and mostly uninformed. Have at it.

I'm curious: would you say the feeling of being watched online is making you afraid of some repercussion, or is it something else?

TeMPOraL an hour ago | parent | next [-]

Dog in the Manger.

I get a feeling from overall anti-AI sentiment online that a lot of people feel they're entitled to 100% of value created by anything even tangentially related to their person, whether that's some intentional contribution or a random brain fart that happened in the vicinity of someone else doing something useful - and then become resentful they're not "getting their share".

There's hardly any other way to read all the proclamations of quitting to do anything because "cognitive dark forest" (itself a butchering of the original idea of "dark forest" across so many orthogonal dimensions in parallel, that it starts to look like a latent space of a transformer model).

chromacity an hour ago | parent [-]

Conversely, some people feel entitled to 100% of the value created by others. Oh, you wrote a book? Too bad, it's a part of my training data set now.

Downloading public stuff off the internet with no regard for the creator's wishes or license is bad enough, but we have many people here who defended AI companies seeding models with pirated content.

The internet is a social contract. AI is not the first thing to try and erode it for profit, but it's by far the most aggressive one.

malwrar an hour ago | parent | prev [-]

There’s definitely a fear of repercussions (I’ve been commenting on this site for over a decade now! Who knows what’s in my history...) but importantly I actually take some pride in many of the comments I write. What drew me to this site originally was how high quality everyone’s perspectives and articulation was, and I suppose I view the writing voice I’ve nurtured here as unique and special to me. It’s not about compensation, I’d just hate to see some future chatbot sound 1/1,000,000th like me I guess? Hard feeling to describe, but I’d rather just not be globbed in and instead express myself in ways that aren’t profitable or feasible to copy.

philipwhiuk 3 hours ago | parent | prev [-]

I think the immediate term action is to viciously block all crawlers.

Writing a blog yes, feeding the beast no.

ArcHound 3 hours ago | parent | next [-]

This sounds like a nice principled stance, but you won't get any traffic with this approach. That's demotivating - to me blogging is a tight balance of exploration, learning, improving and feedback. I'm not able to write without considering how this impacts the reader - removing all readers breaks the process for me.

lstodd 2 hours ago | parent | prev [-]

Yeah, everyone went all-in on "blocking all crawlers", the end result being half the internet inaccessible over VPNs. Good job, people.

kstenerud 3 hours ago | parent | prev | next [-]

> The giant plagiarism machines have already stolen everything. Copyright is dead. Licenses are washed away in clean rooms.

Isn't this what the free software movement wanted? Code available to all?

Yes, code is cheap now. That's the new reality. Your value lies elsewhere.

You can lament the loss of your usefulness as a horse buggy mechanic, or you can adapt your knowledge and experience and use it towards those newfangled automobiles.

probably_wrong an hour ago | parent | next [-]

> Isn't this what the free software movement wanted? Code available to all?

But this is not that. The current situation is closer to "what's yours is mine and what's mine is mine".

I have been releasing my writings under a Creative Commons Attribution-ShareAlike license which requires attribution and that anything built upon the material to be distributed "under the same license as the original". And yet I have no access to OpenAI's built-upon material (I know for a fact they scrape my posts) while they get my data for free. This is so far legal, but it's probably not ethical and definitely not what the free software movement wanted.

lmm 3 hours ago | parent | prev | next [-]

> Isn't this what the free software movement wanted? Code available to all?

Available to all yes. Not available to the giant corpos while the lone hobbyist still fears getting sued to oblivion. In fact that's pretty much the opposite of what the free software movement wanted.

Also the other thing the free software movement wanted was to be able to fix bugs in the code they had to use, which AI is pulling us further and further away from.

mmustapic 3 hours ago | parent | prev | next [-]

No, the free software movement wants the source code of the software you use to be available for you to modify if you wish. AI does not necessarily do that.

kstenerud 3 hours ago | parent [-]

AI makes the entirety of the software engineering profession available to you. All you have to do is ask the right way, and you can build in days what once took months or years.

Decompiling and re-engineering proprietary code has never been easier. You almost don't even need the source code anymore. The object code can be examined by your LLM, and binary patches applied.

Closed source is no longer the moat it was, and so keeping the source code to yourself is only going to hurt you as people pass you over for companies who realize this, and strive to make it easier for your LLM to figure their systems out.

mmustapic 2 hours ago | parent | next [-]

But I can't have the weights of the LLM model I'm using for this.

Arkhaine_kupo 2 hours ago | parent | prev [-]

> Decompiling and re-engineering proprietary code has never been easier. You almost don't even need the source code anymore. The object code can be examined by your LLM, and binary patches applied.

Jesus christ.

"The people who wanted everyone to have a home should be happy with the invention of the lockpick. You can just find a nice house and open the lock and move in. Ignore the lockpick company charging essentially whatever they want for lockpicks, or how they got access to everyone's keyfob, or the danger of someone breaking into your house."

That is basically your argument. AI is a copyright-theft machine, with companies owning the entire stack and able to take it away at will, and committing crimes like decompiling source code instead of clean-rooming it is not a selling point either...

The open source community wants people to upskill, people become tech literate, free solutions that grow organically out of people who care, features the community needs and wants and people having the freedom to modify that code to solve their own circumstances.

Supermancho 2 hours ago | parent [-]

> That is basically your argument. Like AI is a copyright theft machine, with companies owning the entire stack and being able to take away at will, and comitting crimes like decompiling source code instead of clean room is not a selling point either...

Stop trying to make this into some abstract argument. It's not an argument anymore. It's already happened.

How one might choose to characterize the reality is irrelevant. A vast (and growing) amount of source code is more open, for better or worse. Granted, this is to the chagrin of subgroups that had been pushing different strategies.

simoncion 2 hours ago | parent | next [-]

> It's already happened.

Agreed.

> Stop trying to make this into some abstract argument.

As you mentioned, it's not an abstract argument. It's statements of fact.

> A vast (and growing) amount of source code is more open...

No, not at all.

1) If you honestly believe that major tech companies will permit both copyright- and license-washing of their most important proprietary code simply because someone ran it through an LLM, you're quite the fool. If someone "trained" an LLM on -say- both Windows 11 and ReactOS, and then used that to produce "ReactDoze" while being honest about how it was produced, Microsoft would permanently nail them to the wall.

2) The LLMs that were trained on the entirety of The Internet are very, very much not open. If "Open"AI and Anthropic were making available the input data, the programs and procedures used to process that data, and all the other software, input data, and procedures required to reproduce their work, then one could reasonably entertain the claim that the system produced was open.

kstenerud an hour ago | parent [-]

This is looking at the current situation through the old lens.

That ship has sailed. The revolution is happening. We live in a new reality now, one where we're still trying to figure out what rules should even be.

And there will be winners and losers, and copyright and patent law will be modified in an attempt to tame the chaos, with mixed results because of all of the powerful players on both ends.

You can live on the front of it for high risk/reward, or at the back for safety. But either way, you're going to exist in this new reality and you need to decide your risk appetite.

simoncion 28 minutes ago | parent [-]

Your set of statements and their surrounding context reminds me very much of the mass grave scene in Kubrick's Vietnam War movie Full Metal Jacket: <https://www.youtube.com/watch?v=670Y3ehmU74>

Arkhaine_kupo 2 hours ago | parent | prev [-]

> Stop trying to make this into some abstract argument. It's not an argument anymore. It's already happened.

yes, and lockpicks also exist. Promoting the ability to break into homes when people are talking about the housing crisis is a crazy, short-sighted and frankly embarrassing position to take.

And mischaracterising the people in the open source community as belonging to that ideology is insulting.

> A vast (and growing) amount of source code is more open

You are misusing the word "open" here for "accessible". Having an open house and breaking into someone's home are not the same thing, even if the door ends up open either way.

> Granted, this is to the chagrin of subgroups that had been pushing different strategies.

Taking unethical shortcuts that ultimately lead to an even worse outcome is not a cause of chagrin; it's a cause of deep and utter terror and embarrassment.

Wanting people to own their skills and tech stack and to be informed, smart and engaged is a goal that "just ask the robot you don't control to break into a corporate codebase and copy it" does not even remotely help reach.

sdevonoes 3 hours ago | parent | prev | next [-]

Progress is good. But why on earth should we support Anthropic/OpenAI/etc.? What the planet needs is fewer multibillion-dollar corporations, not more.

kstenerud 3 hours ago | parent | next [-]

You don't have to. Just like you don't have to support Amazon for web services and file stores.

Or Oracle for databases.

Or Microsoft for operating systems.

Or DEC for computers.

There are perfectly good open source LLMs and agents out there, which are getting better by the day (especially after the recent leak!)

farfatched 3 hours ago | parent | prev [-]

I want to support local models and compute over SaaS models.

I want to support RISC V over Intel.

I want other things too, and on balance, Intel+Anthropic is most compliant with my various preferences, even if they're not perfect.

toofy 21 minutes ago | parent | prev [-]

i can say with pretty high confidence that few people in the free software movement want the closed-off black boxes these companies are locking away.

they’re not free in any sense of the word, from price to openness of the models. would openai cry if every bit of their models were wide open for us to use however we see fit? if so, then it’s not free, again, by any definition of the word.

staminade 2 hours ago | parent | prev | next [-]

Anti-AI articles like this seem to be the new "Doing my part to resist big tech: Why I'm switching back from Chrome to Firefox" genre that popped up on HN for a decade or so. If it makes you feel better, great, but don't kid yourself that your actions will make any difference whatsoever to the overall trajectory of AI adoption in IT or society.

beej71 an hour ago | parent | next [-]

I love it if it would affect the trajectory, but I don't think it will. I do think it will affect my trajectory, though.

raincole 15 minutes ago | parent | prev | next [-]

This genre has always been very prevalent on HN. Move from cloud to on-premise. Move away from US-based services. Move away from Gmail. Move away from Github.

simianwords 12 minutes ago | parent [-]

Was there one for memory managed languages like C# vs self managed like C/C++?

raincole 6 minutes ago | parent [-]

Rust.

simianwords 2 minutes ago | parent [-]

bit of an edge case because the support was not for the incumbent

dmix 2 hours ago | parent | prev [-]

Plus many of these articles seem maximized to attract attention on social media, which is its own machine.

Posting your most provocative and strong opinions in reaction to the latest controversy-of-the-week is what fuels the internet and culture more than anything these days. The attention economy demands hot takes mixed with preaching about every new thing.

simianwords 8 minutes ago | parent [-]

see: purity spiral politics

farfatched 3 hours ago | parent | prev | next [-]

> The AI industry is 99% hype; a billion dollar industrial complex to put a price tag on creation. At this point if you believe AI is ‘just a tool’ you’re wilfully ignoring the harm.

> (Regardless, why do I keep being told it’s an ‘extreme’ stance if I decide not to buy something?)

> The 1% utility AI has is overshadowed by the overwhelming mediocracy it regurgitates.

This sort of reasoning is why you might have been called extreme.

It's less extreme to say "many people see and/or get lots of benefit, but it's wrong to use the tool due to the harms it has".

There's nothing wrong with extreme, but since you asked.

xnorswap 3 hours ago | parent [-]

Yes, declaring AI to be 99% hype just turns away people like me from what the author has to say.

I was an AI sceptic for a long time until toward the end of last year when I seriously evaluated them, and came to realise it could add tremendous value.

When someone comes along and declares that it's all hype, it goes against my experience that it's getting things done.

I can also see the harm it does, and I hope the tooling improves to reduce that harm. For example, there's a significant lack of caching in the tooling. It's constantly re-reading the same files every day, and more harmfully, constantly fetching the same help pages and blog-posts from the web.

If it had a generous built-in HTTP cache, and instructions to maximise use of that cache, then it could avoid a lot of re-fetching of content, which would help reduce the harms.
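To make that concrete, the cache I have in mind is nothing exotic. Here's a minimal sketch in Python of a TTL file cache wrapped around a fetch; the cache directory, TTL, and injectable fetch function are all illustrative choices on my part, not anything the existing agent tools actually expose:

```python
import hashlib
import os
import time
import urllib.request


def cached_fetch(url, cache_dir="/tmp/agent_http_cache", ttl=86400,
                 fetch=lambda u: urllib.request.urlopen(u).read()):
    """Return the body for url, serving from an on-disk cache when fresh."""
    os.makedirs(cache_dir, exist_ok=True)
    # Key the cache on a hash of the URL so any URL maps to a safe filename.
    path = os.path.join(cache_dir, hashlib.sha256(url.encode()).hexdigest())
    if os.path.exists(path) and time.time() - os.path.getmtime(path) < ttl:
        with open(path, "rb") as f:
            return f.read()  # cache hit: no network request at all
    body = fetch(url)        # cache miss: one real request
    with open(path, "wb") as f:
        f.write(body)
    return body
```

A real version would respect the server's Cache-Control and ETag headers rather than a blanket TTL, but even something this crude would cut out most of the repeat fetching of the same help pages and blog posts.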

Declaring my experience to be invalid and based on nothing but hype doesn't engage people like me at all.

And it's the people like me, the middle-of-the-road developer working on enterprise software, that either need convincing to not use the tools, or for our habits to change to minimise the harm.

Because otherwise we're quietly getting on with using it, potentially destroying forests and lakes as we do.

kasey_junk 3 hours ago | parent | next [-]

It’s worse than that: in the linked “I’ve done my research” they make the tired claim that AI hallucinates API calls, which, while true, has not been a practical problem since tool calling was added.

I think the position that AI is morally troubling enough that the downsides outweigh the positives is perfectly defensible. But the entire argument becomes a joke when you can’t accurately catalog the positives.

thepasch 3 hours ago | parent | next [-]

At this point, I’m pretty sure saying “I’ve done my research” is more of an indicator that someone hasn’t done their research but would like to be taken seriously anyway by pretending they did. The kind of person who’s both smart enough to realize that an issue might be more nuanced than they present it, as well as intellectually dishonest enough to… not care.

draxil 2 hours ago | parent | prev [-]

I think the fact that you need tool calling to stop it from doing that shows the underlying issue with trusting it to do anything without a human.

SirHumphrey 2 hours ago | parent | prev | next [-]

Amara’s law: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”

ch4s3 2 hours ago | parent | prev [-]

>If it had a generous built in HTTP cache, and instruction to maximise use of the cache, then it could avoid a lot of re-fetching of content, which would help reduce the harms.

While this is a great idea, the harms are somewhat overblown. The big scare number for water consumption includes water used in power generation which itself includes evaporation from hydroelectric power.

simianwords 19 minutes ago | parent | prev | next [-]

This person gives me the vibe that they are so attached to their craft that they can't do anything about LLMs' rising ubiquity except scold and vaguely sloganeer.

Was this how other professionals dealt with their grief? Like a translator at the advent of ML-based translation? Like a lift man?

throwaway743 a few seconds ago | parent | prev | next [-]

The footer rules

keiferski 2 hours ago | parent | prev | next [-]

I think it's probably accurate to say that the vast majority of writers throughout history were writing for an extremely tiny or nonexistent audience. My favorite example of this is Nietzsche, who basically had zero readership during most of his life, beyond a few close friends, and even had to personally pay to get his books published. He only posthumously became one of the most influential thinkers of the 20th century.

So while I do worry about AI's impact on blogging/writing/etc., I do think to some extent, you either love the process or you don't. If you only write in order to have readers, you're in the wrong game.

justonceokay an hour ago | parent [-]

There are a lot of arts that were funded for a short time in our recent history that previously were absolutely not funded at all under almost any circumstance. Centralized media created centralized stars with centralized incomes

For the vast majority of history it was all community theater and “that guy in the square guy who knows the lute”

pklausler an hour ago | parent | prev | next [-]

> First let’s accept the realities. The giant plagiarism machines have already stolen everything. Copyright is dead. Licenses are washed away in clean rooms. Mass surveillance and tracking are a feature, privacy is a bug. Everything is an “algorithm” optimised to exploit.

Suppose that I have discovered a novel algorithm that solves an important basic problem much more efficiently than current techniques do. How do I hide it from the web scrapers that will steal it if I put it on GitHub or elsewhere? Should I just write it up as a paper and be content with citations and minor glory? Or should I capture AI search results today for "write me code that does X", put my new code up under a restrictive license, capture search results a day later, demonstrate that an AI scraper has acquired the algorithm in violation of the license, and seek damages?

muskstinks 3 hours ago | parent | prev | next [-]

One problem writing does have: we grew up in a massively changing and progressing software-writing era. A golden era.

Now I still show clean code videos from Bob and other old things to new hires and young colleagues.

Java got more features, granted, but the golden era of discovery is over.

The new big thing is AI, and I'm curious to see how it will feel to write real agents for my company-specific use cases.

But I'm also seeing people so bad at their daily jobs that I wish I could get their salary as tokens to use. It will change, and it changes our field.

Btw. "Is there anything, in the entire recorded history of human creation, that could have possibly mattered less than the flatulence Sora produced? NFTs had more value." I disagree: video generation has a massive impact on the industry for a lot of people. Don't downplay this. NFTs, btw., never had any impact besides moving money from A to B.

flir 3 hours ago | parent | next [-]

> But i'm also seeing people so bad in their daily jobs, that I wish to get their salary as tokens to use.

Oof. The modern "Go away or I will replace you with a very small shell script"

sd9 3 hours ago | parent | prev [-]

At least have the grace to show new hires Rich Hickey lectures or something. Uncle bob is nonsense.

muskstinks 2 hours ago | parent [-]

I'm giving them a lot more, but I assumed people know Uncle Bob. Like the Open Source Architecture books, Google's SRE books, 1:1 mentoring every week.

But yeah, there is one person made of teflon. Nothing sticks. And I could point to that teflon person in every company I've worked at so far.

sd9 2 hours ago | parent [-]

I know what you mean. Tbh I just think this isn’t for everyone. I’ve been in your position before, you can try everything but some people just can’t get it. And maybe they do and they become a little more productive, but they can’t produce production quality stuff that isn’t brittle.

I’ve never found a way around it, and I don’t want to believe that some people can’t grok this field, but that is what I’ve experienced. Maybe other people can educate better.

I’ve just found that at some point you have to limit the blast radius and move onto more productive uses of your own time.

OJFord 3 hours ago | parent | prev | next [-]

Paha, I thought this domain was 'D-Bus Hell' until I clicked in. (It's D. Bushell's blog.)

giancarlostoro 2 hours ago | parent | prev | next [-]

I find it funny how "clanker" took off and everyone uses it. It was edited into a video where someone was otherwise saying something extremely racist (the more offensive version of the n-word). For those curious, it involves a Burger King hat, schizophrenia, and an airplane; someone edited the n-word out and put in "clanker" with AI (because why not insult AI by using AI?). I do wonder if the AI uprising will involve robots killing anyone who used "clanker" in a derogatory way and sparing everyone else.

Also, yes, I know the origin is Star Wars, but it went viral recently a very specific way.

The power of edgelord memes.

damnitbuilds an hour ago | parent [-]

Years until we are forbidden from writing Clanker and have to write C***r: 3

giancarlostoro an hour ago | parent [-]

I had Claude write me some lyrics about Clankers after that one guy had an AI write a hit piece about him over denying his PR.

alfanick 2 hours ago | parent | prev | next [-]

I quit. The clankers won.

I don't see any proof that software development is not dead. Software engineering is not; it's much more than writing code, and it can be fun. But writing code is dead: there is no point in doing it if an LLM can output the same code 100x faster. Of course, architecture and operations stay in our hands (for now?).

Initially I was very sceptical; the first versions of ChatGPT or Claude were rather bad. I kept holding on to the thought that it couldn't get good. Then I spent a few months evaluating them: if you know how to code, there is no point in coding anymore. Just instruct an LLM to do something, verify, merge, repeat. It's an editor of sorts, an editor where you enter a thought and get code as an output. Changes the whole scene.

catlifeonmars 2 hours ago | parent | next [-]

I think it’s really context dependent. I haven’t found LLMs to increase my productivity in coding in my field because the quality of the output matters much more than the quantity. I don’t think it’s the same across the board though, and there are plenty of domains where code generation is a force multiplier. Sometimes you need a chainsaw and sometimes you need a scalpel and in my own experience I have found that using coding agents as scalpels is not a very efficient use of my time. shrug

chasd00 2 hours ago | parent | prev | next [-]

I think these are great for people who are already senior developers with years of coding experience. I can use Claude Code and then walk through the output and spot-fix small mistakes, or notice when it's going in the wrong direction and prompt it to fix. I think people without years of development experience using these tools can really screw themselves. The problem is every new grad is going to use Claude Code right from the start, without a decade of hand-coding to develop that wisdom.

On the other hand, I can't help but think about ASM coders lamenting C and especially C++. Also, god help you if you tell an embedded developer you use MicroPython instead of C. Maybe a current chapter is closing and a new one is beginning, and my part was in the last chapter, just like them.

I'll end by saying I really like using AI for code; it's got me excited about technology again. So many projects that were out of reach due to time (I have a family + stressful career) are now back on the table, like when I was in college with nothing but time on my hands.

zozbot234 2 hours ago | parent | prev | next [-]

LLMs don't really output the same code quality as a human, even on the smallest scale. It's not even close. Maybe you can guide them to refactor their slop up to human-written quality, but then you're still coding. You're just doing it by asking the computer to write something instead of physically typing the whole thing out on a keyboard.

mcdeltat 2 hours ago | parent | next [-]

Yeah I also keep thinking this. I don't see LLMs reliably producing code that is up to my standards. Granted I have high standards because I do take pride in producing high quality code (in all manner of metrics). A lot of the time the code works, unfortunately only for the most naive, mechanical definition of "works".

phpnode 2 hours ago | parent | prev [-]

This just isn't true at all; with guidance and guard rails they produce much better code than the average developer does. And they are only going to get better.

draxil 2 hours ago | parent | prev [-]

Useful tool, and if you're just scratching a small itch it's great.

For any serious system you still need to understand and guide the code, and unless you do some of the coding yourself, you won't. It's just that the novelty right now is skewing our reasoning.

abeppu an hour ago | parent | prev | next [-]

I think the "Leave them Behind" section at the end sort of ignores the whole "they will ruthlessly copy your material, and put aggressive extra load on your server while repeatedly stealing your work" dimension.

You can try to avoid consuming AI-generated material, but of course part-way through a lot of things you may wonder whether it is partly AI-generated, and we don't yet have a credible "human-authored" stamp. But you can't really keep them from using your work to make cheap copies of you, or at least reducing your audience by including information or insights from your work in the chat sessions of people who otherwise might have read your work.

yabutlivnWoods 11 minutes ago | parent | prev | next [-]

Such a bizarre sentiment, that the web and internet as we know/knew them are some bastion of freedom and future for humanity.

According to the author AI is 99% hype.

That 1% of AI utility can unlock more for humanity than 99.999% of blogs; static text hosted from a laptop in a closet.

The oddball position that cheap publishing via the web is a path to the next generation for humanity is 100% hype.

Other than feeding dopamine addiction humanity has not improved greatly since we read all those insipid posts on GeoCities no one remembers today.

It's all been 99%+ hype to feed Wall Street. Young GenXers and older Millennials with tech jobs were temporary political pawns and are gonna end up bag holders, like many older GenXers and Boomers who lived through the car boom, the housing boom, the retail boom.

Same old human shit, different hype.

rickcarlino 2 hours ago | parent | prev | next [-]

LLMs can produce text, but they cannot have experiences. Writing about authentic experience is still a worthwhile endeavor. Expression of a preference is also an experience when framed correctly.

justonceokay an hour ago | parent [-]

I think about Irish and British writing of dialogue, where it is extremely common for characters to only just now realize the importance of something their interlocutor said, and backtrack the conversation. Often this is done for humor. (Think of the characters correcting each other on the use of "exacerbate" in Shaun of the Dead.)

The only way to write like that is to have a real theory of mind for the two characters and understand that there are four processing speeds: that of each speaker, that of the narrator, and that of the reader.

Havoc 3 hours ago | parent | prev | next [-]

> The only winning move is not to play.

Alas, I think the tech crowd has collectively painted humanity into a corner where not playing is no longer an option.

The combination of having subverted copyright and enabled cheap machine replication kills large swaths of creativity, at least as a viable living. One can still do many things on an artisanal level, certainly, and as excited as I am about AI, it's hard not to see it as a big L for humanity's creative output.

niek_pas an hour ago | parent [-]

Interestingly, Ireland just launched a Basic Income for the Arts scheme. Many caveats (I think it's only like 300 euros a month, for a small group of people, etc.) but an interesting development nonetheless.

gkoenig 3 hours ago | parent | prev | next [-]

Man I love the design of your site, and that goldfish made my day.

For the article it was nice, but the font is really what got me.

cl0ckt0wer 4 hours ago | parent | prev | next [-]

Just because they invented cars doesn't mean you stop jogging.

bicx 3 hours ago | parent | next [-]

When they invented cars (and cars became popular and affordable), people did stop walking everywhere. Jogging wasn’t popularized until the 1970s, when we all realized we needed to be intentional with fitness in our car-based society.

rglynn an hour ago | parent | next [-]

This is a US-centric take; in Europe, particularly in cities, we walk everywhere.

There is perhaps some relevance to the analogy, however, because the US is designed in such a way that makes walking difficult to impossible. I am already seeing this pattern in vibe-coded areas, where engineers will just use AI because the code is too difficult to parse and edit by hand.

tasuki 2 hours ago | parent | prev [-]

> people did stop walking everywhere.

I didn't. Yesterday I walked 11 km for errands. Today I took a detour when walking to work, a more scenic route with less traffic.

For me walking is not much slower than using public transport (you need to get to it, then from it to the point of your destination), and not much slower than a car (stuck in traffic, finding parking, not to mention the road rage). I'd be faster on a bicycle but I'm not in a hurry and enjoy my walks.

simgt 3 hours ago | parent | prev | next [-]

They did make it very hard for people to do anything else but use a car in many, many places though...

OJFord 3 hours ago | parent [-]

In the US, perhaps, which has had the bulk of its growth post-automobile.

jordanb 3 hours ago | parent | prev | next [-]

> Just because they invented cars doesn't mean you stop jogging.

They literally made it a crime to walk down the street.

bombcar 3 hours ago | parent [-]

across the street, no?

It's also a crime to jog on the railroad tracks.

beeflet 3 hours ago | parent [-]

If it's a crime to jog on railroad tracks, and the availability of rail makes it so that everything you need is only accessible by rail, I conclude that rail prevents you from jogging.

bombcar 3 hours ago | parent [-]

I'm sorry for all the people who lived in my original SimCity towns. They must have been nearly spherical.

mmustapic 3 hours ago | parent | prev | next [-]

The one I like better is: software is great at playing chess, but that doesn't mean you cannot play too.

ramon156 3 hours ago | parent | prev | next [-]

Read the post, it's a gotcha ;P I was scared too

guzfip 3 hours ago | parent | prev | next [-]

No but everyone has gotten real fat since then.

pitched 3 hours ago | parent | prev [-]

I really like the sentiment and will quote this in the future! My own thoughts line up a bit closer to the article though, with this quote being a good summary of it:

> The 1% utility AI has is overshadowed by the overwhelming mediocracy it regurgitates.

rtpg 3 hours ago | parent | prev | next [-]

Old web stuff is still around. RSS feeds are out there. Some parts of masto are generally chill and filled with people having interesting convos.

You don't have to give up on everything to participate, but it can be a space to go to if you're tired of every social interaction being mediated by (I'm being glib) hustlers

zetanor 2 hours ago | parent | prev | next [-]

>One upside of this looming economic and intellectual depression is that the media is beginning to recognise gate keepers are no longer the hand that feeds them.

In what world is "the media" not an integral, tightly-bound part of the ratchet mechanism that seeks to suppress all distinction?

coldtea 2 hours ago | parent | prev | next [-]

>It’s never been more important to blog. There has never been a better time to blog. I will tell you why. We’re being starved for human conversation and authentic voices

The supposedly starved don't seem to care much for such food. Blogs are kind of a wasteland.

code_for_monkey an hour ago | parent | prev | next [-]

I work for a bank and I'm basically just an AI user now. Honestly it's like pulling teeth to get anyone to look at your code here. I'm just on Hacker News lol

rglover an hour ago | parent [-]

Major or regional? This is alarming beyond words.

code_for_monkey an hour ago | parent [-]

Major, big big major, and not just me either. How is it alarming beyond words? If I said "I'm an EMT and I just ask the AI everything," now that would be alarming beyond words.

Eextra953 34 minutes ago | parent | next [-]

I have a buddy who is a cop and he tells me that they use AI to write reports and even to check if their reason for pulling someone over will hold up later. As annoying as it is in SW, people using AI outside of SW is much more alarming.

rglover 29 minutes ago | parent | prev [-]

Because it's a bank and apparently a big one.

okokwhatever 11 minutes ago | parent | prev | next [-]

When will we understand that not everybody works at a FAANG? Assuming that the way to put food on the table for all software developers is always a matter of creating a new magical algorithm in a mystical programming language deployed in a unicorn architecture is so childish. 99% of all software development today is simply creating a CRUD app or refactoring a codebase because the React guys decided to change everything again.

prplfsh 42 minutes ago | parent | prev | next [-]

I honestly don't get it - how isn't everyone having a blast with AI? Every one of those side projects you never had time for you can build in a weekend. You can explore five ideas at once. You can do big refactors/cleanups you'd never be able to dream of in the past. As a software engineer it's been fantastic.

tenahu 30 minutes ago | parent | next [-]

That has been my feeling too. I have completed soo many personal projects (or improvements) that were collecting dust on my 'mental shelf'.

Without AI I would probably never get to them because realistically, I do not have dozens, or hundreds of personal hours to devote to fun, but unnecessary projects.

drchickensalad 6 minutes ago | parent | prev | next [-]

You do know that there are many different personalities people can have, right? A lot of people love writing code and don't care about those things at all.

glouwbug 9 minutes ago | parent | prev [-]

Some of us work on critical systems

danesparza an hour ago | parent | prev | next [-]

I just chased a few interesting rabbit holes because of the links to other articles in this article. Thank you for that. ;-)

Spacecosmonaut 3 hours ago | parent | prev | next [-]

"Generative AI is art. It’s irredeemably shit art; end of conversation."

I think most people cannot distinguish between "genuine" creativity and an artificial amalgamation of training data and human-provided context. For one, I do not know what already exists. Some work created by AI may be an obvious rip-off of the style of a particular artist, but I wouldn't know. To me it might look awesome and fresh.

I think many of the more human-centric thinkers will be disappointed at how many people just won't care.

none2585 3 hours ago | parent | next [-]

Further, I'd argue we KNOW people don't care if you look at the music industry.

Pop music is often composed by dozens of people who specialize in a thin sliver of the track - lyrics, vocals, drums, &c. - and then it's given a pretty face and makes the charts. That's really no different than something like Suno.

I think AI is forcing people who thought that THEIR thing was too nuanced or too complex to be replaced by technology to reckon with what makes them special.

apples_oranges 3 hours ago | parent | prev | next [-]

The question is how subtle AI can be. I feel like art sometimes seems to communicate A, and the artist intended to communicate A and perhaps some B, but clearly, it also hints at another C (and maybe also D, E, ..), which was not intended by the artist or recognised by many viewers, while to some people it's clearly there. Now where did that come from?

And can or will AI create it?

chii 3 hours ago | parent | prev | next [-]

most people are just utilitarian and do not care for "art" (in the high art sense).

AI is perfect for that. It reveals, perhaps to the dismay of those who revel in high art, that the idea that art requires genuine creativity might be an illusion, if most people find AI's output acceptable.

esafak 2 hours ago | parent | prev [-]

People have been having this debate about popular art forever. Some people do not even believe in taste, holding that everyone's artistic opinions have equal merit.

ceplabs 3 hours ago | parent | prev | next [-]

This might be the coolest personal website theme I've ever seen.

aaa_aaa an hour ago | parent | prev | next [-]

On the copyright stuff, I say good riddance.

butlike 2 hours ago | parent | prev | next [-]

A non-sequitur, but I really like the style of the blog. Good job.

juleiie 2 hours ago | parent | prev | next [-]

Yeah, well, I just don't care about the "AI dark forest".

You seriously need to go outside and touch grass if you are so defeated by another chess-winning machine.

Nobody wants to watch AI play chess; nobody wants to read AI blog posts.

AI makes human writing more valuable, not less.

I will pay good money for purely human-made books certified as made without a single word auto-generated, whether in the original or during the process of translation.

Bridged7756 26 minutes ago | parent | prev | next [-]

I just don't see it. What's truly sad is that a field allegedly filled with technically proficient people is dominated by trend chasing and flavor-of-the-month fads. To this day, people with nuanced takes are few and far between, and most people either go full force into LLM evangelism or LLM denial, the first more annoying by far.

I'm really getting tired of the programming obituaries. As if LLMs didn't fail at any complex task, as if they didn't vomit shit code, as if they didn't just copy the patterns surrounding the new code, and as if they didn't hallucinate and write downright wrong code or made-up libraries. Yet every time you bring it up, someone will come along and say "You're not using it right, then." Is it that, or is it just that they're only doing toy projects? I'm led to believe the latter.

At this point I don't know what's organic and what's not. Reddit is filled with astroturfing for Big LLM. Maybe this place is too? Even if that were not the case, I'm led to believe it isn't uncommon for people to swallow all of the Big LLM propaganda and fall into despair, or into unrealistic expectations, and just parrot it everywhere else. One thing is for certain: LLM evangelism has all the money in the world, and LLM denial doesn't. It's only natural to think the balance is tilted in terms of media presence.

At best, or worst, LLMs can't do anything you couldn't do better yourself with a scaffolding prompt plus manual editing, and at the end of the day you still need the cognitive energy to review, veto, and come up with the implementation. What does this do, exactly, other than save you a bunch of keypresses? I wonder whether the people touting it as all that really didn't think before LLMs, or just switched their brains off for them.

I used to really like this site, but I think just consuming the RSS feed is enough for me. I think lobste.rs has fewer "trend chasing" points of view these days, and I do wonder whether it might be that there are larger numbers of non-technical people on here, quick to call for the funeral of things.

titzer 3 hours ago | parent | prev | next [-]

I laugh jollily in the face of AI. I know the coming shit pile; its nature isn't going to be surprising, only the speed, and the utter surrender of the vast majority of humanity to mediocrity.

What AI represents to me is a teacher! I have so long lacked a music teacher and musical tools. I spent my entire career doing invisible software at the lowest levels, and now I can finally build cool tools that help me learn, practice, and enjoy playing music! Screw all the haters; if you're curious about a wide range of topics and already have some knowledge, you can gallivant across a vast space and learn a lot along the way.

AI is a bit of a bullshitter, but don't take its bullshit as truth, just as you should never take anything your teacher says as gospel. How do we know what's true? The truth of the universe and the world is that underneath it all, it is self-consistent, and we keep making measurement errors. AI is an enormous pot of magic that it's up to you to organize with... your own skills.

You have to actively resist deskilling by doing things. AI should challenge you and reward you, not make you passive.

Use AI to teach yourself by asking lots of questions and constantly testing the results against reality.

For me right now, that's the fretboard.

Lerc 2 hours ago | parent | prev | next [-]

Blog posts are an interesting case; they are a very good example of something where the abundance of supply outstrips demand so thoroughly that it is not realistic to expect a median-level contribution to receive any significant attention.

Setting aside the self-delusion that leads a considerable number to erroneously rate themselves above average, the reason you create blog posts should not be the attention you might gain; there simply are not the eyeballs. You create as a form of self-expression, to organise your thoughts, to create a record of them.

AI can never challenge you in those areas because, as it has always been, the act of creation is the goal.

LunicLynx 2 hours ago | parent | prev | next [-]

This feels weird somehow. It feels like: damn, we can't train our AI any better because everything is regurgitated slop now. How can we get people to create new content for us, hopefully with new ideas...

Might be just me though, but I definitely don’t get why blogging should be the solution.

__s 2 hours ago | parent | prev | next [-]

misleading title. MODS?

randallsquared 2 hours ago | parent | prev | next [-]

> to put a price tag on creation.

I mean, to put a price tag on enabling vastly more creation than would otherwise have occurred!

deadbabe 2 hours ago | parent | prev | next [-]

I’ve decided the only way I’ll adopt a full automated agentic AI workflow the way companies want, is if I am allowed to hold multiple jobs at multiple companies.

Imagine having 6 software engineering jobs, each paying maybe $150k a year, all being done by agents.

Hell, I might even do this secretly without their consent. If I can hold just 10 jobs for about 3 or 4 years, I can retire and leave the industry before it all comes crumbling down in 2030.

The problem of course, is securing that many jobs. But maybe agents can help with applying for jobs.

bombcar 3 hours ago | parent | prev | next [-]

You have to write for yourself. People have said this for years, decades, millennia even - but nobody really believes that writing to an audience of zero (or one, if Mom is still around) is worth it.

Everyone wants to be a famous author, or at least a published/somewhat acknowledged one; few are willing to write their novel and be satisfied with zero or near-zero sales/readings.

But that is exactly what you need to do, especially in the age of AI. Everyone who was "in it to win it" (think LinkedIn slop, which existed before AI) is certainly going to use AI, because they do not give a shit about the quality of themselves; they just want the result.

And you can only become a writer (unpublished, unread, or no) by doing the writing - it takes time (10,000 hours?) that cannot be replaced by AI, just like you can't have the body of a marathon runner without running (yes, yes, the joke). You may be able to get 26 miles and change away, even very fast, but unless you personally do the running of that distance without cheating, you will not get the inherent benefits.

And if you instruct an AI, or another human even, to write for you, you may get some of the results you want, but you won't have changed to become a writer.

We shouldn't celebrate the successful blogs; they're already rewarded enough. It's celebrating the unsuccessful blogs that is needed, for even if, frankly, the vast majority of them are sub-AI levels of crap, there is still a human changing and progressing behind them.

Babies fall over a lot but unless you take them out of the stroller and let them do so, they'll never progress to crawling, walking, running.

fragmede 2 hours ago | parent [-]

Do people who journal exist in your world view?

erelong 3 hours ago | parent | prev | next [-]

"You can just blog things"

dare944 3 hours ago | parent [-]

"Let them write blogs!"

marknutter 2 hours ago | parent | prev | next [-]

> I’m not protective over the word “art”. Generative AI is art. It’s irredeemably shit art; end of conversation. A child’s crayon doodle is also lacking refined artistry but we hang it on our fridge because a human made it and that matters.

More pretentious gatekeeping from luddites who like to yell at clouds. This is someone who would love a piece of artwork created using ai tools right up until someone told them it was created using ai tools.

bjourne 4 hours ago | parent | prev | next [-]

Fucking hilarious domain name. David is unfortunately not announcing a rewrite of the Linux IPC stack!

bombcar 3 hours ago | parent | next [-]

Ha, I read it as "DBU Shell" but I guess "dbus hell" is more natural.

CrzyLngPwd 3 hours ago | parent | prev | next [-]

How did I miss DBUS Hell haha

moron4hire 2 hours ago | parent | prev [-]

Real PenIsland.com vibes.

zzzeek 3 hours ago | parent | prev | next [-]

Rants about AI from people who have already decided up front never to actually attempt to use the tools (which seems to be the case here, from this post and the other one it links) are not really providing any value to the discourse.

There is nothing new about using machinery to automate boring / repetitive tasks, including the wall of resistance that comes up. But it should be clear that genuinely useful tooling and automation tends to become a normal part of life, from the plow, to the printing press, to the dishwasher, to digital video editing, to autocorrect, and now to large language models.

There's a lot that has to be worked out with LLMs in particular, as they are now encroaching heavily upon human creativity and thought. This is an extremely important topic. But rants like these, with terms like "the plagiarism machine" and "the solution is that we all must vow to never use AI in any shape or form", are not really contributing.

gruntbuggly 2 hours ago | parent | next [-]

We're starting to rethink what an over-reliance on plow-based tilling has done to soil health. The point being that technologies are tradeoffs, and it's helpful to understand the tradeoffs we are making.

nodra 3 hours ago | parent | prev | next [-]

I'm trying to understand why it would matter whether their hosting provider used AI or not. Genuine question, so I can understand your take.

kasG 3 hours ago | parent | prev [-]

You are a good employee! Python people always shill for their employer's opinions.

oompydoompy74 3 hours ago | parent | prev | next [-]

Good lord I’m going to have to figure out some way to filter Hacker News. I’m so tired of this same sort of article (and the opposite) being posted every day. AI isn’t going away. AI is better than you think it is. AI is probably also worse than you think it is. The world has nuance, so can we please all chill?

mchaver 2 hours ago | parent | next [-]

These conversations can add to the nuance. Anyway, you can just vibe code the filter you want and be done with it.

oompydoompy74 an hour ago | parent [-]

I disagree that this adds anything new to the conversation, but fair enough.

jdefr89 2 hours ago | parent | prev [-]

Sort of hard to do, because AI is shoved down your throat in one form or another virtually everywhere you go. I also think a lot of us hackers are mourning the fact that we spent many years mastering machines and programming just to have the skill devalued (at least from the public's perspective) nearly overnight. I personally think it is more important now than ever to understand technology: to be able to write code, understand how a CPU works, etc. Tech literacy will help prevent doom scenarios. A future where virtually everyone depends on AI and computers but lacks people who actually understand them at a low level seems bleak. I know thinking itself seems to have gone out of fashion, and that's given rise to misinformation and/or political nonsense like the rise of fascism... I think a lot of us just feel "empty" and are trying to express it.

mchaver 2 hours ago | parent | next [-]

I agree that humans should continue to value various forms of literacy even in the face of AIs that can do everything better than us. I too will continue to dig deeper into tech literacy. There was a Terence Tao paper recently that mentioned we are in a shift similar to the end of geocentrism: it made clear that Earth is not the center of the universe, but Earth is still deeply valuable and important to humans. In much the same way, AI may supersede our understanding and intellect and make our limitations more apparent, but our human intellect is still important to humans. Plus, what are you going to do when the price of LLM tokens is through the roof, or you get messages like "burn an extra 1,000,000 tokens for a better implementation!"?

oompydoompy74 an hour ago | parent [-]

I have some hope that local open models with sufficient quantization are the future, as hardware becomes more powerful and models become more optimized. I don't think we will be living in thin-client land forever. Human expertise and intelligence will continue to be important, and anyone who says otherwise is being disingenuous.

oompydoompy74 2 hours ago | parent | prev [-]

I get it. I’ve been doing this for 11 years. I use agents everyday at work now and deal with all the benefits and problems of that. The craft is certainly changing and it will take years for everything to shake out and settle. I understand the desire to publicly wax poetic, but nobody actually knows shit about where we will land, so it gets a bit tiresome to see over and over.

btreecat 3 hours ago | parent | prev | next [-]

"No AI" right above a robot voice playback button.

Mixed messages fr

Hot take: folks packing it in because of AI probably were not difference makers before AI, and wouldn't be difference makers after it either.

I agree with the author, keep writing. It helps hone your ability to communicate effectively which we all need for some time to come (at least until we become batteries).

pitched 3 hours ago | parent | next [-]

> folks packing it in because of AI probably were not difference makers before AI

Anecdotal, but I've been seeing a lot of the opposite. Some of those leaning in strongly are being propped up by the tools, holding onto them like a lifeboat when they would have fallen off earlier.

keybored 3 hours ago | parent | prev [-]

What does a synthesized audio playback button have to do with AI as commonly and hotly discussed?

btreecat 2 hours ago | parent [-]

> synthesized audio playback

That's generated audio. It may not be LLM-generated, but it's not read by a human.

To draw an arbitrary line between _this kind_ of generated content and _that kind_ seems to be a matter of perspective and preference.

sdevonoes 3 hours ago | parent | prev [-]

Can’t we just sabotage AI? We certainly have the means (speed-of-light communication across the globe). For once in the history of software engineering, we should get together like other professionals do. Sadly, our high salaries and perks won’t make the task easy for many:

- spend tons of tokens on useless stuff at work (so your boss knows it’s not worth it)

- be very picky about AI generated PRs: add tons of comments, slow down the merge, etc.

bendmorris 2 hours ago | parent | next [-]

I think you should be very picky about generated PRs not as an act of sabotage, but because obviously-generated ones tend to balloon the complexity of the code in ways that make it difficult for both humans and agents, and because superficial plausibility is really good at masking problems. It's the rational thing to do.

Eventually you are faced with a company culture that sees review as a bottleneck stopping you from going 100x faster rather than as a process of quality assurance and knowledge sharing, and I worry we'll just be mandated to stop doing reviews.

zozbot234 2 hours ago | parent | prev | next [-]

> be very picky about AI generated PRs: add tons of comments, slow down the merge, etc.

But that's the opposite of sabotage, you're actually helping your boss use AI effectively!

> spend tons of tokens on useless stuff at work (so your boss knows it’s not worth it)

Yes, but the "useless" stuff should be things like "carefully document how this codebase works" or "ruthlessly critique this 10k-lines AI slop pull request, and propose ways to improve it". So that you at least get something nice out of it long-term, even if it's "useless" to a clueless AI-pilled PHB.

xyzal 2 hours ago | parent | prev [-]

Generate hundreds of repos of plain old spaghetti code and put them on GitHub. Easiest thing you can do.