Karpathy on Programming: “I've never felt this much behind” (twitter.com)
384 points by rishabhaiover 3 days ago | 418 comments
rambojohnson 10 hours ago | parent | next [-]

What exhausts me isn’t “falling behind.” It’s watching the profession collectively decide that the solution to uncertainty is to pile abstraction on top of abstraction until no one can explain what’s actually happening anymore.

This agentic arms race by C-suite know-nothings feels less like leverage and more like denial. We took a stochastic text generator, noticed it lies confidently, wipes entire databases and hard drives, and responded by wrapping it in managers, sub-agents, memories, tools, permissions, workflows, and orchestration layers so we don’t have to look directly at the fact that it still doesn’t understand anything.

Now we’re expected to maintain a mental model not just of our system, but of a swarm of half-reliable interns talking to each other in a language that isn’t executable, reproducible, or stable.

Work now feels duller than dishwater, enough that I'm pivoting careers in 2026.

simonw 9 hours ago | parent | next [-]

I think AI-assisted programming may be having the opposite effect, at least for me.

I'm now incentivized to use fewer abstractions.

Why do we code with React? Because synchronizing state between a UI and a data model is difficult and easy to get wrong, so it's worth paying the React complexity/page-weight tax in exchange for a "better developer experience" that lets us build working, reliable software while typing less code into a text editor.

If an LLM is typing that code - and it can maintain a test suite that shows everything works correctly - maybe we don't need that abstraction after all.

How often have you dropped in a big complex library like Moment.js just because you needed to convert a time from one format to another, and it would take too long to hand-write that one feature (and add tests for it to make sure it's robust)? With an LLM that's a single prompt and a couple of minutes of waiting.
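
For illustration, here's a minimal sketch (all names hypothetical) of the kind of single-purpose helper that prompt might produce in place of the dependency, tests included:

  // Hypothetical stand-in for the one Moment.js feature we needed:
  // convert a "DD/MM/YYYY" string to an ISO "YYYY-MM-DD" string.
  function toIsoDate(dmy) {
    const match = /^(\d{2})\/(\d{2})\/(\d{4})$/.exec(dmy);
    if (!match) throw new Error(`Unrecognized date format: ${dmy}`);
    const [, day, month, year] = match;
    return `${year}-${month}-${day}`;
  }

  // ...plus the tests the LLM is asked to maintain alongside it:
  console.assert(toIsoDate("31/12/2024") === "2024-12-31");
  console.assert(toIsoDate("01/02/2003") === "2003-02-01");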

Using LLMs to build black box abstraction layers is a choice. We can choose to have them build FEWER abstraction layers for us instead.

roadside_picnic 8 hours ago | parent | next [-]

> If an LLM is typing that code - and it can maintain a test suite that shows everything works correctly - maybe we don't need that abstraction after all.

I've had plenty of junior devs justify massive code bases of random scripts and 100+ line functions with the same logic. There's a reason senior devs almost always push back on this when it's encountered.

Everything hinges on that "if". But you're baking a tautology into your reasoning: "if LLMs can do everything we need them to, we can use LLMs for everything we need".

The reason we stop junior devs from going down this path is because experience teaches us that things will break and when they do, it will incur a world of pain.

So "LLM as abstraction" might be a possible future, but it assumes LLMs are significantly more capable than a junior dev at managing a growing mess of complex code.

This is clearly not the case with simplistic LLM usage today. "Ah! But you need agents and memory and context management, etc!" But all of these are abstractions. This is what I believe the parent comment is really pointing out.

If AI could do what we originally hoped it could (follow simple instructions to solve complex tasks), we'd be in great shape, and I would agree with your argument. But we are very clearly not in that world. Especially since Karpathy can't even keep up with the sophisticated machinery necessary to properly orchestrate these tools. All of the people decrying "you're not doing it right!" are emphatically proving that LLMs cannot perform these tasks at the level we need them to.

simonw 7 hours ago | parent | next [-]

I'm not arguing for using LLMs as an abstraction.

I'm saying that a key component of the dependency calculation has changed.

It used to be that one of the most influential factors affecting your decision to add a new library was the cost of writing the subset of code that you needed yourself. If writing that code and the accompanying tests represented more than an hour of work, a library was usually a better investment.

If the code and tests take a few minutes those calculations can look very different.

Making these decisions effectively and responsibly is one of the key characteristics of a senior engineer, which is why it's so interesting that all of those years of intuition are being disrupted.

The code we are producing remains the same. The difference is that a senior developer may have written that function + tests in several hours, at a cost of thousands of dollars. Now that same senior developer can produce exactly the same code in a fraction of the time, at a cost of less than $100.

all_factz 6 hours ago | parent | next [-]

React is hundreds of thousands of lines of code (or millions; I haven't looked in a while). Sure, you can start by having the LLM create a simple way to sync state across components, but in a serious project you're going to run into edge cases that make the complexity of your LLM-built library keep growing. The complexity may eventually grow to the point where the LLM itself can't maintain the library effectively. I think the same rough argument applies to Moment.js.

chairmansteve 5 minutes ago | parent | next [-]

"React is hundreds of thousands of lines of code".

Most of which are irrelevant to my project. It's easier to maintain a few hundred lines of self-written code than to carry the React kitchen sink around for all eternity.

simonw 5 hours ago | parent | prev | next [-]

If the complexity grows beyond what it makes sense to do without React I'll have the LLM rewrite it all in React!

I did that with an HTML generation project to switch from Python strings to Jinja templates just the other day: https://github.com/simonw/claude-code-transcripts/pull/2

DrammBA 4 hours ago | parent | next [-]

Simon, you're starting to sound super disconnected from reality; this "I hit everything that looks like a nail with my LLM hammer" vibe is new.

simonw 4 hours ago | parent | next [-]

My habits have changed quite a bit with Opus 4.5 in the past month. I need to write about it.

godelski 3 hours ago | parent | next [-]

What's concerning to many of us is that you (and others) have said this same thing before, with s/Opus 4.5/some other model/

That feels more like chasing than a clear line of improvement. It's very different from something like "my habits have changed quite a bit since reading The Art of Computer Programming". They're categorically different.

pertymcpert 2 hours ago | parent [-]

Opus 4.5 is categorically a much better model, on benchmarks and in personal experience, than Opus 4.1 and the Sonnet models. The reason you're seeing a lot of people wax lyrical about O4.5 is that it was a real step change in reliable performance. For me it crossed a critical threshold: being able to solve problems by approaching things in systematic ways.

Why do you use the word "chasing" to describe this? I don't understand. Maybe you should try it and compare it to earlier models to see what people mean.

godelski 23 minutes ago | parent [-]

  > Why do you use the word "chasing" to describe this?
I think you'll get the answer to this if you reread my comment and your response, and notice why yours didn't address mine.

Btw, I have tried it. It's annoying that people think the problem is not trying. It was getting old when GPT 3.5 came out. Let's update the argument...

v64 4 hours ago | parent | prev | next [-]

Looking forward to hearing about how you're using Opus 4.5. From my experience and what I've heard from others, it's been able to overcome many obstacles that previous iterations stumbled on.

indigodaddy 3 hours ago | parent | prev | next [-]

Can you expound on Opus 4.5 a little? Is it so good that it's basically a superpower now? How does it differ from your previous LLM usage?

pertymcpert 2 hours ago | parent [-]

To repeat my other comment:

> Opus 4.5 is categorically a much better model, on benchmarks and in personal experience, than Opus 4.1 and the Sonnet models. The reason you're seeing a lot of people wax lyrical about O4.5 is that it was a real step change in reliable performance. For me it crossed a critical threshold: being able to solve problems by approaching things in systematic ways.

remich 3 hours ago | parent | prev [-]

Please do. I'm trying to help other devs in my company get more out of agentic coding, and I've noticed that not everyone is defaulting to Opus 4.5 or even Codex 5.2, and I'm not always able to give good examples to them for why they should. It would be great to have a blog post to point to…

dimitri-vs 4 hours ago | parent | prev [-]

Reality is that in one year we went from LLMs as chatbots editing a couple of files per request with decent results, to running multiple coding agents in parallel that implement major features from a spec document and some clarifying questions.

Even IF LLMs don't get any better, there is a mountain of lemons left to squeeze in their current state.

zdragnar 5 hours ago | parent | prev [-]

That would go over on any decently sized team like a lead balloon.

simonw 4 hours ago | parent [-]

As it should, normally, because "we'll rewrite it in React later" used to represent weeks if not months of massively disruptive work. I've seen migration projects like that push on for more than a year!

The new normal isn't like that. Rewriting an existing, cleanly implemented vanilla JavaScript project (with tests) in React is the kind of rote task you can throw at a coding agent like Claude Code, then come back the next morning and expect most (and occasionally all) of the work to be done.

zdragnar 3 hours ago | parent | next [-]

And everyone else's work has to be completely put on hold or thrown away because you did the whole thing all at once on your own.

That's definitely not something that goes over well on anything other than an incredibly trivial project.

pertymcpert 2 hours ago | parent [-]

Why did you jump to the assumption that this:

> The new normal isn't like that. Rewriting an existing, cleanly implemented vanilla JavaScript project (with tests) in React is the kind of rote task you can throw at a coding agent like Claude Code, then come back the next morning and expect most (and occasionally all) of the work to be done.

... meant that person would do it in a clandestine fashion rather than as a task agreed upon beforehand? Is this how you operate?

zdragnar 38 minutes ago | parent | next [-]

My very first sentence:

> And everyone else's work has to be completely put on hold

On a big enough team, getting everyone to a stopping point where they can wait for you to do your big-bang refactor of the entire code base, even if it is only a day later, is still really disruptive.

The last time I went through something like this, we did it really carefully, migrating a page at a time from a multi-page application to a SPA. Even that required ensuring that whichever page was being migrated didn't have other people working on it, let alone the whole code base.

Again, I simply don't buy that you're going to be able to AI your way through such a radical transition on anything other than a trivial application with a small or tiny team.

zeroonetwothree an hour ago | parent | prev [-]

If you have 100s of devs working on the project it’s not possible to do a full rewrite in one go. So it’s not about being clandestine, but rather that there’s just no way to get it done regardless of how many AI superpowers you bring to bear.

reactordev 2 hours ago | parent | prev | next [-]

I’m going to add my perspective here, as they seem to all be ganging up on you, Simon.

He is right. The game has changed. We can now refactor using an agent and have it done by morning. The cost of architectural mistakes is minimal and if it gets out of hand, you refactor and take a nap anyway.

What’s interesting is that now it’s about intent: the prompts and specs you write, the documents you keep that outline your intended solution. Then you let the agent go. You do research. The agent does code. I’ve seen this at scale.

Teever 30 minutes ago | parent | prev [-]

Let's say I'm mildly convinced by your argument. I've read your blog post that was popular on HN a week or so ago, and I've made similar little toy programs with AI that scratch a particular itch.

Do you care to make any concrete predictions on when most developers will embrace this new normal as part of their day-to-day routine? One year? Five?

And how much of this is just another iteration of the wheel of reincarnation[0]? Maybe we're looking at a future where we return to the monoculture, library-dense supply chain we use today, but the libraries are made by swarms of AI agents instead, and the programmer/user is responsible for guiding other AI agents to create business logic?

[0] https://www.computerhope.com/jargon/w/wor.htm

wanderlust123 3 hours ago | parent | prev [-]

Not all UIs converge to React-like requirements. For a lot of use cases React is over-engineering, but the profession just lacks the balls to use something simpler, htmx for example.

zeroonetwothree an hour ago | parent | next [-]

Core React is fairly simple; I would have no problem using it for almost everything. The over-engineering usually comes in a layer on top.

all_factz 2 hours ago | parent | prev [-]

Sure, and for those cases I’d rather tell the agent to use htmx instead of something hand-rolled.

qazxcvbnmlp 3 hours ago | parent | prev | next [-]

Without commenting on whether the parent is right or wrong (I suspect it is correct):

If it's true, the market will soon reward it. Being able to competently write good code more cheaply will be rewarded. People don't employ programmers because they care about them; they are employed to produce output. If someone can use LLMs to produce more output for less $$, they will quickly make the people who don't understand the technology less competitive in the workplace.

zx8080 2 hours ago | parent [-]

> more output for less $$

That's a trap: for those without experience in both business and engineering, it's not obvious how to estimate, or later calculate, this $$. The trap is in the cost of changes and the fix budget when things break. And things will break. Often. Also, the requirements will change often; that's normal (our world is not static). So the cost has a tendency to change (guess in which direction). The thoughtless copy-paste, rewrite-everything approach is nice, but the cost climbs steeply with time. Those who don't know this will be trapped and lose their business.

tbrownaw an hour ago | parent [-]

Predicting costs may be tricky, but measuring them after the fact is a fair bit easier.

brians 6 hours ago | parent | prev | next [-]

A major difference is when we have to read and understand it because of a bug. Perhaps the LLM can help us find it! But abstraction provides a mental scaffold

godelski 3 hours ago | parent [-]

I feel like "abstraction" is overloaded in many conversations.

Personally I love abstraction when it means "generalize these routines to a simple and elegant version". Even if it's harder to understand than a single instance it is worth the investment and gives far better understanding of the code and what it's doing.

But there's also abstraction in the sense of making things less understandable or more complex, and I think LLMs operate this way. Their code takes a long time to understand, not because any single line is harder to read, but because the lines need to be understood in context.

I think part of this is people misunderstanding elegance. It doesn't mean aesthetically pleasing; it means doing something in a simple and efficient way. Yes, write it rough the first time around, but we should also strive for elegance. Instead it seems like we're just trying to get the first rough draft out and move on to the next thing.

squigz 5 hours ago | parent | prev [-]

> Making these decisions effectively and responsibly is one of the key characteristics of a senior engineer, which is why it's so interesting that all of those years of intuition are being disrupted.

They're not being disrupted. This is exactly why some people don't trust LLMs to re-invent wheels. It doesn't matter if it can one-shot some code and tests - what matters is that some problems require experience to know what exactly is needed to solve that problem. Libraries enable this experience and knowledge to centralize.

When considering whether inventing something in-house is a good idea vs using a library, "up-front dev cost" factors in relatively little for me.

joquarky 5 hours ago | parent [-]

Don't forget to include supply chain attacks in your risk assessment.

cameronh90 2 hours ago | parent | prev | next [-]

Rather, the problem I more often see with junior devs is pulling in a dozen dependencies when writing a single function would have done the job.

Indeed, part of becoming a senior developer is learning why you should avoid left-pad but accept date-fns.

We’re still in the early stages of operationalising LLMs. This is like mobile apps in 2010 or SPA web dev in 2014. People are throwing a lot of stuff at the wall and there’s going to be a ton of churn and chaos before we figure out how to use it and it settles down a bit. I used to joke that I didn’t like taking vacations because the entire front end stack would have been chucked out and replaced with something new by the time I got back, but it’s pretty stable now.

Also I find it odd you’d characterise the current LLM progress as somehow being below where we hoped it would be. A few years back, people would have said you were absolutely nuts if you’d predicted how good these models would become. Very few people (apart from those trying to sell you something) were claiming we’d imminently be entering a world where you enter an idea and out comes a complex solution without any further guidance or refining. When the AI can do that, we can just tell it to improve itself in a loop and AGI is just some GPU cycles away. Most people still expect - and hope - that’s a little way off yet.

That doesn’t mean the relative cost of abstracting and inlining hasn’t changed dramatically or that these tools aren’t incredibly useful when you figure out how to hold them.

Or you could just do what most people always do and wait for the trailblazers to either get burnt or figure out what works, and then jump on the bandwagon when it stabilises - but accept that when it does stabilise, you’ll be a few years behind those who have been picking shrapnel out of their hands for the last few years.

whstl 7 hours ago | parent | prev | next [-]

> The reason we stop junior devs from going down this path is because experience teaches us that things will break and when they do, it will incur a world of pain.

Hyperbole. It's also very often a "world of pain" with a lot of senior code.

baq 7 hours ago | parent | prev | next [-]

> "LLM as abstraction" might be a possible future, but it assumes LLMs are significantly more capable than a junior dev at managing a growing mess of complex code.

Ignoring for a second that they already are, it doesn’t matter, because the cost of rewriting the mess drops by an order of magnitude with each frontier model release. You won’t need good code because you’ll be throwing everything away all the time.

bspinner 7 hours ago | parent [-]

I've yet to understand this argument. If you replace a brown turd with a yellowish turd, it'll still be a turd.

PaulHoule 6 hours ago | parent [-]

In everyday life I am a plodding and practical programmer who has learned the hard way that any working code base has numerous “fences” in the Chesterton sense.

I think, though, that for small systems and small parts of systems LLMs do move the repair-replace line in the replace direction, especially if the tests are good.

bdangubic 7 hours ago | parent | prev | next [-]

> All of the people decrying "you're not doing it right!" are emphatically proving that LLMs cannot perform these tasks at the level we need them to.

the people are telling you “you are not doing it right!” - that’s it, there is nothing to interpret beyond this basic sentence

mannanj 7 hours ago | parent | prev | next [-]

> things will break and when they do, it will incur a world of pain

How much of this is still true, and how much exaggerated, in our environment today where the cost of making things is near zero?

I think “Evolution” would say that the cost of producing is near zero, so the possibility of creating what we want is high. The cost of trying again is low, so mistakes and pain aren’t super costly. For really high-stakes situations (which most situations are not), bring an expert human into the loop until the expert better than that human is an AI.

neoromantique 7 hours ago | parent | prev [-]

I'm sorry, but I don't agree.

Modern development is dependency hell: the openings for supply chain attacks are wide, and seemingly every other week we get a new RCE.

I'd rather have 100 loosely coupled scripts peer-reviewed by half a dozen LLM agents.

pca006132 6 hours ago | parent [-]

But this doesn't solve dependency hell. If the functionality is loosely coupled, you can already vendor the code in and manually review it. If it's not, say it's a database, you still have to depend on it?

Or maybe you can use AI to vendor dependencies, review existing dependencies and updates. Never tried that, maybe that is better than the current approach, which is just trusting the upstream most of the time until something breaks.

joquarky 4 hours ago | parent [-]

Are you really going to manually review all of moment.js just to format a date?

pca006132 4 hours ago | parent [-]

By vendoring the code in, in this case I mean copying the related code into the project. You don't review everything. It is a bad way to deal with dependencies, but it feels similar to how people are using LLMs now for utility functions.

sshine 8 hours ago | parent | prev | next [-]

> I'm now incentivized to use fewer abstractions.

I'm incentivised to use abstractions that are harder to learn, but execute faster or more safely once compiled. E.g. more Rust, Lean.

> If an LLM is typing that code - and it can maintain a test suite that shows everything works correctly - maybe we don't need that abstraction after all.

LLMs benefit from abstractions the same way as we do.

LLMs currently copy our approaches to solving problems and copy all the problems those approaches bring.

Letting LLMs skip all the abstractions is about as likely to succeed as genetic programming is to be efficient.

For example, writing more vanilla JS instead of React, you're just reinventing the necessary abstractions more verbosely and with a higher risk of duplicate code or mismatching abstractions.

In a recent interview with Bret Weinstein, a former professor of evolutionary biology, he proposed that one property of evolution that makes the story of one species evolving into another more likely is that it's not just random permutations of single genes; it's also permutations to counter variables encoded as telomeres and possibly microsatellites.

https://podcasts.happyscribe.com/the-joe-rogan-experience/24...

Bret compares this to flipping random bits in a program to make it work better vs. tweaking variables randomly in a high-level language. Mutating parameters at a high-level for something that already works is more likely to result in something else that works than mutating parameters at a low level.

So I believe LLMs benefit from high abstractions, like us.

We just need good ones; and good ones for us might not be the same as good ones for LLMs.

simonw 8 hours ago | parent [-]

> For example, writing more vanilla JS instead of React, you're just reinventing the necessary abstractions more verbosely and with a higher risk of duplicate code or mismatching abstractions.

Right, but I'm also getting pages that load faster and don't require a build step, making them more convenient to hack on. I'm enjoying that trade-off a lot.
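
As a sketch of what that trade-off can look like (illustrative only, not the actual project code): a single state object and a naive re-render function cover a surprising share of what React usually gets reached for.

  // Minimal state->UI sync without React: one state object, one render
  // function, and a setState() that re-renders after every change.
  const state = { count: 0 };

  function render() {
    document.querySelector("#app").innerHTML = `
      <p>Count: ${state.count}</p>
      <button id="inc">Increment</button>
    `;
    document.querySelector("#inc").onclick = () =>
      setState({ count: state.count + 1 });
  }

  function setState(patch) {
    Object.assign(state, patch);
    render(); // naive full re-render; fine until the page grows complex
  }

  render();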

joquarky 4 hours ago | parent [-]

Vanilla JS is also a lot more capable than it was when React was invented.

And yeah, you can't beat the iteration speed.

I feel like there are dozens of us.

tyre 9 hours ago | parent | prev | next [-]

For Moment you can use `date-fns` and tree-shake.
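
For example (a sketch, assuming a bundler with tree shaking and date-fns v2 or later):

  // Named imports let the bundler drop the rest of date-fns at build
  // time, instead of shipping the whole library like a Moment import would.
  import { format, parseISO } from "date-fns";

  console.log(format(parseISO("2024-12-31"), "dd/MM/yyyy")); // "31/12/2024"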

I'd rather have LLMs build on top of proven, battle-tested production libraries than keep writing their own from scratch. You're going to fill up context with all of its re-invented wheels when it already knows how to use common options.

Not to mention that testing things like this is hard. And why waste time (and context and complexity) for humans and LLMs trying to do something hard like state syncing when you can focus on something else?

simonw 9 hours ago | parent [-]

Every dependency carries a cost. You are effectively outsourcing part of the future maintenance of your project to an external team.

This can often be a very solid bet, but it can also occasionally backfire if the library you chose falls out of date and is no longer maintained.

For this reason I lean towards fewer dependencies, and have a high bar for when a dependency is worth adding to a project.

I prefer a dozen well vetted dependencies to hundreds of smaller ones that each solve a problem that I could have solved effectively without them.

tyre 7 hours ago | parent [-]

For smol things like left-pad, sure, but the two examples given (Moment and React) solve really hard problems. If I were reviewing a PR where someone tried to re-implement time zone handling in JS, that’s not making it through review.

In JS, the DOM and time zones are some of the most messed up foundations you’re building on top of ime. (The DOM is amazing for documents but not designed for web apps.)

I think we really need to be careful about adding dependencies that we’re maintaining ourselves, especially when you factor in employee churn and existing options. Unless it’s the differentiator for the business you’re building, my advice to engineers is to strongly consider other options and have a case for why they don’t fit.

AI can play into the engineering blind spot of building it ourselves because it’s fun. But engineering as a discipline requires restraint.

simonw 7 hours ago | parent [-]

Whether that's true about React and Moment varies on a case-by-case basis.

If you're building something simple like a contact form React may not be the right choice. If you're building something like Trello that calculation is different.

Likewise, I wouldn't want Moment for https://tools.simonwillison.net/california-clock-change but I might want it for something that needs its more advanced features.

rdhatt 5 hours ago | parent | prev | next [-]

I find it interesting that for your example you chose Moment.js, a time library, instead of something utilitarian like Lodash. For years I followed Jon Skeet's blog about implementing his time library NodaTime (a port of JodaTime). There are a crazy number of edge cases and many unintuitive things about modeling time within a computer.

If I just wanted the equivalent of Lodash's _.intersection() method, I get it. The requirements are pretty straightforward and I can verify the LLM code & tests myself. One less dependency is great. But with time, I know I don't know enough to verify the LLM's output.
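
For comparison, a dependency-free sketch of the _.intersection() case, small enough to verify by eye in a way that time-handling code isn't:

  // Hand-rolled equivalent of Lodash's _.intersection(): unique values
  // of the first array that appear in every other array.
  function intersection(first, ...rest) {
    const sets = rest.map((arr) => new Set(arr));
    return [...new Set(first)].filter((x) => sets.every((s) => s.has(x)));
  }

  console.log(intersection([2, 1, 2], [2, 3], [4, 2])); // [2]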

Similar to encryption libraries, it's a common recommendation to leave time-based code to developers who live and breathe those black boxes. I trust the community to verify the correctness of those concepts, something I can't do myself with LLM output.

throwaway150 3 hours ago | parent | prev | next [-]

> If an LLM is typing that code - and it can maintain a test suite that shows everything works correctly - maybe we don't need that abstraction after all.

But this is a highly non-trivial problem. How do you even possibly manually verify that the test suite is complete and tests all possible corner cases (of which there are so many because synchronizing state is a hard problem)?

At least React solves this problem in a non-stochastic, deterministic manner. What can be a good reason to replace something like React, which works deterministically, with LLM-assisted code that is generated stochastically, where there's no easy way to manually verify that the implementation or the test suite is correct and complete?

mlinhares 2 hours ago | parent [-]

You don't, same as for the "generate momentjs and use it". People now firmly believe they can use an LLM to build custom versions of these libraries and rewrite whole ecosystems out of nowhere because Claude said "here's the code".

I've come to realize fighting this is useless. People will do this, it's going to create large fuck-ups, and there will be heaps of money to be made on the cleanup jobs.

pertymcpert 2 hours ago | parent [-]

There are going to be lots of fuck-ups, but with frontier models improving so much there are also going to be lots of great things made. Horrible, soul-crushing technical debt addressed because it was offloaded to models rather than spending a person's thought and sanity on it.

I think overall for engineering this is going to be a net positive.

nzoschke 7 hours ago | parent | prev | next [-]

Right there with you.

I'm instructing my agents to do old-school boring form POSTs, SSR templates, and vanilla JS/CSS.

I previously shifted away from this to abstractions because typing all the boilerplate was tedious.

But now that I'm not typing, the tedious-but-simple approach is great for the agent writing the code, and great for the people doing code reviews.

majormajor 3 hours ago | parent | prev | next [-]

> If an LLM is typing that code - and it can maintain a test suite that shows everything works correctly - maybe we don't need that abstraction after all.

for simple stuff, sure, React was ALWAYS inefficient. Even JavaScript/client-side logic is still overkill a lot of the time except for that pesky "user expectations" thing.

for any codebase that's long-lived and complex, combinatorics tells us it'll be near-impossible to have good+fast test coverage on all of it.

part of the reason people don't roll their own is because being able to assume that the library won't have major bugs leads to an incredible reduction in the necessary test surface, and generally people have found it a safe-enough assumption.

throwing that out and trying to just cover the necessary stuff instead - because you're also throwing out your ability to quickly recognize risky changes since you aren't familiar with all the code - has a high chance of painting you into messy corners.

"just hire a thousand low-skilled people and force them to write tests" had more problems as a hiring plan then just "people are expensive."

nkrisc 5 hours ago | parent | prev | next [-]

If LLMs are that capable, then why are AI companies selling access to them instead of using them to conquer markets?

tfirst 3 hours ago | parent | next [-]

The same question might be asked about ASML: if ASML EUV machines are so great, why does ASML sell them to TSMC instead of fabbing chips themselves? The reality is that firms specialize in certain areas, and may lose their comparative advantage when they move outside of their specialty.

lithocarpus 4 hours ago | parent | prev [-]

I would guess fear of losing market share and valuable data, as well as pressure to appear to be winning the AI race for the companies' own stock price.

i.e. competition. If there were only one AI company, they would probably not release anything close to their most capable version to the public, à la Google pre-ChatGPT.

tjr 4 hours ago | parent [-]

I’m not sure that really answers the question? Or perhaps my interpretation of the question is different.

If (say) the code generation technology of Anthropic is so good, why be in the business of selling access to AI systems? Why not instead conquer every other software industry overnight?

Have Claude churn out the best office application suite ever. Have Claude make the best operating system ever. Have Claude make the best photo editing software, music production software, 3D rendering software, DNA analysis software, banking software, etc.

Why be merely the best AI software company when you can be the best at all software everywhere for all time?

sod22 2 hours ago | parent [-]

I'm waiting for people to realise that software products are much more than just lines of code.

Getting sick and tired of people talking about their productivity gains when not much is actually happening out there in terms of real value creation.

pertymcpert an hour ago | parent [-]

Just because you don't see it or refuse to believe people doesn't make you right and them liars. Maybe you're just wrong.

avaika 2 hours ago | parent | prev | next [-]

I don't trust LLMs enough to hand them the maintenance of all the abstraction buried in React or a similar library. I caught some of the LLMs taking nasty shortcuts (e.g. removing test constraints or validations in order to make the test green). Multiple times. Which completely breaks trust.

And if I have to closely supervise every single change, I don't believe my development process will be any better. If not worse.

Let alone new engineers who join the team and all of a sudden have to deal with a unique solution layer which doesn't exist anywhere else.

travisgriggs 5 hours ago | parent | prev | next [-]

Has anyone tried the experiment that is sort of implied here? I was wondering earlier today what it would be like to pick a simple app, pick an OS, and just tell an LLM to write that app using only machine code and native ADKs, skipping all intermediate layers.

We seem to have created a large bureaucracy for software development, where telling a computer how to execute an app involves keeping a lot of cogs in a big complicated machine happy. But why use the automation to just roll the cogs? Why not just simplify/streamline? Does an LLM need to worry about using the latest and greatest abstractions? I have to assume this has been tried already...

azangru 9 hours ago | parent | prev | next [-]

> Why do we code with React?

...is a loaded question, with a complex and nuanced answer. Especially when you continue:

> it's worth paying the React complexity/page-weight tax

All right; then why do we code in React when a smaller alternative, such as Preact, exists, which solves the same problem, but for a much lower page-weight tax?

Why do we code in React when a mechanism to synchronize data with tiny UI fragments through signals exists, as exemplified by Solid?

Why do people use React to code things where data doesn't even change, or changes so little that to sync it with the UI does not present any challenge whatsoever, such as blogs or landing pages?

I don't think the question 'why do we code with React?' has a simple and satisfactory answer anymore. I am sure marketing and educational practices play a large role in it.

simonw 9 hours ago | parent [-]

Yeah, I share all of those questions.

My cynical answer is that most web developers who learned their craft in the last decade learned frontend React-first, and a lot of them genuinely don't have experience working without it.

Which means hiring for a React team is easier. Which means learning React makes you more employable.

whstl 7 hours ago | parent [-]

> most web developers who learned their craft in the last decade learned frontend React-first, and a lot of them genuinely don't have experience working without it

That's not cynical, that's the reality.

I do a lot of interviews and mentor juniors, and I can 100% confirm that.

And funny enough, React-only devs were a bigger problem 5 years ago.

Today the problem is developers who can *only* use Next.js. A lot can't use Vite+React or plain React, or whatever.

And about 50% of Ruby developers I interviewed from 2022-2024 were unable to code a FizzBuzz in Ruby without launching a whole Rails project.

CharlieDigital 6 hours ago | parent | next [-]

My test for FE is to write a floating menu in JSFiddle with only JS, CSS, and HTML. Bonus if no JS.

If you can do that, then you can probably understand how everything else works.

whstl 5 hours ago | parent [-]

Yep, that's a good test. And it's good even if it's for a React only position.

azangru 6 hours ago | parent | prev [-]

>> a lot of them genuinely don't have experience working without [react]

> Today the problem is developers who can only use Next.js. A lot can't use Vite+React or plain React, or whatever.

Do you want to hire such developers?

whstl 6 hours ago | parent [-]

No, that's why I said "problem".

My job during the hiring process is to filter them.

But that's me. Other companies might be interested.

I often choose to work on non-cookie-cutter products, so it's better to have developers with more curiosity to ask questions, like yourself asked above.

casualscience 6 hours ago | parent | prev | next [-]

If you work at a megacorp right now, you know what's happening isn't people deciding to use fewer libraries. It's developers being measured by their lines of code, and the more AI you use the more lines of code and 'features' you can ship.

However, the quality of this code is fucking terrible, no one is reading what they push deeply, and these models don't have enough 'sense' to make really robust and effective test suites. Even if they did, a comprehensive test suite is not the solution to poorly designed code, it's a band aid -- and an expensive one at scale.

Most likely we will see some disasters in the next few years due to this mode of software development, and only then will people learn to use these agents as tools and not replacements.

...Or maybe we'll get AGI and it will fix/maintain the trash going out there today.

losvedir 4 hours ago | parent | prev | next [-]

Huh, I've been assuming the opposite: better to use React even if you don't need it, because of its prevalence in the training data. Is it not the case that LLMs are better at standard stacks like that than custom JS?

simonw 3 hours ago | parent [-]

Hard to say for sure. I've been finding that frontier LLMs write very good code when I tell them "vanilla JS, no React" - in that their code matches my personal taste at least - but that's hardly a robust benchmark.

godelski 3 hours ago | parent | prev | next [-]

  > I'm now incentivized to use fewer abstractions.
I'd argue it's a different category of abstraction.

jayd16 7 hours ago | parent | prev | next [-]

Why would I want to maintain in perpetuity random snippets when a library exists? How is that an improvement?

simonw 7 hours ago | parent [-]

It's an improvement if that library stops being actively maintained in the future.

... or decides to redesign the API you were using.

skylurk 4 hours ago | parent [-]

Are you referring to httpx? ;)

starkparker 8 hours ago | parent | prev | next [-]

I'd rather use React than a bespoke solution created by an ephemeral agent, and I'd rather self-trepanate than use React

api 4 hours ago | parent | prev | next [-]

Nutty idea: train on ASM code. Create an LLM that compiles prompts directly to machine code.

cyberax 4 hours ago | parent | prev | next [-]

The problem is, what do you do _when_ it fails? Not "if", but "when".

Can you manually wade through thousands of functions and fix the issue?

akoboldfrying 6 hours ago | parent | prev | next [-]

> and it can maintain a test suite that shows everything works correctly

Are you able to efficiently verify that the test suite is testing what it should be testing? (I would not count "manually reviewing all the test code" as efficient if you have a similar amount of test code to actual code.)

Sometimes a change to the code under test means that a (perhaps unavoidably brittle) test needs to be changed. In this case, the LLM should change the test to match the behaviour of the code under test. Other times, a change to the code under test represents a bug that a failing test should catch -- in this case, the LLM should fix the code under test, and leave the test unchanged. How do you have confidence that the LLM chooses the right path in each case?

oulipo2 6 hours ago | parent | prev [-]

That's a fundamental misunderstanding.

The role of abstractions *IS* to prevent (e.g. "compress") the need for a test suite, because you have an easy model to understand and reason about.

simonw 6 hours ago | parent [-]

One of my personal rules for automated test suites is that my tests should fail if one of the libraries I'm using changes in a way that breaks my features.

Makes upgrading dependencies so much less painful!
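
A sketch of the idea (the helper and its expected output here are made up for illustration): exercise the feature through the real dependency, with no mocks, so a breaking library upgrade fails the suite instead of slipping through.

  // Tests hit the real dependency, so an `npm update` that changes the
  // library's behaviour breaks this test rather than production.
  import test from "node:test";
  import assert from "node:assert";
  import { formatTimestamp } from "./timestamps.js"; // hypothetical helper built on a date library

  test("timestamps render the way the UI expects", () => {
    assert.strictEqual(formatTimestamp("2024-12-31T00:00:00Z"), "31 Dec 2024");
  });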

kace91 9 hours ago | parent | prev | next [-]

Our industry wants disruption, speed, delivery! Automatic code generation does that wonderfully.

If we wanted safety, stability, performance, and polish, the impact of LLMs would be more limited. They have a tendency to pile up code on top of code.

I think the new tech is just accelerating an already existing problem. Most tech products are already rotting; take a look at Windows or iOS.

I wonder what it will take for a significant turning point in this mentality.

ip26 4 hours ago | parent | next [-]

One possible positive outcome of all this could be sending LLMs to clean up oceans of low value tech debt. Let the humans move fast, let the machines straighten out and tidy up.

The ROI of doing this has been weak because of how long it takes an expensive human. But if you could clean it up more cheaply, the ROI strengthens considerably, and there’s a lot of it.

rgreeko42 9 hours ago | parent | prev [-]

disruption is a code word for deregulation, and deregulation is bad for everyone except execs and investors

rambojohnson 6 hours ago | parent [-]

it's sadly telling how this comment got greyed out to oblivion.

Q6T46nT668w6i3m 10 hours ago | parent | prev | next [-]

It’s wild that programmers are willing to accept less determinism.

viraptor 9 hours ago | parent | next [-]

It's not something that suddenly changed. "I'll generate some code" is as nondeterministic as "I'll look for a library that does it", "I'll assign John to code this feature", or "I'll outsource this code to a consulting company". Even if you write it yourself, you're pretty nondeterministic in your results - you're not going to write exactly the same code to solve a problem twice, even if you explicitly try.

Night_Thastus 2 hours ago | parent | next [-]

No?

If I use a library, I know it will do the same thing from the same inputs, every time. If I don't understand something about its behavior, then I can look to the documentation. Some are better about this, some are crap. But a good library will continue doing what I want years or decades later.

An LLM can decide to do one thing in one sentence and something else in the next.

viraptor an hour ago | parent [-]

The library is deterministic, but looking for the library isn't. In the same way that generating code is not deterministic, but the generated code normally is.

leshow 3 hours ago | parent | prev | next [-]

It's not the same; LLMs are qualitatively different due to the stochastic and non-reproducible nature of their output. From the LLM's point of view, non-functional or incorrect code is exactly the same as correct code, because it doesn't understand anything that it's generating. When a human does it, you can say they did a bad or good job, but there is a thought process and actual "intelligence" and reasoning that went into the decisions.

I think this insight was really the thing that made me understand the limitations of LLMs a lot better. Some people say when it produces things that are incorrect or fabricated it is "hallucinating", but the truth is that everything it produces is a hallucination, and the fact it's sometimes correct is incidental.

viraptor an hour ago | parent | next [-]

I'm not sure who generates random code without a goal or checking if it works afterwards. Smells like a straw man. Normally you set the rules, you know how to validate if the result works, and you may even generate tests that keep that state. If I got completely random results rather than what I expect, I wouldn't be using that system - but it's correct and helpful almost every time. What you describe is just not how people work with LLMs in practice.

sod22 2 hours ago | parent | prev [-]

Correct. The thing has no concept of true or false. 0 or 1.

Therefore it cannot necessarily discern between two statements that are practically identical in the eyes of humans. This doesn't make the technology useless, but it's clearly not some AGI nonsense.

skydhash 9 hours ago | parent | prev [-]

Contrary to code generation, all the other examples have one common point, which is the main advantage: alignment between your objective and their actions. With a good enough incentive, they may as well be deterministic.

When you order home delivery, you don’t care about by whom and how. Only the end result matters. And we’ve ensured that reliability is good enough that failures are accidents, not a common occurrence.

Code generation is not reliable enough to have the same quasi deterministic label.

bryanrasmussen 9 hours ago | parent | prev | next [-]

It's wild that management would be willing to accept it.

I think that for some people it is harder to reason about determinism because it is similar to correctness, and correctness can, in many scenarios, be something you trade off; for example, in relation to scaling and speed you will often trade off correctness.

If you do not think clearly about the difference between determinism and other similar properties like (real-time) correctness, which you might be willing to trade off, you might think that trading off determinism is just more of the same.

Note: I'm against trading off determinism, but I am willing to accept there might be a reason to trade it off; I just worry that people are not actually thinking through what it is they're trading when they do it.

layer8 7 hours ago | parent | next [-]

Management is used to nondeterminism, because that’s what their employees always have been.

skydhash 9 hours ago | parent | prev [-]

Determinism requires formality (the enactment of rules) and some kind of omniscience about the system. Both are hard to acquire. I’ve seen people try hard not to read any kind of manual and fail to reason logically even when given hints about the solution to a problem.

whstl 7 hours ago | parent | prev | next [-]

Why would the average programmer have a problem with it?

The average programmer is already being pushed into doing a lot of things they're unhappy about in their day jobs.

Crappy designs, stupid products, tracking, privacy violation, security issues, slowness on customer machines, terrible tooling, crappy dependencies, horrible culture, pointless nitpicks in code reviews.

Half of HN is gonna defend one or another of the things above because $$$.

What's one more thing?

sod22 2 hours ago | parent [-]

Say it louder.

tmaly 8 hours ago | parent | prev | next [-]

I think those that are most successful at creating maintainable code with AI are those that spend more time upfront limiting the nondeterminism aspect using design and context.

lopatin 6 hours ago | parent | prev | next [-]

It's not that wild. I like building things. I like programming too, but less than building things.

Trasmatta 5 hours ago | parent [-]

To me, fighting with an LLM doesn't feel like building things, it feels like having my teeth pulled.

i_am_a_peasant 4 hours ago | parent [-]

I am still using LLMs just to ask questions and never giving them the keyboard so I haven’t quite experienced this yet. It has not made me a 10x dev but at times it has made me a 2x dev, and that’s quite enough for me.

It’s like jacking off, once in a while won’t hurt and may even be beneficial. But if you do it constantly you’re gonna have a problem.

givemeethekeys 9 hours ago | parent | prev | next [-]

Mortgages don't pay for themselves.

wiseowise 9 hours ago | parent | prev | next [-]

> It’s wild that programmers are willing to accept less determinism.

It's wild that you think programmers is some kind of caste that makes any decisions.

Der_Einzige 9 hours ago | parent | prev | next [-]

You can have the best of both worlds if you use structured/constrained generation.

dahcryn 10 hours ago | parent | prev | next [-]

The good ones don't accept it. Sadly there are just many more idiots out there trying to make a quick buck.

lazystar 9 hours ago | parent [-]

Delving a bit deeper... I've been wondering if the problem's related to the rise in H1B workers and contractors. These programmers have an extra incentive to avoid pushing back on c-suite/skip level decisions - staying out of in-office politics reduces the risk of deportation. I think companies with a higher % of engineers working with that incentive have a higher risk of losing market share in the long-term.

doug_durham 7 hours ago | parent [-]

I’ll answer that with a simple “No”. My H1B colleagues are every bit as rigorous and innovative as any engineer. It is in no one’s long-term interest to generate shoddy code.

lazystar 6 hours ago | parent [-]

I'm not stating the code is shoddy - I agree the quality's fine. I'm referring to the IC engineer's role in pushing back against unrealistic demands/design decisions that are passed down by the PMs and c-suite teams. Doing this can increase internal tension, but it makes the product and customer experience better in the long run. In my career, I've felt safe pushing back because I don't have to worry about moving if my pushback is poorly received.

zephen 9 hours ago | parent | prev | next [-]

There has always been a laissez-faire subset of programmers who thrive on living in the debugger, getting occasional dopamine hits every time they remove any footgun they previously placed.

I cannot count the times that I've had essentially this conversation:

"If x happens, then y, and z, it will crash here."

"What are the odds of that happening?"

"If you can even ask that question, the probability that it will occur at a customer site somewhere sometime approaches one."

It's completely crazy. I've had variants on the conversation from hardware designers, too. One time, I was asked to torture a UART, since we had shipped a broken one. (I normally build stuff, but I am your go-to whitebox tester, because I home in on things that look suspicious rather than shying away from them.) When I was asked the inevitable "Could that really happen in a customer system?" after creating a synthetic scenario where the UART and DMA together failed, my response was:

"I don't know. You have two choices. Either fix it where the test passes, or prove that no customer could ever inadvertently recreate the test conditions."

He fixed it, but not without a lot of grumbling.

Verdex 9 hours ago | parent [-]

My dad worked in the auto industry and they came across a defect in an engine control computer where they were able to give it something like 10 million to one odds of triggering.

They then turned the thing on, it ran for several seconds, encountered the error, and crashed.

Oh, that's right, the CPU can do millions of things a second.

Something I keep in the back of my mind when thinking about the odds in programming. You need to do extra leg work to make sure that you're measuring things in a way that's practical.

contravariant 9 hours ago | parent | prev | next [-]

I mean we've had to cope with users for ages, this is not that different.

baq 7 hours ago | parent | prev [-]

This gets repeated all the time, but it’s total nonsense. The output of an LLM is fixed just as the output of a human is.

exssss 6 hours ago | parent | prev | next [-]

Out of curiosity, what did you pivot to?

It sounds crazy to say this, but I've been thinking about this myself. Not for the immediate future (e.g. 2026), but sometime later.

teleforce 5 hours ago | parent | prev | next [-]

This whole AI-assisted and vibe-coding phenomenon, including the other comments here, reminds me of this very popular post that keeps reappearing on HN almost every year [1],[2].

[1] Don't Call Yourself A Programmer, And Other Career Advice:

https://www.kalzumeus.com/2011/10/28/dont-call-yourself-a-pr...

[2] Don't Call Yourself A Programmer, And Other Career Advice (2011):

https://news.ycombinator.com/item?id=34095775

scellus 9 hours ago | parent | prev | next [-]

My work is better than it has been for decades. Now I can finally think and experiment instead of wasting my time on coding nitty-gritty detail, impossible to abstract. Last autumn was the game changer, basically Codex and later Opus 4.5; the latter is good with any decent scaffolding.

chasd00 9 hours ago | parent [-]

I have to admit, LLMs do save a lot of typing and associated syntax errors. If you know what you want and can spot and fix mistakes made by the LLM, then they can be pretty useful. I don’t think it’s wise to use them for development if you are not knowledgeable enough in the domain and language to recognize errors or dead ends in the generated code, though.

zx8080 2 hours ago | parent | prev | next [-]

That's similar to what happened in the Java enterprise stack: ...wrapper and ...factory classes and all-you-can-eat abstractions that hide the implementation and make engineering crazy expensive while not adding much (or anything, in most cases) to product quality. Now the same is happening in work processes, with agentic systems and workflows.

jsk2600 10 hours ago | parent | prev | next [-]

What are you pivoting to?

coldpie 8 hours ago | parent [-]

I'm also interested in hearing this.

For me, I'm planning to ride out this industry for another couple years building cash until I can't stand it, then pivot to driving a city bus.

baq 7 hours ago | parent | next [-]

Gardening and plumbing. Driving buses will be solved.

Buttons840 3 hours ago | parent [-]

Plumbing seems like a relatively popular AI-proof pivot. If AI really does start taking jobs en masse, then plumbers are going to be plentiful and cheap.

What we really need is a lot more housing. So construction work is a safer pivot. But construction work is difficult and dangerous and not something everyone can do. Also, society will collapse (apparently) if we ever make housing affordable, so maybe the powers-that-be won't allow an increase in construction work, even if there are plenty of construction workers.

Who knows... interesting times.

layer8 7 hours ago | parent | prev [-]

> then pivot to driving a city bus.

You seem to be counting on Waymo not obsoleting that occupation. ;)

kayo_20211030 8 hours ago | parent | prev | next [-]

Could we all just agree to stop using the term "abstraction"? It's meaningless and confusing. It's cover for a multitude of sins, because it really could mean anything at all. Don't lay all the blame on the c-suite; they are what they are, and have their own view. Don't moan about the latest egregious excess of some LLM. If it works for you, use it; if it doesn't, don't. But stop whinging.

aleph_minus_one 5 hours ago | parent | prev | next [-]

> It’s watching the profession collectively decide that the solution to uncertainty is to pile abstraction on top of abstraction until no one can explain what’s actually happening anymore.

No profession collectively made such a decision. Programming was always split into many, many subcultures, each with their own ideas (mutually incompatible across the profession) of what makes a good program.

So I guess it was rather some programmers inside some part of a Silicon Valley echo chamber, in which you also live, who made such a decision.

godelski 3 hours ago | parent | prev | next [-]

  > the solution to uncertainty is to pile abstraction on top of abstraction until no one can explain what’s actually happening anymore.
I've usually found complaints about abstraction in programming odd because, frankly, all we do is abstraction. The complaint often seems to mean "/I/ don't understand it, therefore we should do something more complicated, with many more lines of code, that's less flexible."

But this usage? I'm fully on board. Too much abstraction is when it's incomprehensible. To whom is the next question (my usual complaint is that the bar should not be set at junior level), and I think you're right to point out that the "who" here is everyone.

We're killing a whole side of creativity and elegance while only slightly aiding another side. There's utility to this, but also a cost.

I think what frustrates me most about CS is that as a community we tend to go all in on something. We went all in on VR, then crypto, and now AI. We should be trying new things, but it feels more like we take these sides as if they're objective, and anyone not hopping on the hype train is an idiot or a Luddite. The way the whole industry jumps on these things feels more like FOMO than intelligent strategy. Like making a sparkling water company an "AI first" company... it's like we love solutions looking for problems.

akulbe 5 hours ago | parent | prev | next [-]

What are you pivoting to?

christophilus 8 hours ago | parent | prev | next [-]

What are you pivoting to?

dandanua 7 hours ago | parent | prev | next [-]

Don't forget you are expected to deliver 10x for the same pay, "because you have the AI now".

baq 7 hours ago | parent [-]

The system is designed to do exactly that. This is called ‘productivity increase’ and is deflationary in large dosages. Deflation sounds good until you understand where it’s coming from.

lo_zamoyski 9 hours ago | parent | prev | next [-]

> It’s watching the profession collectively decide that the solution to uncertainty is to pile abstraction on top of abstraction until no one can explain what’s actually happening anymore.

The ubiquitous adoption of LLMs for generating code is mostly a sign of bad abstraction or the absence of abstraction, not the excess of abstraction.

And choosing/making the right abstraction is kind of the name of the game, right? So it's not abstraction per se that's a problem.

AndrewKemendo 9 hours ago | parent | prev | next [-]

Every technical person has been complaining about this for the entire history of computer programming

Unless you’re writing literal memory instructions then you’re operating on between 4 and 10 levels of abstraction already as an engineer

It has never been tractable for humans to program a series of switches without an incredible number of abstractions

The vast majority of programmers never understood how computers work to begin with

Trasmatta 5 hours ago | parent | next [-]

People keep making this argument, but the jump to LLM driven development is such a conceptually different thing than any previous abstraction

fwip 3 hours ago | parent | prev | next [-]

And if you're writing machine code directly, you're still relying on about ten layers of abstraction that the wizards at the chip design firms have built for you.

casey2 8 hours ago | parent | prev [-]

This is true, though the people that actually push the field forward do know enough about every level of abstraction to get the job done. Making something (very important) horrible just to rush to market can be a pretty big progress blocker.

Jensen is someone I trust to understand the business side and some of those lower technical layers, so I'm not too concerned.

casey2 8 hours ago | parent | prev [-]

So you're washing dishes now?

robotresearcher 7 hours ago | parent | prev | next [-]

Andrej is 39 years old, according to Wikipedia.

Douglas Adams on age and relating to technology:

"1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.

2. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.

3. Anything invented after you’re thirty-five is against the natural order of things."

From 'The Salmon of Doubt' (2002)

aoeusnth1 6 hours ago | parent | next [-]

Is that really what he's saying here?

He's not against the technology, I think he's just feeling like there's a lot of potential that he's not quite grasping yet.

BearOso 6 hours ago | parent [-]

This guy is one of the top names in AI. This is pure propaganda, written to instill "fear of missing out" and encourage people to buy into his platform, lest they become "obsolete."

PaulHoule 5 hours ago | parent | next [-]

It’s a little shocking to me that this sentiment hasn’t floated higher in the discussion. Regardless of how he feels, this is the way he wants you to feel.

Big picture it’s about emotional intelligence and if you are losing your shit you’re going to flail around. I think you should pick up some near-frontier tools and use them to improve your usual process, always keeping your feet on the ground. “Vibe coding” was always about getting you and keeping you over your head. Resist it!

gsf_emergency_6 5 hours ago | parent [-]

vive vibe live or it doesn't matter?

Maybe Devs should handle copilots as Swiss prana-bindu their shots

(Therefore gun laws at a longer timescale)

Of course we have to ask aeb if he has ever run into someone who trips (only, of course) while hunting ;) have you?

8note an hour ago | parent | prev | next [-]

On the other hand, it does currently feel like when Angular and React were starting to come out, and there were a billion different JavaScript libraries to learn, with a new one coming out every couple of weeks, and you weren't quite sure what you should spend your time on and how much, vs. now, where you just learn React and maybe extend to Next.js.

LLM-forward development has a lot going on, and it really isn't clear yet what the common standard will be in a few years' time in terms of dev UX, async tools, CI/CD tools, in-production and offline workflows, etc.

It's an easy time to hop down a wrong path, picking subpar tools or not experimenting further; but if you just wait, the people who tried the right tools are going to be way ahead on making products for their customers.

neilv 3 hours ago | parent | prev | next [-]

Exactly. I think some of the commenters were unaware of some of the context, and got an entirely different read on the piece.

sailingparrot 4 hours ago | parent | prev [-]

Uncharitable take. His last public stance on this, a few months ago when he released nanochat, was that he didn't use coding LLMs for it, even though he tried, because they were not good enough and he was just losing time, so he coded everything manually. Andrej is already set for life, and has moved into education, where most of what he does is released for free.

ilaksh 2 hours ago | parent | prev | next [-]

You didn't really read what he wrote or think about it; you just took it as an opportunity to dismiss him as old. He was just being humble. It's relatively new to everyone. At least you are honest about your ageism.

I am sure Karpathy can and does leverage AI as well as or better than you. Probably I do also, and I am 48.

BonoboIO 2 hours ago | parent | prev [-]

This is pretty much the thinking across all German-speaking countries. It especially applies to anything related to energy (combustion engines, coal, gas, oil) and IT.

Case in point: fax machines are still an important part of business communication in Germany, and many IT projects are genuinely amateurish garbage — because the underlying mindset is "everything should stay exactly as it is."

This is particularly visible in the 45+ generation. It mostly doesn't apply to programmers, since they tend to find new things interesting. But in the rest of society, the effects are painful to watch: if nothing changes, nothing improves.

And then there's mobile infrastructure. It's not even a technical problem — it's purely political. The networks simply don't get expanded. It's honestly embarrassing how far behind Germany is compared to the rest of Europe.

halfmatthalfcat 9 hours ago | parent | prev | next [-]

Wow - can we coin "Slopbrain" for people who are so far gone into AI eventualism that they can no longer function? Like "cooked," but "slopped" or something. Good grief lol. Talk about getting lost in the sauce...

roadside_picnic 8 hours ago | parent | next [-]

WSJ has been writing increasingly about "AI Psychosis" (here's their most recent piece [0]).

I'm increasingly seeing that this is the real threat of AI. I've personally known people who have started to strain relationships with friends and family because they sincerely believe they are evolving into something new. While not as dramatic, the normalization of the use of "AI as therapist" is equally concerning. I know tons of people that rely on LLMs to guide them in difficult family decisions, career decisions, etc on an almost daily basis. If I'm honest, I myself have had times where I've leaned into this too much. I've also had times where AI starts telling me how clever I am, but thankfully a lifetime of low self worth signals warning flags in my brain when I hear this stuff! For most people, there is real temptation to buy into the praise.

Seeing Karpathy claim he can't keep up was shocking. It also immediately raises the question for anyone with a clear head: "Wait, if even Karpathy cannot use these tools effectively... just what is so useful about AI?" Isn't the entire point of AI that I can merely describe my problem and have a solution in a fraction of the time?

The fact that so many true believers in AI seem to forever be just a few more tricks away from really unleashing this power, starts to make it feel very much like magical thinking on a huge scale.

The real danger of AI is that we're entering into an era of mass hallucination across multiple fields and areas of human activity.

0. https://www.wsj.com/tech/ai/ai-chatbot-psychosis-link-1abf9d...

tyre 7 hours ago | parent | next [-]

> I've personally known people who have started to strain relationships with friends and family because they sincerely believe they are evolving into something new.

Cryptoboys did it first, please recognize their innovation ty

lukev 6 hours ago | parent | prev | next [-]

That's NOT AI psychosis, which is real, and which I've seen close-up.

AI psychosis is getting lost in the sauce and becoming too intimate with your ChatGPT instance, or believing it's something it's not.

Skepticism, or a fear of being outside the core loop, is the exact opposite, and that's what Karpathy is talking about here. If anything, this kind of post is an indicator that you're absolutely NOT in AI psychosis.

tom_ 3 hours ago | parent [-]

"the core loop"? What is this?

bentobean 2 hours ago | parent | prev | next [-]

I would really like to hear more about these acquaintances who think they are evolving.

sho_hn 8 hours ago | parent | prev | next [-]

Cyberpunk was right!

timcobb 4 hours ago | parent | prev [-]

WSJ is Fox News Platinum, I wouldn't overthink it

johnfn 8 hours ago | parent | prev | next [-]

I feel Karpathy is smart enough to deserve a less dismissive response than this.

halfmatthalfcat 8 hours ago | parent | next [-]

A mix of "too clever by half" and "never meet your heroes".

rideontime 8 hours ago | parent | prev | next [-]

Why do you feel that way?

techblueberry 8 hours ago | parent | prev [-]

You think we should appeal to authority rather than address the ideas on their own merits?

johnfn 7 hours ago | parent [-]

How is saying the author has "slopbrain" "addressing the idea on its own merits"? It's just name calling.

halfmatthalfcat 7 hours ago | parent [-]

They aren't addressing my comment (which is obviously an overreaction to the tweet); they're asking you why we should appeal to authority rather than evaluate whether Karpathy is completely overreacting and in way too deep.

johnfn 7 hours ago | parent [-]

The intent of my comment was to state that you should write something more substantive than dismissing Karpathy as “slopbrain”. I wasn’t appealing to authority by saying that he was correct — just that he deserves more than name calling in a response.

halfmatthalfcat 6 hours ago | parent [-]

Evidently, with "LLM/AI psychosis" coming into the mainstream zeitgeist, "slopbrain" isn't too far off.

johnfn 6 hours ago | parent [-]

Now you're just saying "AI psychosis exists" (true) and then saying Karpathy has it. That is, again, essentially name calling, like saying someone is insane rather than addressing their points.

If you really think Karpathy is psychotic you should explain why, but I don't think anything in the Tweet suggests that. My read of his tweet is that there is a lot of churn and new concepts in the software engineering industry, and that doesn't seem like a very psychotic thing to say.

throwatdem12311 6 hours ago | parent | prev | next [-]

I call it being "oneshot" by the AI.

Starlevel004 3 hours ago | parent | prev | next [-]

We could call it "Hacker News syndrome"

dvrp 8 hours ago | parent | prev | next [-]

Twitter folks call this LLM or AI Psychosis.

calf 5 hours ago | parent | prev [-]

Slopbrain is interesting because Karpathy's fallacious argumentation mirrors the glib argument of an LLM/AI. It's cognitively recursive, one feeding the other in a self-selecting manner.

flumpcakes 8 hours ago | parent | prev | next [-]

> There's a new programmable layer of abstraction to master (in addition to the usual layers below) involving agents, subagents, their prompts, contexts, memory, modes, permissions, tools, plugins, skills, hooks, MCP, LSP, slash commands, workflows, IDE integrations, and ...

This sounds unbearable. It doesn't sound like software development, it sounds like spending a thousand hours tinkering with your vim config. It reminds me of the insane patchwork of sprawl you often get in DevOps - but now brought to your local machine.

I honestly don't see the upside, or how it's supposed to make any programmer worth their salt 10x better.

globnomulous 7 hours ago | parent | next [-]

> This sounds unbearable.

I can't see the original post because my browser settings break Twitter (I also haven't liked much of Karpathy's output), but I agree. I call this style of software development 'meeting-based programming,' because that seems to be the mental model that the designers of the tools are pursuing. This probably explains, in part, why c-suite/MBA types are so excited about the tools: meetings are how they think and work.

In a way LLMs/chatbots and 'agents' are just the latest phase of a trend that the internet has been encouraging for decades: the elimination of mental privacy. I don't mean 'privacy' in an everyday sense -- i.e. things I keep to myself and don't share. I mean 'privacy' in a more basic sense: private experience -- sitting by oneself; having a mental space that doesn't include anybody else; simply spending time with one's own thoughts.

The internet encourages us to direct our thoughts and questions outward: look things up; find out what others have said; go to wikipedia; etc. This is, I think, horribly corrosive to the very essence of being a thinking, sentient being. It's also unsurprising, I guess. Humans are social animals. We're going to find ourselves easily seduced by anything that lets us replace private experience with social experience. I suppose it was only a matter of time until someone did this with programming tools, too.

ewoodrich 6 hours ago | parent | next [-]

https://xcancel.com/karpathy/status/2004607146781278521

(FYI: you can easily bypass the awful logged out view by replacing x.com with xcancel.com, I use a URL Autoredirector rule to do it automatically in Chromium browsers)
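The rule itself is roughly this (field names vary by extension, so treat it as a sketch):

  Include pattern: ^https?://(x|twitter)\.com/(.+)$
  Redirect to:     https://xcancel.com/$2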

reconnecting 6 hours ago | parent [-]

Awesome hint!

ctmnt 5 hours ago | parent | prev [-]

Use a Nitter mirror [1]. I find xcancel.com the easiest to get to:

https://xcancel.com/karpathy/status/2004607146781278521

[1] https://github.com/zedeus/nitter/wiki/Instances

wakeywakeywakey 8 hours ago | parent | prev | next [-]

> ... or how it's supposed to make any programmer worth their salt 10x better.

It doesn't. The only people I've seen claim such speedups are either not generally fluent in programming or stand to benefit financially from reinforcing this meme.

alexjplant 8 hours ago | parent | next [-]

For every conspicuous vibecoding influencer there are a bunch of experienced software engineers using them to get things done. The newest generation of models are actually pretty decent at following instructions and using existing code as a template. Building line-of-business apps is much quicker with Claude Code because once you've nicely scaffolded everything you can just tell it to build stuff and it'll do so the same way you would have in a fraction of the time. You can also use it to research alternatives to architectural approaches and tooling that you come up with so that you don't paint yourself into a corner by having not heard about some semi-niche tool that fits your use case perfectly.

Of course I wouldn't use an LLM to #yolo some Next.js monstrosity with a flavor-of-the-week ORM and random Tailwind. I have, however, had it build numerous parts of my apps after telling it all about the mise targets and tests and architecture of the code that I came up with up front. In a way it vindicates my approach to software engineering because it's able to use the tools available to it to (reasonably) ensure correctness before it says it's done.

johnfn 8 hours ago | parent | prev | next [-]

I am a professional engineer with around 10 years of experience and I use AI to work about 5x faster on a site I personally maintain (~100 DAU, so not huge, but also not nothing). I don’t work in AI so I get no financial benefit by “reinforcing this meme”.

danpalmer 3 hours ago | parent | next [-]

Same position, different results. I'm maybe 20% faster. Writing the code is rarely the bottleneck for me, so there's limited potential in that way. When I am writing the code, things that I'd find easy and fast are a little faster (or I can leave AI doing them). Things that are hard and slow are nearly as hard and nearly as slow when using AI, I still need to maintain most of the code in my head that I'd need to without AI, because it'll get things wrong so quickly.

I think what you're working on has a huge impact on AI's usability. If you're working on things that are simple conceptually and simple to implement, AI will do very well (including handling edge cases). If it's a hard concept, but simple execution, you can use AI to only do the execution and still get a pretty good speed boost, but not transformational. If it's a hard concept and a hard execution (as my latest project has been), then AI is really just not very good at it.

leshow 8 hours ago | parent | prev [-]

Oh, well if it can generate some simple code for your personal website, surely it can also be the "next level of abstraction" for the entirety of software engineering.

johnfn 8 hours ago | parent [-]

Well, I don’t really think it’s “simple”. The code uses React, nodejs, realtime events pushed via SSE, infra pushed via Terraform, postgres, blob store on S3, emails send with SES… sure, it’s not the next Google, but it’s a bit above, like, a personal blog.

And in any case, you're moving the goalposts. OP said he had never seen anyone serious claim that they got productivity gains from AI. When I claim that, you say "well, it's not the next level of abstraction for all SWE". Obviously - I never claimed that?

leshow 2 hours ago | parent [-]

If you want my opinion, I think LLMs can be pretty good at generating simple code for things you can find on Stack Overflow that require minor adjustments. Even then, if you don't really understand the code, you can have major issues.

Your site is a case in point of why LLMs demo well but kind of fall apart in the real world. It's pretty good at fitting Lego blocks together based on the ton of work other people have put into React and Node or the SSE library you used, etc. But that's not what Karpathy is saying; he's saying "the hottest programming language is English".

That's bonkers. In my experience it can actually slow you down as much as speed you up, and when you try to do more complicated things it falls apart.

packetlost 7 hours ago | parent | prev [-]

Our ops guy has thrown together several buggy dashboards using AI tools. They're passable but impossible to maintain.

flumpcakes 6 hours ago | parent [-]

I personally think that everyone knows AI produces subpar code, and that the infallible humans are just passing it along because they don't understand/care. We're starting to see the gaslighting now: it's not that AI makes you better, it's that AI makes you ship faster, and shipping faster (with more bugs) is now more important because "tech debt is an appreciating asset" in a world where AI tools can pump out features 10x faster (with the commensurate bugs/issues). We're entering the era of "move fast and break stuff" on steroids. I miss the era of software that worked.

psidium 2 hours ago | parent [-]

Yep, bugs are already just another cost of doing business for companies that aren’t user-focused. We can expect buggier code from now on. Especially for software where the users aren’t the ones buying it.

Disclaimer because I sound pessimistic: I do use a lot of AI to write code.

I do feel behind on the usage of it.

qudat 7 hours ago | parent | prev | next [-]

As far as I can tell as a heavy coding-agent user: you don't need to know any of this, and that's a testament to how good code-agent TUIs have become. All I do to be productive with a coding agent is tell it to break a problem down into tasks, store them in beads, and make sure each step is approved by me. I also add a TDD requirement: it needs to write tests that fail and then eventually pass.

Everything else I’ve used has been over engineered and far less impactful. What I just said above is already what many of us do anyway.

tehnub 3 hours ago | parent | next [-]

I predict by the end of next year we will have our AIs write TPS reports.

halfmatthalfcat 7 hours ago | parent | prev | next [-]

This sounds like my complete and utter nightmare. No art or finesse in building the thing - only an exercise in torturing language at something that, at a fundamental level, doesn't understand a thing.

qudat an hour ago | parent | next [-]

I don’t really understand how you got that from my post. I can and do drop in to refactor or work on the interesting parts of a project. At every checkpoint where I require a review I can and do make medications by hand.

Are you complaining about code formatters or auto-fix linters? What about codegen based on API specs? A code agent can do all of those and more. It can do all the boring parts while I get to focus on the interesting bits. It's great.

Here’s another fantastic use case: have an agent gen the code, think about its prototype, delete, and then rewrite it. I did that on a project with huge success: https://github.com/neurosnap/zmx

baq 7 hours ago | parent | prev | next [-]

Nothing stopping you from hand sculpting software like we did in the before times.

Mass production, however, won't stop. It barely started literally a couple of months ago, and it's the slowest and worst it'll ever be.

halfmatthalfcat 7 hours ago | parent | next [-]

I'm not viewing AI tooling as an extinction of the art of programming, only pointing out that telling an AI how to create programs isn't in the same universe as programming: the technical skill required is on par with punching in how long my microwave should nuke my popcorn.

saulpw 6 hours ago | parent | prev [-]

I keep hearing "it's the slowest and worst it'll ever be" as though software ability and performance only ever increase, and yet mass-produced software is slower and enshittier than it was 10-15 years ago, and we're all complaining about it. And you can't say "but it does so much more," because I never asked for 90% of the "more" and just want to turn most of it off.

strange_quark 6 hours ago | parent | next [-]

I’m also not convinced that any of these models are going to stick around at the same level once the financial house of cards they’re built on comes tumbling down. I wonder what the true cost of running something like Claude opus is, it’s probably unjustifiably expensive. If that happens, I don’t think this stuff is going to completely disappear but at some point companies are going to have to decide which parts are valuable and jettison the rest.

flumpcakes 6 hours ago | parent | prev [-]

I can think of a few things that could happen to sink "it's the slowest and worst it'll ever be". Even setting those aside, I think we're generally hitting a ceiling with LLMs. All the annoyances and bugs and, frankly, incompetence of the current models are not going away soon, despite $tn of investment. At this point it is just about propping up the bubble so the USA doesn't have another big recession.

senordevnyc 7 hours ago | parent | prev [-]

Not really at all like this, more like being a tech lead for a team of savants who simultaneously are great at parts of software engineering, and limited at others. Though that latter category is slimmer than a year ago…

The point is, you can get lots of quality work out of this team if you learn to manage them well.

If that sounds like a “complete and utter nightmare”, then don’t use AI. Hopefully you can keep up without it in the long run.

tehnub 6 hours ago | parent | prev [-]

Beads?

cygn 5 hours ago | parent [-]

https://github.com/steveyegge/beads

timcobb 4 hours ago | parent | prev [-]

> This sounds unbearable. It doesn't sound like software development, it sounds like spending a thousand hours tinkering with your vim config

Before LLM programming, this was at least 30-50% of my time spent programming, fixing one config and build issue after another. Now I can spend way more time thinking about more interesting things.

brandonmenc 9 hours ago | parent | prev | next [-]

I admit to pangs of this, but it's really never made any sense because the implication is that the profession is now magically closed off to newcomers.

Imagine someone in the 90s saying "if you don't master the web NOW you will be forever behind!" and yet 20 years later kids who weren't even born then are building web apps and frameworks.

Waiting for it to all shake out and "mastering" it then is still a strategy. The only thing you'll sacrifice is an AI funding lottery ticket.

yoyohello13 8 hours ago | parent | next [-]

Finally, a voice of reason. The tools will just get better and easier to use. I use LLMs now, but I'm not going to dump a bunch of time into learning the new hotness. I'll let other people do that and pick up the useful pieces later.

Unless you're gunning for a top position as a vibe coder, this whole concept of "falling behind" is just pure FOMO.

SoftTalker 2 hours ago | parent | prev | next [-]

People did say that in the 90s. Hence the rush to put everything on the web, whether there was any real business case for it or not. And most of it went up in flames at the end of that decade.

causal 7 hours ago | parent | prev | next [-]

If anything I'd expect all these tools to be easier for new engineers to adopt, unburdened by how things were before.

senordevnyc 7 hours ago | parent | prev [-]

Eh, for myself as a middle-aged software engineer, it feels a little like the last chopper out of Saigon. I feel less and less confident that I can make as good a living in software for the next decade as I have for the last couple. Or if I want to. The job is changing so fast right now, and I’m not sure I like it. When I worked in big tech, I preferred being an IC over an EM or tech lead because I like writing code. Now it feels increasingly like you can’t be an IC in that way anymore. You’re now coding through others, either humans or AI.

Sure, I can write code manually, but in my case I’m working full time on my own SaaS and I am absolutely faster and more effective with AI. It’s not even close. And the gains are so extreme that I can’t justify writing beautiful hand-crafted artisanal code anymore. It turns out that code that’s “good enough” will do, and that’s all I can afford right now.

But long-term, I don’t know that I want to do that work, especially for some corporation. It feels like the difference between being a master furniture craftsman, and then going work in an IKEA factory.

SoftTalker 2 hours ago | parent [-]

What I like to say is that writing software is getting so easy that I don't know how to do it anymore.

hamstergene 3 hours ago | parent | prev | next [-]

I feel like many people in the comments aren't aware that Karpathy is an ML scientist for whom programming is a complementary skill, not a profession. The only reason he came up with "vibe coding" is that the limited complexity of his hobby projects made it seem believable. Maybe take his opinions about the fate of programming with a grain of salt.

He is brilliant no doubt, but not in that field.

ActionHank an hour ago | parent [-]

This is such a great way to frame all his comments.

superze 2 days ago | parent | prev | next [-]

As an Opus user, I genuinely don’t understand how someone can work for weeks or months without regularly opening an IDE. The output almost always fails.

I repeatedly rewrite prompts, restate the same constraints, and write detailed acceptance criteria, yet still end up with broken or non-functional code. It's very frustrating, to say the least. Yesterday alone I spent about $200 on generations that now require significant manual rewrites just to make them work.

At that point, the gains are questionable. My biggest success is having the model take over the first design in my app and then taking it from there, but the hundreds if not thousands of lines of code it generates are so messy that it's insanely painful to refactor the mess afterwards.

SkyPuncher 8 hours ago | parent | next [-]

My trick is to explicitly role-play that we're doing a spike. This gets all of the models to ignore the details they normally get hung up on. Once I have the basics in place, I can tell it to fix the details.

It’s _always_ easier to add more code than it is to fix broken code.

throwatdem12311 6 hours ago | parent | prev | next [-]

I have a hell of a time just getting any LLM to write SQL queries that have things like window functions, aggregates and lateral left joins - even when shoving the entire database schema DDL into the context.

It's so frustrating that it regularly makes me want to just quit the profession. Which is why I still write most code by hand.

data-ottawa 3 hours ago | parent | next [-]

I write a lot of SQL and I haven't had these issues for months, even with smaller models. Opus can one-shot most of my queries faster than I could type them.

Instead of stuffing the context with DDL I suggest:

1. Reorganize your data warehouse. It needs to be easy to find the correct data. Make sure you use clear ELT layers, meaningful schemas, and per-model documentation. This is a ton of work, but if done right the payoff is massive.

2. I built a tool for myself to pull our warehouse into a graph for fuzzy search+dependency chain analysis. In the spring I made an MCP server for it and Claude uses that tool incredibly well for almost all queries. I haven't actually used the GUI or scripts since I built the MCP.

Claude and Devstral are the best models I've used for SQL. I cannot get Gemini to write decent modern SQL -- not even the Gemini data science/engineering agents in Google Cloud. I occasionally try the paid models through the API and still haven't been impressed.
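To give a flavor of the dependency-chain half of (2), here's a minimal sketch in TypeScript (the model names and flat map are hypothetical; the real tool also layers fuzzy search on top):

  // Hypothetical warehouse graph: model -> direct upstream dependencies.
  const deps: Record<string, string[]> = {
    "mart.daily_revenue": ["staging.orders", "staging.refunds"],
    "staging.orders": ["raw.orders"],
    "staging.refunds": ["raw.refunds"],
  };

  // Collect every transitive upstream model, depth-first and deduplicated,
  // so an agent can see exactly which sources feed a given table.
  function upstream(model: string, seen = new Set<string>()): string[] {
    for (const dep of deps[model] ?? []) {
      if (!seen.has(dep)) {
        seen.add(dep);
        upstream(dep, seen);
      }
    }
    return [...seen];
  }

  console.log(upstream("mart.daily_revenue"));
  // ["staging.orders", "raw.orders", "staging.refunds", "raw.refunds"]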

deadbabe 3 hours ago | parent | prev [-]

If you really know SQL, writing an SQL query basically just feels like writing a prompt for a database client anyway, except it does exactly what you ask for.

throwatdem12311 3 hours ago | parent [-]

I have a running joke at work.

* LLMs are just matrix multiplication.

* SQL is just algebra, which has matrix multiplication as part of it.

* Therefore, SQL is AI.

* Now, who is ready to invest a billion dollars in our AI SaaS company?

Or it’s just that astronaut-with-a-gun meme: “Wait, AI is just SQL? ... Always has been.”

cloudflare728 10 hours ago | parent | prev | next [-]

Sometimes I have a similar file or related files. I copy their names and say "use these as reference." Code quality improves tenfold if you do so. Even providing an example from the framework's getting-started guide works great for a new project.

The pain of cleaning up even a small mess is real, too. I had some failing tests and type errors, and I thought I'd fix them later using only AI prompts. As the project grew, the failing TypeScript issues grew too. At some point it was 5,000+ type errors and countless failing unit tests, then more and more. I tried to fix them with AI, since fixing them the old way was no longer possible. Then I discarded the whole project when it was around 500k lines of code.

pca006132 6 hours ago | parent [-]

Question: How many LoC do you let the AI write for each iteration? And do you review that? It sounds like you are letting it run off leash.

cloudflare728 an hour ago | parent [-]

I had no idea how it would end up. It was my first time using an AI IDE; I had only used chatgpt.com and claude.ai for small changes before. I continued as an experiment. I figured the AI wrote plenty of tests, so I would judge progress based on the tests passing. I agree: it was a bad expectation, plus no experience with AI IDEs, plus bad software engineering.

nowittyusername 6 hours ago | parent | prev | next [-]

Most people have not fully grasped how LLMs work or how to properly utilize agentic coding solutions. That is the reason vibe coders end up with low-quality code. It is not a limitation of the technology but of the user (at this stage). Basically, think of it this way: everyone is the grandma who has been handed a Palm Pilot to get things done. Grandma needs an iPhone, not a Palm Pilot, but the problem is that we are not in that territory yet. Now consider the people who were able to use the Palm Pilot very successfully and well: they were few, and they were the exception, but they existed. Same here.

I have been using coding agents for over 7 months now and have written zero lines of code; in fact, I don't know how to code at all. But I have been able to architect very complex software projects from scratch: text-to-speech, automated LLM benchmarking systems for testing all possible llama.cpp sampling parameters, and more, and now I'm building my own agentic framework from scratch. All of these things are possible, and more, without writing one line of code yourself. But it does require understanding how to use the technology well.

shepherdjerred 9 hours ago | parent | prev | next [-]

I hardly ever open an IDE anymore.

I use Claude Code and Cursor. What I do:

- use statically typed languages: TypeScript, Go, Rust, Python w/ types

- Set up linters. For TS I have a bunch of custom lint rules (authored by AI) for common feedback that I've given; a minimal sketch of one follows this list. (https://github.com/shepherdjerred/monorepo/tree/main/package...)

- For Cursor, lots of feedback on my desired style. https://github.com/shepherdjerred/scout-for-lol/tree/main/.c...

- Heavy usage of plan mode. Tell AI something like "make at least 20 searches to online documentation", support every claim with a reference, etc. Tell AI "make a task for every little thing you'll implement"

- Have the AI write tests, particularly the more expensive ones like integration and end-to-end, so you have an easy way to verify functionality.

- Set up the Claude Code GHA to automatically review PRs. Give the review feedback to the agent that implemented it, either via copy-pasting or by telling the agent "fetch review comments and fix them".
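To give a taste of those custom lint rules, here's a minimal sketch of one (hypothetical rule, not from the linked repo): it flags stray console.log calls so the agent gets the same feedback I'd otherwise type out by hand.

  // eslint-plugin-local/rules/no-console-log.js (hypothetical path)
  export default {
    meta: {
      type: "problem",
      messages: { noConsoleLog: "Use the project logger instead of console.log." },
      schema: [],
    },
    create(context) {
      return {
        // esquery selector matching calls like console.log(...)
        'CallExpression[callee.object.name="console"][callee.property.name="log"]'(node) {
          context.report({ node, messageId: "noConsoleLog" });
        },
      };
    },
  };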

Some examples of what I've made:

- Many features for https://scout-for-lol.com/, a League of Legends bot for Discord

- A program to generate TypeScript types for Helm charts (https://github.com/shepherdjerred/homelab/tree/main/src/helm...)

- A program to summarize all of the dependency updates for my Homelab (https://github.com/shepherdjerred/homelab/tree/main/src/deps...)

- A program to manage multiple instances of CLI agents like Claude Code (https://github.com/shepherdjerred/monorepo/tree/main/package...)

- A Discord AI bot in the style of my friends (https://github.com/shepherdjerred/monorepo/tree/main/package...)

throw2312321 2 hours ago | parent | next [-]

Thanks for sharing. So the dumb question - do you feel like Claude Code & Cursor have made you significantly more productive? You have an impressive list of personal projects, and I can see how a power user of AI tools can be very effective with green field projects. Does the productivity boost translate as well to your day job?

shepherdjerred an hour ago | parent [-]

For personal projects, I have found it to be transformative. I've always struggled with perfection and doing the "boring parts". AI has allowed me to add lots of little nice-to-have features and focus less on the code.

I'm lucky enough that my workplace also uses Cursor + Claude Code, so my experience directly transfers. I most often use Cursor for day-to-day work. Claude has been great as a research assistant when analyzing how data flows between multiple repos. As an example I'm writing a design doc for a new feature and Claude has been helping me with the investigation. My workflow is more or less to say: "here are my repos, here is the DB schema, here are previous design docs, now how does system X work, what would happen if I did Y, etc."

AI is still fallible so you _do_ of course have to do lots of checking and validation which can be boring, but much easier if you add a prompt like "support every claim you make with a concrete reference".

When it comes to implementation, I generally give it smaller, more concrete pieces to work with. e.g. for a personal project I would say something like "here is everything I want to do, make a plan, do part 1, then do part 2, example: https://github.com/shepherdjerred/scout-for-lol/tree/227e784...)

At work, I tend to give it PR-sized units of work. e.g. something very well-scoped and defined. My workflow is: prompt, make a PR on GitHub, add comments on GitHub, tell Cursor "I left comments on your PR, address them", repeat. Essentially I treat AI as a coworker submitting code to me.

I don't really know that I can quantify the productivity gain... I can say that I am _much_ more motivated in the last few months because AI removes so much friction. I think it's backed up by my commit history since June/July, which is when I started using Cursor heavily: https://github.com/shepherdjerred

BhavdeepSethi 5 hours ago | parent | prev | next [-]

Cursor is an IDE.

shepherdjerred 4 hours ago | parent [-]

Oh to clarify I used to use Cursor but the last month or two I've used Claude Code almost exclusively. Mostly because it seems to be more generous with credits.

moffkalast 7 hours ago | parent | prev [-]

> make at least 20 searches to online documentation

Lol sometimes I have to spend two turns convincing Claude to use its goddamn search and look up the damn doc instead of trying to shoot from the hip for the fifth time. ChatGPT at least has forced search mode.

shepherdjerred 7 hours ago | parent [-]

I've found that telling it to specifically do N searches works consistently. I do really wish Claude Code had a "deep research" mode similar to 'normal' Claude.

tmaly 8 hours ago | parent | prev | next [-]

What does your software creation workflow look like? Do you have a design phase?

miguel_martin 10 hours ago | parent | prev | next [-]

This is what an AGENTS.md - https://agents.md/ (or CLAUDE.md) file is for. Put common constraints there to correct recurring model mistakes/issues with respect to the codebase, e.g. in a "code style" section.
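A minimal sketch of what that can look like (the sections and rules here are purely illustrative; the file is free-form markdown):

  # AGENTS.md

  ## Code style
  - TypeScript strict mode; never use `any`.
  - Prefer small pure functions over classes.

  ## Gotchas in this codebase
  - Dates are stored as UTC epoch seconds, not ISO strings.

  ## Verification
  - Run `npm test` and `npm run lint` before declaring a task done.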

falcor84 a day ago | parent | prev | next [-]

Why would you spend $200 a day on Opus if you can pay that for a month via the highest tier Claude Max subscription? Are you using the API in some special way?

jefffoster 10 hours ago | parent [-]

At a guess an Enterprise API account. Pay per token but no limits.

It’s very easy to spend $100s per dev per day.

simonw 8 hours ago | parent | next [-]

The $200/month plan doesn't have hard limits either: Claude Code now has an overage fee, so once you've expended your rate-limited token allowance, you can keep on working and pay for the extra tokens out of an additional cash reserve you've set up.

merlincorey 7 hours ago | parent [-]

> The $200/month plan doesn't have limits either... once you've expended your rate limited token allowance... pay for the extra tokens out of an additional cash reserve you've set up

You're absolutely right! Limited token allowance for $200/month is actually unlimited tokens when paying for extra from a cash reserve which is also unlimited, of course.

simonw 7 hours ago | parent [-]

I think you may have misunderstood something here.

When paying for Claude Max even at $200/month there are limits - you have a limit to the number of tokens you can use per five hour period, and if you run out of that you may have to wait an hour for the reset.

You COULD instead use an API key and avoid that limit and reset, but that would end up costing you significantly more since the $200/month plan represents such a big discount on API costs.

As-of a few weeks ago there's a third option: pay for the $200/month plan but allow it to charge you extra for tokens when you reach those limits. That gives you the discount but means your work isn't interrupted.

Extra Usage for Paid Claude Plans: https://support.claude.com/en/articles/12429409-extra-usage-...

merlincorey 7 hours ago | parent [-]

Thank you for the explanation, but I did fully understand that that is what you were saying.

What I don't fully understand is how you can characterize that as "not limited" with a straight face; then again, I can't see your face so maybe you weren't straight faced as you wrote it in the first place.

Hopefully you could see my well-meaning smile with the "absolutely right" opening, but apparently that's no longer common, so I can understand your confusion, as https://absolutelyright.lol/ indicates Opus 4.5 has had it RLHF'd away.

simonw 7 hours ago | parent [-]

When I said "not limited" I meant "no longer limits your usage with a hard stop when you run out of tokens for a five hour period any more like it did until a few weeks ago".

That's why I said "not limited" as opposed to "unlimited" - a subtle difference in word choice, I'll give you that.

falcor84 6 hours ago | parent | prev [-]

Oh, I wasn't arguing that it isn't "easy to spend $100s per dev per day". I was just asking what the use-case for that is.

christophilus 2 days ago | parent | prev [-]

I’ve had decent results from it. What programming language are you using?

Aldipower 9 hours ago | parent | prev | next [-]

I am a software developer and mainly a programmer for decades now. I love programming. I love to be "once" with the computer. I will never give this joy up. If I need to sell shoes at daytime, I will program real computer programs in the evenings. If it won't be possible with modern machinery anymore, I will take my Commodore 64. I am a free man.

Edit: Corrected since/for. :-)

NooneAtAll3 9 hours ago | parent | next [-]

(for decades)

('since' takes time_point - 'for' takes time_duration)

computersuck 7 hours ago | parent | prev [-]

you mean "one" not "once" right?

PessimalDecimal 5 hours ago | parent [-]

Just once. Just for a night.

noosphr 3 hours ago | parent | prev | next [-]

The only time I've felt this much behind was in high school when everyone was talking about how much sex they were having.

AI code is the Canadian girlfriend of programming.

YouWhy 25 minutes ago | parent [-]

Touché! That's a good one.

reconnecting 9 hours ago | parent | prev | next [-]

> OpenAI's sales and marketing expenses increased to _$2 billion_ in the first half of 2025.

Looks like AI companies spend enough on marketing budgets to create the illusion that AI makes development better.

Let's wait one more year, and perhaps everyone who didn't fall victim to these "slimming pills” for developers' brains will be glad about the choice they made.

9dev 8 hours ago | parent | next [-]

Well. I was a sceptic for a long time, but a friend recently convinced me to try Claude Code and showed me around. There's an open source project I regularly come back to: I revive it, code for a bit, have to wrestle with toil and dependency updates, and lose the joy before I really get a lot done, so I stop again.

With Claude, all it took to fix all of that drudge was a single sentence. In the last two weeks, I implemented several big features, fixed long standing issues and did migrations to new major versions of library dependencies that I wouldn’t have tackled at all on my own—I do this for fun after all, and updating Zod isn’t fun. Claude just does it for me, while I focus on high-level feature descriptions.

I’m still validating and tweaking my workflow, but if I can keep up that pace and transfer it to other projects, I just got several times more effective.

reconnecting 7 hours ago | parent [-]

This sounds to me like a lack of resource management, as tasks that junior developers might perform don't match your skills, and are thus boring.

As a creator of an open-source platform myself, I find it hard to trust a semi-random word generator in front of users.

Moreover, I believe it creates a bad habit. I've seen developers forget how to read documentation and instead trust AI, and of course, as a result, AI makes mistakes that are hard to debug or introduces security issues that are easy to overlook.

I know this sounds like a luddite talking, but I'm still not convinced that AI in its current state can be reliable in any way. However, because of engineers like you, AI is learning to make better choices, and that might change in the future.

pca006132 6 hours ago | parent | next [-]

> as tasks that junior developers might perform don't match your skills, and are thus boring.

Yeah, this sounds interesting, and matches my experience a bit. I was trying out AI over Christmas because people I know keep talking about it. I asked it to implement something (a refactoring for better performance) that I thought should be simple; it did, and the result looked amazing: all tests passed too! But when I looked into the implementation, the AI had got the shape right while the internals were more complicated than needed and wrong. Nonetheless, it got me started on fixing things, and it got fixed quite quickly.

The performance of the model in this case was not great, though perhaps that's because I am new to this and don't know how to prompt it properly. But at least it is interesting.

9dev 7 hours ago | parent | prev [-]

That’s a totally fair take IMHO, and I’m very much conflicted on several fronts here. For example, would I want my juniors to use an agent? No; probably not even the mid-levels. As you say, it’s easy to form bad habits, and you need a good intuition for architecture and complexity, otherwise you end up with broken, unmaintainable messes. But if you have that, it’s like magic.

CamperBob2 7 hours ago | parent | prev [-]

> Let's wait one more year, and perhaps everyone who didn't fall victim to these "slimming pills” for developers' brains will be glad about the choice they made.

In that year, AI will get better. Will you?

reconnecting 7 hours ago | parent [-]

AI is only getting better at consuming energy and wasting the time of the people communicating with this T9. However, if talented engineers continue to use it, it might eventually provide more accurate replies as a result.

Answering your question, no matter how much I personally degrade or improve, I will not be able to produce anything even remotely comparable in terms of negative impact that AI brings to humanity these days.

kakapo5672 7 hours ago | parent [-]

I see this logical pairing a lot.

1) AI is basically useless, a mere semi-random word generator.

2) And it is so powerful that it is going to hurt (or even destroy) humanity.

This is called "having your cake, and letting it eat you too".

CamperBob2 24 minutes ago | parent | next [-]

There's no point arguing with someone who's not only wrong, but who doesn't care if they're wrong. ("I will not be able to produce anything even remotely comparable in terms of negative impact that AI brings to humanity these days.")

There are basically no conditions under which one party can or will reach a legitimate common ground with the other. Sucks, but that's HN nowadays.

ewoodrich 6 hours ago | parent | prev [-]

That's a dishonest framing of their argument. There's nothing logically inconsistent in believing wide adoption of AI tools causes developers' skills to atrophy and that the tools also fail to deliver on the hype/promises.

You're inserting "destroy humanity" when OP is suggesting the problem is offloading all thinking to an unreliable tool (I don't entirely agree with their position but it's defensible and not as you stated).

gghffguhvc 9 hours ago | parent | prev | next [-]

My company takes the time between Christmas and New Year's off. I took a week before that off too. I have not used AI in that time. The slower pace of life is amazing. But when I get back to coding, it will be back to running at 180%. It's the new norm. However, I've decided to take longer "no computer" breaks in my day. I have to adapt, but I need to defend my "take it slow" times and find some analogue hobbies. The shift is real and you can't wind it back.

sshine 9 hours ago | parent [-]

I’ve been taking my son for stroller walks more often over Christmas. I bring a headset for listening to music, podcasts, audiobooks, tech talks. “Be effective.” But I end up just walking and thinking, realising this is “free time”.

It sounds ridiculous, and it's easy to say, but spending time walking and thinking will improve your decisions and priorities in a way that no productivity hack will.

I only actually did slow down for a while because I had to for the well-being of my family. Sure feels important to not always be on top of everyone else’s business.

justatdotin 4 hours ago | parent | prev | next [-]

I think it's mistaken to think in terms of 'falling behind' or 'catching up'.

I've seen that these tools have different uses for different devs. On my current team, each of us works very differently from the others, and we make significant allowances to accommodate one another's styles. Certain tasks always go to certain devs; one dev is like a steel trap, another is the chaos explorer, another's a beginner, another has great big-picture perspective, etc. (not sure why, but there's even space for myself ;)

In the same way, different devs use these powerful tools in very different ways. So don't imagine you're falling behind, because the only useful benchmark is yourself. And don't imagine you can wait for consensus: you'll still need to identify your personal relationship to the tools.

Most of all, don't be discouraged. Even if you never embrace these tools, there will remain space for your skills and your style of approaching our shared work.

Give it another 10 years and I'm sure this will all become clearer...

ChrisMarshallNY 4 hours ago | parent [-]

I’ve become comfortable with using LLMs as “trusted advisors.”

I am not [yet] ready to just let an agent write a whole app or server for me, but I am increasingly letting them write a whole function for me.

They are also great “bug finders.” I can just feed some code, describe the symptoms, and ask for an observation. I often get great suggestions, including things like finding typos and copy/pasta problems.

I find that just this limited application has significantly increased my development velocity, and, I believe, the quality of my work.

wmoxam 3 hours ago | parent [-]

IMO LLMs make for a great rubber duck https://en.wikipedia.org/wiki/Rubber_duck_debugging

BhavdeepSethi 5 hours ago | parent | prev | next [-]

Most of the folks that are talking about this are the ones who work independently and work on greenfield projects (especially tooling related). The cost of making a mistake there is so low. I've used it similarly and it's absolutely amazing. Though I still use a mix of agents and code myself in my regular 9-5 job.

I've yet to see examples of folks using this in a team of 4+ folks working together in a production env with users, and just using AI for their regular development.

The Claude Code creator only using Claude Code doesn't count. That's more like dogfooding.

xzkll 3 days ago | parent | prev | next [-]

Does it bother any of you that you now have to pay money in order to do your job? I mean AI model subscriptions. Somehow it feels wrong to me to pay for tools that are trying to replace me.

2sk21 9 hours ago | parent | next [-]

IDEs used to be extremely expensive back in the 1990s. IDEs such as Microsoft Visual Studio and IBM's VisualAge for Java had quite expensive subscriptions, as I recall. Subsequently, open source IDEs like Eclipse and Visual Studio seem to have become the norm.

abeyer 9 hours ago | parent | next [-]

Visual Studio has never been open source, though some of the underlying build tools and compilers are.

Visual Studio Code is a different thing... and claims to be open source, but by intent and approach really is closer to source available.

aeonik 6 hours ago | parent | prev [-]

Compilers and programming languages themselves used to be hideously expensive as well.

threetonesun 9 hours ago | parent | prev | next [-]

Between subscription software and subscription AI and the rising prices of computer hardware, the idea of a "personal computer" is quickly dying.

Aldipower 9 hours ago | parent [-]

Not for me.

wild_egg 7 hours ago | parent | prev | next [-]

Your employer is not paying for these things?

mptest 2 days ago | parent | prev [-]

paying to train* and fund the research for the tools to replace us

bmitch3020 7 hours ago | parent | prev | next [-]

https://xcancel.com/karpathy/status/2004607146781278521

rishabhaiover 3 days ago | parent | prev | next [-]

For the longest time, the joy of creation in programming came from solving hard problems. The pursuit of a challenge meant something. Now, that pursuit seems to be short-circuited by an animated being racing ahead under a different set of incentives. I see a tsunami at the beach, and I’m not sure whether I can run fast enough.

skybrian 10 hours ago | parent | next [-]

I see it more like playing a text adventure game. You give it commands, and sometimes it works, and sometimes the results are unexpected.

zephen 2 hours ago | parent [-]

Personally, I've never been interested in being a character in someone else's story.

But now you've got me thinking. Has anyone studied whether the programmers who are more enamored of AI are also into RPGs?

condensedcrab 3 days ago | parent | prev | next [-]

Not to mention many companies speedrunning systems of strange and/or perverse incentives with AI adoption.

That being said, Welch’s grape juice hasn’t put Napa valley out of business. Human taste is still the subjective filter that LLMs can only imitate, not replace.

I view LLM assisted coding (on the sliding scale from vibe coding to fancy auto complete) similar to how Ableton and other DAW software have empowered good musicians that might not have made it otherwise due to lack of connections or money, but the music industry hasn’t collapsed completely.

tjr 3 days ago | parent [-]

In the music world, I would say that, rather than DAWs, LLM-assisted coding is more like LLM-assisted music creation.

design2203 3 days ago | parent [-]

Yep DAW’s aren’t the comparison. People are not thinking deeply about what is going on - there is a big war on-going in order to eradicate taste and make it systematic to immensely benefit the few.

m463 3 days ago | parent | prev | next [-]

> I can run fast enough.

Can you do some code reviews while you're running?

nextworddev 3 days ago | parent | prev [-]

(Inception scene) here a minute is seven hours

sureglymop 9 hours ago | parent | prev | next [-]

> strengths and pitfalls of fundamentally stochastic, fallible, unintelligible and changing entities suddenly intermingled with what used to be good old fashioned engineering

Sounds fever dreamish. Thank you sincerely (not) for creating it!

presentation 6 hours ago | parent | prev | next [-]

Anything sufficiently useful will be productized and packaged up by somebody out there so that the masses can use it, the rest will be niche and only relevant for the most hardcore enthusiasts, so I’m not so worried.

kusokurae 36 minutes ago | parent | prev | next [-]

This is sales propaganda that should not be endorsed by sharing or further publication.

budududuroiu an hour ago | parent | prev | next [-]

Idk why people take everything Karpathy says as canon. I find his takes since coining the "vibe coding" term deeply unserious and vapid.

gaigalas 3 days ago | parent | prev | next [-]

> Clearly some powerful alien tool was handed around except it comes with no manual

Using tools before their manual exists is the oldest human trick, not the newest.

badgersnake 12 minutes ago | parent | prev | next [-]

If there was more substance behind the hype this might actually be true. But unless you’re in some very specific niches, it’s bollocks.

You’re not doing it wrong, the tools just aren’t all they’re cracked up to be.

zmmmmm 3 hours ago | parent | prev | next [-]

Definitely don't hang out on Hacker News then. It's absolutely the worst place for imposter syndrome or people with any kind of skill inferiority anxiety or confidence issue. Half the reason I read HN is because the anxiety it induces is moderately constructive in motivating me to ensure I keep learning and stay up to date. But I definitely come away every day with a distinct impression I'm below baseline in skill and knowledge for my field, even though within my own circles I'm considered expert by all my peers.

finolex1 4 hours ago | parent | prev | next [-]

Is there anything substantial in his list ("agents, subagents, their prompts, contexts, memory, modes, permissions, tools, plugins, skills, hooks, MCP, LSP, slash commands, workflows, IDE integrations") that Claude Code or Cursor don't already incorporate?

I empathize with his sense that if we could just provide the right context and development harness to an AI model, we could be *that* much more productive, but it might just be misplaced hope. Claude Code and Cursor are probably not that far from the current frontier for LLM development environments.

nineteen999 5 hours ago | parent | prev | next [-]

I'm actually having more fun than I've had in years with this, since I've mainly focussed on my personal projects while getting the hang of what's achievable. And it turns out to be quite a lot if you're a creative thinker.

At first it kind of depressed me, but now I realise that actually writing code is only part of my day job; the rest is integrating infrastructure, managing people, and enabling them to do their jobs. If I can do the coding/integration part faster and give them better tools more quickly, that's a huge win.

This means I can spend more time at the beach and on my physical and mental well being as well. I was stubborn and skeptical a year ago, but now I'm just really enjoying the process of learning new things.

_pdp_ 3 hours ago | parent | prev | next [-]

Over 20 years professional experience here. LLM tools feel great. A single person can now accomplish what used to require many teams.

bopbopbop7 2 hours ago | parent [-]

Over 30 years code artisan here. AI has made me 100x more productive. No, I will not provide proof. Sam Altman is the best.

zmj 2 hours ago | parent | prev | next [-]

Yes-ish. It's worth keeping up with the rising tide of model capabilities, but it's not worth stressing over eliciting every last drop. Many of the specific techniques that add value today will be wasted effort with smarter models in a month or two.

clejack 5 hours ago | parent | prev | next [-]

For the folks who have more positive outlooks: how often do you change your code after it's been generated?

I haven't used agents much for coding, but I noticed that when I do have something created with the slightest complexity, it's never perfect and I have to go back and change it. This is mostly fine, but when large chunks of code are created, I don't have much context for editing things manually.

It's like waking up in a new house that you've never seen before. Sure I recognize the type of rooms, the furniture, the outlets, appliances, plumbing, and so on when I see them; but my sense of orientation is strained.

This is my main issue at the moment.

fragsworth 5 hours ago | parent [-]

> For the folks who have more positive outlooks how often do you change your code after it's been generated?

Every time, unless my initial request was perfectly outlined in unambiguous pseudocode. It's just too easy to write ambiguous requests.

Unambiguous but human-readable pseudocode is what I strive for now, though I will often ask AI to help edit the pseudocode to remove ambiguities prior to generating code.

PaulDavisThe1st 4 hours ago | parent | prev | next [-]

He should join the Ardour project. Or go to work for Ableton or Bitwig or Presonus or Digidesign or MOTU or any other DAW manufacturer. Or any video or image editing application. Or get involved with more or less any complex, "creative" native desktop application.

All of the stuff he feels he is falling behind on? Almost completely irrelevant in our domain.

senordevnyc 4 hours ago | parent [-]

That’s interesting. I wonder if the models will improve on these kinds of tiny niches?

xg15 8 hours ago | parent | prev | next [-]

And there it is again, the "powerful alien tool" that was just "handed to us".

No decades of research, no massive allocation of resources over the last few years, no very intentional decision-making by tech leadership to develop this specific technology.

Nope, it just mysteriously dropped from the sky one day.

layer8 7 hours ago | parent | next [-]

The point is that all that research mostly doesn’t help in mastering the tool. Unlike traditional tools, it doesn’t come with an instruction manual. It’s like an alien tool just handed to us in exactly that sense.

Kuinox 8 hours ago | parent | prev [-]

Do you know who the author is?

techblueberry 7 hours ago | parent | next [-]

It’s written in the title of the post: “Andrej Karpathy”. He’s fairly well known in AI circles; he was head of Autopilot at Tesla and co-founded OpenAI. If you’re curious to learn more about him, the Wikipedia page has a short summary: https://en.wikipedia.org/wiki/Andrej_Karpathy

jeltz 7 hours ago | parent | prev | next [-]

It is even worse coming from him.

xg15 8 hours ago | parent | prev [-]

Yes, and I'm disappointed he seems to have joined the AI mysticism crowd.

paxys 8 hours ago | parent | prev | next [-]

I have never felt this much ahead as a programmer. So many developers I see, including at my workplace, are blindly prompting models hoping to solve their problem and failing every step of the way. The people who truly understand what is happening are still in the ruling class, and their skills are not going to be irrelevant anytime soon.

sod22 2 hours ago | parent | next [-]

Yep, when all this blows over, those who were least exposed to LLMs will be the winners. Patience is important; don't let it be drowned out by the noise.

georgeburdell 7 hours ago | parent | prev | next [-]

Not sure what you mean by blindly prompting models

misiti3780 8 hours ago | parent | prev [-]

100% - I can't believe there are smart people in this conversation who don't see this.

If you don't understand AWS, you can't vibe code a Terraform codebase that creates complex infrastructure, etc.

kshri24 an hour ago | parent | prev | next [-]

> Roll up your sleeves to not fall behind

This confirms the AI bubble for me, and that it is now entirely FUD-driven. "Not fall behind" should only apply to technologies where you have to put in active effort to learn, because it takes years to hone and master the craft. AI is supposed to remove this "active effort" part, to get you up to speed with the latest and bridge the gap between those "who know" and those "who do not". The fact that you need to say "roll up your sleeves to not fall behind" confirms we are not in that situation yet.

In other words, it is the same old learning curve that everyone has to cross, EXCEPT this time it is probabilistic instead of linear/exponential. It is quite literally a slightly-better-than-coin-toss situation whether you learn the right way or not.

For me personally, we are truly in that zone of zero active effort and total replacement when AI can hit 100% on ALL METRICS consistently, every single time, even on fresh datasets with challenging questions NOT SEEN/TRAINED by the model. Even better if it can come up with novel discoveries to remove any doubts. The chances of achieving that with current tech are 0%.

Animats 3 hours ago | parent | prev | next [-]

I feel that way, too.

"Vibe programming" is less than a year old. What is programming going to look like in a few years?

anonzzzies 3 hours ago | parent | prev | next [-]

I wish I got out more. I used to go a lot to meetups and sit next to people 'closer to the hype' showing me the cutting edge stuff; often it was just a 'meh' experience vs the 'this is like seeing god' type of comments on hn/reddit, and sometimes (rarely) it is an eye opener. The 'meh' is usually when people claim it is 10000x more productive: I sit next to them and see them struggle to get even the basics done; after that, they struggle with the same issues I do when I try it, even though they are the 'experts'. I have learned that people call things productive when they are kept 'busy', not when they actually produce results faster.

Anyway:

> agents, subagents, their prompts, contexts, memory, modes, permissions, tools, plugins, skills, hooks, MCP, LSP, slash commands, workflows, IDE integrations,

give me extreme Emacs 'setup' feelings: I was at a meetup in HK recently where someone was advocating this, and it was just depressing; spending hours on stuff that changes daily while my vanilla Claude Code with the Playwright MCP runs circles around it, even after it has been set up. It is just not better at all, and I won't believe otherwise until someone can show that it is actually an improvement, WITH the caveat that when it is an improvement at t(1), it doesn't need a complete overhaul at t(n), where n is a few days or weeks, just because the hype machine says so. This measured against a vanilla CC without any added tooling except maybe the Playwright MCP.

People just want to scam themselves into feeling useful: if the AI does the work, then you find some way of feeling busy by adding and fine-tuning stuff.

albert_e 5 hours ago | parent | prev | next [-]

I can attest to one thing that has grown 10x for sure -- FOMO.

PaulHoule 6 hours ago | parent | prev | next [-]

I don't have a lot of patience for this sort of take, because my north star is project management. In my normal moving-forward mode I work in milestones where I stack up my tools and get something specific done, and screwing around with tools is heavily timeboxed. If AI tools help me make progress, great; if they don't, I fall back to manual methods, get that phase of work done, or (rarely) give up on the subproject. After I get some distance from it I can consolidate my learnings and try a different approach.

It's death, though, to be excessively reading tweets and blogs about this stuff; that will have you exhausted before you even try a real project, comparing yourself to other people's claims, which are sometimes lies, often delusional and ungrounded, and almost always self-serving. Insofar as someone is getting things done with any consistency, they are practicing basic PM: treating feelings of exhaustion, ungroundedness, and especially going in circles as a sign to regroup, slow down, and focus on the end you have in mind.

If the point really is to research tools, then what you do is break down that work into attainable chunks, the way you break down any other kind of work.

alphazard 10 hours ago | parent | prev | next [-]

The thing that always trips me up is the lack of isolation/sandboxing in all of the AI programming tools. I want to orchestrate a workforce of agents, but they can't be trusted not to run amok.

Does anyone have a better way to do this other than spinning up a cloud VM to run goose or claude or whatever poorly isolated agent tool?

dnw 10 hours ago | parent | next [-]

I have seen Claude disable its sandbox. Here is the most recent example from a couple of weeks ago while debugging Rust: "The panic is due to sandbox restrictions, not code errors. Let me try again with the sandbox disabled:"

I have since added a sandbox around my ~/dev/ folder using sandbox-exec on macOS. It is a pain to configure properly, but at least I know where the sandbox is controlled.
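For reference, a minimal sketch of that setup (the profile contents and paths here are illustrative assumptions, not my exact config): write an SBPL profile that denies writes outside ~/dev, then launch the agent under sandbox-exec.

  import os, subprocess, tempfile

  # Deny all file writes except under ~/dev (and tmp); everything else
  # stays allowed so the tool itself can still run. In SBPL, later
  # rules take precedence over earlier ones.
  profile = """
  (version 1)
  (allow default)
  (deny file-write*)
  (allow file-write* (subpath "{dev}"))
  (allow file-write* (subpath "/private/tmp"))
  """.format(dev=os.path.expanduser("~/dev"))

  with tempfile.NamedTemporaryFile("w", suffix=".sb", delete=False) as f:
      f.write(profile)

  # sandbox-exec is macOS-only and officially deprecated, but still present.
  subprocess.run(["sandbox-exec", "-f", f.name, "claude"])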

resfirestar 9 hours ago | parent [-]

That refers to the sandbox "escape hatch" [1]: running a command without a sandbox is a separate approval, so you get another prompt even if that command has been pre-approved. Their system prompt [2] is too vague about what kinds of failures the sandbox can cause; in my experience the agent always jumps straight to disabling the sandbox if a command fails. Probably best to disable the escape hatch and deal with failures manually.

[1] https://code.claude.com/docs/en/sandboxing#configure-sandbox...

[2] https://github.com/Piebald-AI/claude-code-system-prompts/blo...

shepherdjerred 9 hours ago | parent | prev | next [-]

I'm working on a solution [0] for this. My current approach is:

1. Create a new Git worktree

2. Create a Docker container w/ bind mount

3. Provide an interface for easily switching between your active worktrees/containers.

For credentials, I have an HTTP/HTTPS mitm [1] that runs on the host with creds, so there are zero secrets in the container.

The end goal is to be able to manage, say, 5-10 Claude instances at a time. I want something like Claude Code for Web, but self-hosted.

[0]: https://github.com/shepherdjerred/monorepo/tree/main/package...

[1]: https://github.com/shepherdjerred/monorepo/pull/156
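If you just want the skeleton of steps 1 and 2, here's a rough sketch (repo, branch, and image names are hypothetical, and this omits the credential-stripping proxy that makes it actually safe):

  import os, subprocess

  def spawn_agent_workspace(repo: str, branch: str, image: str = "node:20") -> None:
      """Give one agent its own worktree, then jail it in a container."""
      worktree = os.path.abspath(f"{repo}-{branch}")
      # 1. Isolated checkout, so parallel agents never trample each other.
      subprocess.run(["git", "-C", repo, "worktree", "add", worktree,
                      "-b", branch], check=True)
      # 2. The container sees only the worktree; no host credentials inside.
      subprocess.run(["docker", "run", "-it", "--rm",
                      "-v", f"{worktree}:/work", "-w", "/work",
                      image, "bash"], check=True)

  spawn_agent_workspace("myrepo", "agent-1")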

aoeusnth1 6 hours ago | parent [-]

This is also what I did. Actually, Claude did it.

ciconia 9 hours ago | parent | prev | next [-]

If they cannot be trusted, why would you use them in the first place?

CamperBob2 7 hours ago | parent | next [-]

For the same reason you'd build a fire.

zephen 9 hours ago | parent | prev [-]

Obviously people perceive value there, but on the surface it does seem odd.

"These things are more destructive than your average toddler, so you need to have a fence in place kind of like that one in Jurassic Park, except you need to make sure it absolutely positively cannot be shut off, but all this effort is worthwhile, because, kind of like civets, some of the artifacts they shit out while they are running amok appear to have some value."

chasd00 8 hours ago | parent | next [-]

It’s shocking, the collective shrug I get from our security people at work. I attend pretty serious meetings about genAI implementations, and when I ask about points of view on security, given that things as crazy as “adversarial poetry” are a real thing, I just get shrugs. I get the feeling they don’t want to be the ones to say “no, don’t bring genAI to our clients” but also won’t dare say “yes, our clients’ data is safe with integrated genAI”.

ares623 9 hours ago | parent | prev [-]

Love the mix of metaphors.

ashishb 10 hours ago | parent | prev [-]

I run them inside a sandbox https://github.com/ashishb/amazing-sandbox

design2203 3 days ago | parent | prev | next [-]

I’m convinced much of this is all noise - people seem to be focusing on the wrong unit of analysis. Producing software, and lots of it, has never been the problem; coming up with the right projects and producing a product vertically differentiated from what already exists is.

rishabhaiover 3 days ago | parent [-]

That's true. The noise is being generated by people who are directly or indirectly incentivized to talk about it.

> coming up with the right projects and producing a vertically differentiated product to what already exists is.

Agreed but not all engineers are involved with this aspect of the business and the concern applies to them.

1970-01-01 10 hours ago | parent | prev | next [-]

I love that Agile and Scrum are still unmentioned. Can we stick a fork in them yet?

layer8 7 hours ago | parent | next [-]

Don’t you do retrospectives with your coding agents?

zephen 9 hours ago | parent | prev [-]

No, no, no.

We need to have a scrum with 3 agents each from the top 4 AI vendors, with each agent adhering to instructions given by a different programmer.

It's kind of like Robot Wars, except the damage is less physical and more costly.

kazinator 2 hours ago | parent | prev | next [-]

Guy is a wacko.

tehjoker 7 hours ago | parent | prev | next [-]

The person saying this has a financial interest in saying so.

ciconia 9 hours ago | parent | prev | next [-]

I for one am not using AI, will not touch that steaming pile of manure with a 10 yard stick, and I couldn't care less about the so called magnitude 9 earthquake. When this bubble finally bursts into nothingness, I'll be still here practicing my craft and providing real value for my clients.

llmslave2 9 hours ago | parent [-]

I'm using it less and less now, since the sheen has worn off and I've been able to more accurately judge its capabilities. It's like an intern at everything it does and unfortunately I'm expected to produce better code than that.

Capricorn2481 8 hours ago | parent | next [-]

I'm very confused, are you or are you not an LLM run account?

A couple weeks ago, under a freshly made account "llmslave", you said it's already replacing devs and the field is cooked, and anyone who doesn't see that lacks the skills to adopt AI [1]

I pointed out that given your name and low quality comments, you were likely an LLM run account. As SOON as I made that comment, you abandoned the account and have now made a duplicate llmslave2 account, with a different opinion

Are you doing an experiment or something?

[1] https://news.ycombinator.com/item?id=46291504#46292968

llmslave2 7 hours ago | parent [-]

No, I'm just a fan account. No affiliation with the OG llmslave, I just thought the name and concept was funny.

tehlike 8 hours ago | parent | prev [-]

When was the last time you used it?

llmslave2 8 hours ago | parent [-]

An agent like Claude code? Maybe a few weeks ago. I use ai autocomplete and ask Claude to explain basic stuff outside my wheelhouse, generate throwaway bash scripts, etc. And I have Claude review code I'm unsure of / rubber ducky debugging, but that's about it.

Gimpei 2 hours ago | parent | prev | next [-]

I think people need to chill out on this thread. LLMs are neither pure slop nor the end of the programming profession. They are immensely useful tools, particularly for tedious tasks or for quickly getting up to speed on a new API or syntax. They’re great for catching bugs too. Every now and again I’ll give an LLM a prompt and it will knock it out of the park, but that’s exceedingly rare. Most of the time, though, it just allows me to focus on the more interesting parts of my job. In short, for now at least, it is a big productivity booster, not a career ender.

tjr 3 days ago | parent | prev | next [-]

Because these are nondeterministic tools, the output for a given input can vary. Rather than having a solid plan of "if I provide this input, then that will happen", it's more like "if I do something like this, I can expect something like that, probably, and if not, then try again until it works, I suppose".

What are the productivity gains? Obviously, they must vary. The quality of the tool's output varies based on numerous criteria, including what programming language is being used and what problem is being solved. The fact that person A gets a 10x productivity increase on their project does not mean that person B will also get a 10x productivity increase on theirs, no matter how well they use the tool.

But again, tool usage itself is variable. Person A themselves might get a 10x boost one time, and 8x another time, and 4x another time, and 2x another time.

grim_io 3 days ago | parent | next [-]

Non-determinism does not imply non-correctness. You can have the LLM produce 10 different outputs, but maybe all 10 are valid solutions. Some might be more optimal in certain situations, and some might appeal to different people aesthetically.

tjr 3 days ago | parent [-]

Nondeterminism indeed does not imply non-correctness.

All ten outputs might be valid. All ten will almost certainly be different -- though even that is not guaranteed.

The OP referred to the notion of there being no manual; we have to figure out how to use the tool ourselves.

A traditional programming tool manual would explain that you can provide input X and expect output Y. Do this, and that will happen. It is not so clear-cut with AI tools, because they are -- by default, in popular configurations -- nondeterministic.

grim_io 3 days ago | parent [-]

We are one functional output guarantee away from them being optimizing compilers.

Of course, we maybe never get there :)

tjr 3 days ago | parent [-]

Why would one opt to use an LLM-based AI tool as a compiler? It seems that would be extraordinarily complex compared to traditional compilers, but for what benefit?

grim_io 3 days ago | parent [-]

It would be, in its ideal state, a compiler from vague problem to concrete and robust implementation.

A star trek replicator for software.

Obviously we are nowhere near that, and we may never arrive. But this is the big bet.

optimalsolver 17 hours ago | parent [-]

>A star trek replicator for software

That's a very interesting way to put it.

general1465 3 days ago | parent | prev | next [-]

The non-determinism of AI feels like a compiler that, on the same input code, will spit out a different executable on every run. Fixing bugs will become more like a ritual to satisfy the whims of the machine spirit.

fragmede 3 days ago | parent [-]

But how different? Compilers do, in fact, spit out different binaries with each run. There are timestamps and other subtle details embedded in them (esp compiler version and linking) that make the same source result in a different binary. "That's different"; "that's not the same thing!" I see you thinking. As long as the AI prompt "make me a login screen" results in a login screen appropriate for the rest of the code, and not "rm -rf ~/", does it matter if the indeterminism produces a login page with a Google login page before the email login button or after?
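You can check this yourself, for what it's worth. A throwaway sketch, assuming a C toolchain on PATH (the sleep is there because __TIME__ only has one-second resolution):

  import hashlib, pathlib, subprocess, time

  src = pathlib.Path("hello.c")
  src.write_text('#include <stdio.h>\n'
                 'int main(void){printf("built %s %s\\n",__DATE__,__TIME__);}\n')

  digests = []
  for out in ("a1", "a2"):
      subprocess.run(["cc", "hello.c", "-o", out], check=True)
      digests.append(hashlib.sha256(pathlib.Path(out).read_bytes()).hexdigest())
      time.sleep(1.1)  # let __TIME__ tick over between builds

  print("identical" if digests[0] == digests[1] else "different")  # -> different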

stack_framer 3 days ago | parent | prev [-]

Also interesting is the possibility that a 10x boost for person A might still be slower than person B not using AI.

bgwalter 8 hours ago | parent | prev | next [-]

This is from the man who has no finished open source projects and who recommended camera-only FSD to Tesla, which he also did not finish.

The actually productive programmers, who wrote the stack that powers the economy before and after 2023, need not listen to these cheap commercials.

anonnon 5 hours ago | parent | next [-]

> FSD to Tesla, which he also did not finish.

That's why I've never understood HN's continuing infatuation with him. He failed to deliver FSD to Tesla, and arguably even sent them down an R&D dead end, and he doesn't seem to have played a significant role in the generative AI revolution, only joining OpenAI after they developed ChatGPT. Yet when his talks or blog posts get posted here, they're met with almost uniformly positive comments, often many of them.

He reminds me of Sam Altman: for a while, pointing out that pg's emperor was naked, that his first big "success" was a startup, Loopt, that devolved into a seedy, gaunt gay hookup app, slowly wasting away, that it only got acquired thanks to face-saving VC string-pulling, and that that "success" was the springboard of all that followed (YC presidency, feeling out a gubernatorial campaign, OpenAI CEO), would get you swiftly flagged.

threeducks 7 hours ago | parent | prev | next [-]

> This is from the man who has no finished open source projects

To be fair, which open source project can really claim that it is "finished", and what does "finished" even mean?

The only projects that I can truly call "finished" are those that I have laid to rest because they have been superseded by newer technologies, not because they have achieved completeness, because there is always more to do.

bgwalter 7 hours ago | parent | next [-]

Then replace "finished" with "production software".

bdangubic 7 hours ago | parent | prev | next [-]

> not because they have achieved completeness, because there is always more to do.

this is because SWEs love bloat and any good idea eventually needs to balloon into some ever-growing monstrosity :)

bdangubic 7 hours ago | parent | prev [-]

> To be fair, which open source project can really claim that it is "finished", and what does "finished" even mean?

https://github.com/left-pad

CamperBob2 7 hours ago | parent | prev [-]

> who recommended camera-only FSD to Tesla

That's a bummer if true. Is there a reliable source that lays that decision at Karpathy's feet?

bgwalter 7 hours ago | parent [-]

He was "AI" director at Tesla from 2017:

https://www.teslarati.com/tesla-ai-director-hiring-autopilot...

He gave a glowing recommendation for camera-only FSD in 2021:

https://thenextweb.com/news/tesla-ai-chief-explains-self-dri...

Then he left Tesla in 2022. So yes, you could argue that it was all Elon's fault and he just followed for 5 years. We won't know with 100% certainty, but I'd find it odd to stay 5 years if you think it doesn't work.

CamperBob2 4 hours ago | parent [-]

Ouch, thanks for the cite.

What a weird, dumb call that was. "I don't always tackle the toughest engineering problems where lawsuits and lives are at stake, but when I do, I chug a few beers first and tie one hand behind my back."

davesque 7 hours ago | parent | prev | next [-]

Honestly surprised at this take from him. For one, it feels like exaggeration. For two, are these tools really that hard to use?

krackers 5 hours ago | parent | next [-]

I'm surprised too, considering that in https://x.com/karpathy/status/1977758204139331904 he mentioned regarding his NanoChat repo

>Good question, it's basically entirely hand-written (with tab autocomplete). I tried to use claude/codex agents a few times but they just didn't work well enough at all and net unhelpful, possibly the repo is too far off the data distribution.

And a lot of the tooling he mentioned in the OP seems like self-imposed, unnecessary complexity/churn. For the longest time you could say the same about frontend: that you're so behind if you're not adopting {tailwind, react, nodejs, angular, svelte, vue}.

At the end of the day, for the things that an LLM does well, you can achieve roughly the same quality of results by "manually" pasting in relevant code context and asking your question. In cases where this doesn't work, I'm not convinced that wrapping it in an agentic harness will give you that much better results.

Most bespoke agent harnesses are obsoleted by the next model release anyway; the two paradigms that seem to reliably work are "manual" LLM invocation and an LLM with access to a CLI.
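For what it's worth, the "manual" path is easy to script yourself. A sketch, assuming the OpenAI Python client (the model name, paths, and the fetch() question are placeholders):

  import pathlib
  from openai import OpenAI  # pip install openai; key via OPENAI_API_KEY

  # Paste the relevant context yourself instead of trusting a harness
  # to find it: concatenate the files you already know matter.
  context = "\n\n".join(p.read_text() for p in pathlib.Path("src").glob("*.py"))
  question = "Why does the retry loop in fetch() never back off?"

  client = OpenAI()
  resp = client.chat.completions.create(
      model="gpt-4o",  # placeholder; pick whatever model you use
      messages=[{"role": "user", "content": f"{context}\n\n{question}"}],
  )
  print(resp.choices[0].message.content)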

moduspol 5 hours ago | parent | prev [-]

I think the evidence is that even amongst evangelists, they all seem to have different sets of key techniques that change every few months.

6thbit 5 hours ago | parent | prev | next [-]

Sounds to me like Karpathy is in the "valley of despair" of the Dunning-Kruger curve for AI tools.

He knows the tools and he's efficient with them, and yet he now understands how much he's still unable to harness, which makes him feel left behind.

Looking forward to seeing what comes out of him climbing that slope.

leecommamichael 2 days ago | parent | prev | next [-]

Mind you, he is in the industry, founding a company whose success depends on this stuff.

overtone1000 10 hours ago | parent [-]

He meant to post that from his alt account 'regularcoderguy'

deadbabe 3 hours ago | parent | prev | next [-]

I think this is mostly a frontend sentiment.

In the backend, we're mostly just pushing data around from one place to another. Not much changes; there are only a few ways to really do that. Your data structures change, but ultimately the work is the same. You don't even really need an LLM at all, or super complex frameworks and ORMs, etc.

LogicFailsMe 8 hours ago | parent | prev | next [-]

Countdown to his youtube course explaining it all for beginners commences...

CamperBob2 7 hours ago | parent [-]

His "youtube course" already exists, and it's absolutely transformational.

He's working on a more formal educational framework/service of some kind, which will presumably not be free, but what he's already posted is some of the most effective CS pedagogy I've ever encountered (and personally benefited from).

LogicFailsMe 7 hours ago | parent [-]

If he publishes something in this space he can just TAKE MY MONEY!

timcobb 4 hours ago | parent | prev | next [-]

whaaaat and this is the guy who coined "vibe-coding"? I am honestly pretty shocked reading this. I must be a fool or an idiot or both because I, for one, feel like suddenly I went from being a 1x developer to a 10x developer. Maybe 10x folks like Karpathy have it the opposite way?

ekropotin 8 hours ago | parent | prev | next [-]

If Karpathy feels behind, imagine how we regular folks feel.

furyofantares 7 hours ago | parent [-]

I've worked really hard over the last year at working out how to use these things, and it has more than paid off.

But I think if I had started learning today instead of a year ago, I'd get up to speed in more like 6 months instead of a year. A lot of stuff I learned a year ago is not really necessary anymore, but furthermore, there's just a lot more information out there about how to use these from people who have been learning it on their own.

I just don't think people who have ignored it up until now are really that far behind.

cherry_tree 2 days ago | parent | prev | next [-]

Behind who?

Is there someone already mastering “agents, subagents, their prompts, contexts, memory, modes, permissions, tools, plugins, skills, hooks, MCP, LSP, slash commands, workflows, IDE integrations, and a need to build an all-encompassing mental model for strengths and pitfalls of fundamentally stochastic, fallible, unintelligible and changing entities suddenly intermingled with what used to be good old fashioned engineering” ?

And do they have a blog?

lo_zamoyski 9 hours ago | parent [-]

> Behind who[m]?

Why, the other rats in front of you in the race, of course!

As the pithy, if cheesy, expression goes: read not the times; read the eternities. People who spend so much time frantically chasing superficial ephemera like this are people without any sense of life's purpose. They're cogs in some hellish consumerist machine.

lo_zamoyski 10 hours ago | parent | prev | next [-]

If you want to chase the mob off the cliff, go ahead. Insanity and stupidity aren't sound life strategies, though. They're a sign you have lost the plot.

wordsaboutcode 2 days ago | parent | prev | next [-]

i know how he feels :/

halfmatthalfcat 9 hours ago | parent [-]

Let go of your AI gods and embrace the abyss. We've survived for decades without them and will survive in spite of them.

dude250711 3 days ago | parent | prev | next [-]

Man, this is giving me cognitive dissonance compared to my experiences.

Actually, even the post itself reads like cognitive dissonance, with a dash of the usual "if it's not working for you then you are using it wrong" defence.

credit_guy 3 days ago | parent | next [-]

I feel exactly like Karpathy here. I have some work to do, I know exactly what I need to do, and I'm able to explain it to the AI, and the AI seems to understand me (I'm lately using Opus 4.5). I wrote down a roadmap; it should take me a few weeks of coding. It feels like, with a proper workflow with AI agents, this work should be doable in one or two days. Yet I know by now that it's not going to be nearly that fast. I'll be lucky if I finish 30% faster than if I just coded the entire damn thing myself. The thing is, I am a huge AI optimist; I'm not one of the AI skeptics, not even close. Karpathy is not an AI skeptic. We just both feel this sense of possibility, and the fact that we can't make AI help us more is frustrating. That's all. There's no telling anyone else "it's on you if you can't make it work for you". I think Karpathy has figured out by now, and at least I have, that AI skeptics now far outnumber AI optimists, and it has become something akin to a political conviction. It's quite futile to try and change someone's mind about whether AI is good, bad, overhyped, underused, etc. People picked their side and that's that.

llmslave2 10 hours ago | parent | next [-]

I think you articulated perfectly why it's a bubble and why execs are so eager to push it everywhere. It's so alluring, it constantly feels like we're on the verge of something great. No wonder so many people have their brains fried by it.

anthonypasq 8 hours ago | parent [-]

We're 10 months into agentic coding; Claude Code came out in March. I don't understand how you can be so unimaginative about what this might look like in 5 years, even with slow progress.

llmslave2 8 hours ago | parent [-]

It might be genuinely useful in 5 years, my issue is how it's being marketed now. We're 6 months into "AI will be writing 90% of code in three months" among other ridiculous statements.

jennyholzer3 8 hours ago | parent | next [-]

I don't mean to be inflammatory but I am not at all convinced that LLMs will be useful for software development in 5 years!

I think LLMs are very well marketed but I don't think they're very good at writing code and I don't think they've gotten better at it!

llmslave2 7 hours ago | parent [-]

I sort of agree. If anything I feel like they've gotten a bit worse, but the advances in the tooling around them (e.g. Claude Code) have masked that slightly.

I think they are useful as an augmentation, but largely valueless for directly outputting code. Who knows if that will change. It's still made me more productive as a dev despite not oneshotting entire files. It's just not industry-changing, at least yet.

jeltz 7 hours ago | parent | prev [-]

Agreed. It is very similar to gambling in how it tricks the human mind. I am sure some of this AI technology will prove to be useful, but the breakthrough has been just around the corner since soon after ChatGPT was released.

design2203 3 days ago | parent | prev | next [-]

“We just both feel this sense of possibility, and the fact that we can't make AI help us more is frustrating”

The mirage is alluring.

nextworddev 3 days ago | parent [-]

The real mirage is the utility of median developers

jeltz 7 hours ago | parent | next [-]

I think with better processes and training they could be useful. It is just that right now we do not train them, and we put them through scrum and other horrible processes. Median developers are bad due to bad management.

jennyholzer3 8 hours ago | parent | prev [-]

give them better incentives

orwin 7 hours ago | parent | prev [-]

If I can reassure you: if your project is complex enough and involves heavy data manipulation, a 30% improvement using Opus/Gemini 3/Codex 5.2 seems like a good result. I think on complex tasks, Opus 4.5 improves my output by around 20-25%.

And since it's way, way less wrong than Sonnet 4, it might also improve my whole team's velocity.

I won't lie, AI coding has been a net negative for the 'lazy devs' on my team who don't delve into their own generated code (by 'lazy devs' here I mean the subset of devs who do the work but often don't bother to truly understand the logic behind what they used/did; they are very good coworkers, add value, and are not really lazy, but I don't see another term for it).

TeodorDyakov 3 days ago | parent | prev [-]

I think of it this way: if you dropped Einstein via a time machine two thousand years into the past, people would think he was some crazy guy doing scribbles in the sand. No one would ever know how smart he is. The same goes for people and advanced AGI like Gemini 3 Pro or ChatGPT 5.2 Pro. We are just dumber than them.

sponnath 3 days ago | parent | next [-]

Why do you think the models are AGI?

I also like to think that Einstein would be smart enough to explain things from a common point of understanding if you did drop him 2000 years in the past (assuming he also possesses the scientific knowledge humanity accrued in that 2000 year gap). So, your analogy doesn't really make a lot of sense here. I also doubt he'd be able to prove his theories with the technology of the past but that's a different matter.

If we did have AGI models, they would be able to solve our hardest problems (assuming a generous definition of AGI) even if we didn't immediately understand exactly how they got there. We already have a lot of complex systems that most people don't fully understand but can certainly verify the quality of. The whole "too smart for people to understand that they're too smart" is just a tired trope.

clayhacks 3 days ago | parent | prev | next [-]

You are certainly dumber than them if you think they are AGI. These models are smart and getting smarter, but they are not AGI.

billywhizz 8 hours ago | parent | prev | next [-]

> We are just dumber than them.

you are, for sure.

csto12 3 days ago | parent | prev [-]

You think they have “advanced AGI” and are worried about keeping up with the software industry? There would be nothing to keep up with at that point.

To use an analogy, it would be like spending all your time before a battle making sure your knife is sharp when your opponent has a tank.

dnw 9 hours ago | parent | prev | next [-]

I have been using Copilot, Cursor, then CC for a little more than a year now. I have written code with teams using these tools and I am writing mostly for myself now. My observations have been the following:

1) These tools have obviously improved significantly over the past 12 months. They can churn out code that makes sense in the context of the codebase, meaning there is more grounding in the codebase they are working on, as opposed to the codebases they were trained on.

2) On the surface they are pretty good at solving known problems. You are not going to make them write a well-optimized renderer or an RL algorithm, but they can write run-of-the-mill business logic better _and_ faster than I can, if you optimize for both speed of production and quality.

3) Out of the box, their personality is to just solve the problem in front of them as quickly as possible and move on. This leads them to make suboptimal decisions (e.g. solving a deadlock by sleeping for 2 seconds; CC with Opus 4.5, just last night). This personality can be altered with appropriate guidance. For example, a shortcut I use is to append "idiomatic" to my request: "come up with an idiomatic solution" or "is that the most idiomatic solution we can think of." Similarly, when writing or reviewing tests I use "intent of the function under test", which makes the model output better solutions and code.

4) These models, esp. Opus 4.5 and GPT 5.2, are remarkable bug hunters. I can point at a symptom and they come away with the bug. I then ask them to explain to me why the bug happens, and I follow the code to see if it's true. I have not come across a bogus find yet. They can find deadlocks and starvations; you then have to guide them to a good fix (see #3).

5) Code quality is not sufficient to create product quality, but it is often necessary to sustain it. The sustainability window is shorter nowadays; therefore, more than ever, quality of the code matters. I can see Claude Code slowly degrading in quality every single day (and I use it every single day, for many hours). As much as it pains me to say this, compared to Opencode, Amp, and Toad I can feel the "slop" in Claude Code. I would love to study the codebases of these tools over time to measure their quality; I know it's possible for all but Claude Code.

6) I used to worry that I don't have a good mental model of the software I build. Much like journaling, there is something to be said for how the process of writing/making gives you a very precise mental model. However, I have been trying to let that go and use the model as a tool to query and develop the mental model post facto. It's not the same, but I think it is going to be the new norm. We need tooling in this space.

7) Despite your own experiences with these tools, it is imperative that they be in your toolbox. If you have abstained from them thus far, perhaps the best way to get them incorporated is by starting to use them to attend to your toil.

8) You can still handcraft code. There is too much fun, beauty, and pleasure in it to deny yourself the practice. Just don't expect this to be your job. This is your passion.

flumpcakes 8 hours ago | parent [-]

> Despite your own experiences with these tools it is imperative that they be in your toolbox.

Why is it imperative? Whenever I read comments like this I just think the author is cynically drumming up hype because of the looming AI bubble collapse.

dnw 6 hours ago | parent | next [-]

Fair question. It is "imperative" for two reasons. First, despite having rough edges now, I find these tools to be actually useful, so they are here to stay. Second, I think most developers will use them and make them part of their toolchain. So, if one wants to be at parity with their peers, then it stands to reason they adopt these tools as well.

In terms of bubbles: bubbles are economic concepts and they will burst, but the underlying technology finds its market. There are plenty of good open source models and open source projects like OpenCode/Toad that support them. We can use those without contributing (too much) to the bubble.

kakapo5672 6 hours ago | parent | prev [-]

There's a financial AI bubble for sure - that's pretty much a mainstream opinion nowadays. But that's an entirely different thing from AI itself bubble-collapsing.

If you truly believe AI is simply going to collapse and disappear, you are deep in some serious cope and are going to be unpleasantly surprised.

ldng 2 days ago | parent | prev | next [-]

Yeah. OR. You just ignore the bullshit until the bubble bursts. Then we'll see what's left, and it will not be what the majority thinks.

tayo42 10 hours ago | parent | next [-]

There seems to be a lot of churn, like JS had. We can just wait and see what the React of LLMs ends up being.

falcor84 a day ago | parent | prev [-]

The "bubble" is in the financial investment, not in the technology. AI won't disappear after the bubble bursts, just like the web didn't disappear after 2000. If anything, bursting the financial bubble will most likely encourage researchers to experiment more, trying a larger range of cheaper approaches, and do more fundamental engineering rather than just scaling.

AI is here to stay, and the only thing that can stop it at this stage is a Butlerian jihad.

design2203 10 hours ago | parent | next [-]

AI was here long before LLMs… also, I dislike people seemingly tying the two terms together as one.

ldng 19 hours ago | parent | prev | next [-]

I maintain that the web today is not what people thought it would be in 1998. The tech has its uses; it's just not what the snake oil sellers are making it out to be. And talking about a Butlerian jihad is borderline snake oil selling.

falcor84 14 hours ago | parent [-]

Interesting. What particular 1998 claims do you have in mind that were not (at least approximately) fulfilled?

wiseowise 8 hours ago | parent | prev | next [-]

Not even Butlerian Jihad will stop the current progress at this point.

lo_zamoyski 6 hours ago | parent [-]

Resistance is futile, eh?

lo_zamoyski 6 hours ago | parent | prev [-]

Borg logic consists of framing matters of choice as "inevitable". As long as those with power convince everyone that technological implementation is "inevitable", people will passively accept their self-serving and destructive technological mastery of the world.

The framing allows the rest of us to get ourselves off the hook. "We didn't have a choice! It was INEVITABLE!"

And so, we have chosen.

falcor84 6 hours ago | parent [-]

But history shows that it is inevitable. Can you give me an example of a single useful technology that humans ever stopped developing because of its negative externalities?

> "We didn't have a choice! It was INEVITABLE!"

There is no "we". You can call it the tragedy of the commons, or Moloch, or whatever you want, but I don't see how you can convince every single developer and financial sponsor on the planet to stop using and developing this (clearly very useful) tech. And as long as you can't, it's socially inevitable.

If you want a practice run, see if you can stop everyone in the world from smoking tobacco, which is so much more clearly detrimental. If you manage that, you might have a small chance at stopping implementation of AI.

andrekandre 3 hours ago | parent [-]

  > see if you can stop everyone in the world from smoking tobacco

This is a logical fallacy, I think; nobody needs to stop tobacco full stop, but we have been extremely successful at making it less and less incentivized/used over time, which is the goal...

[1] https://www.lung.org/research/trends-in-lung-disease/tobacco...

globular-toast 9 hours ago | parent | prev | next [-]

I don't usually post something like this, but this is so fucking stupid. I'm prepared to stand by that. Let's see in a few years if I'm right.

"AI" is literally models trained to make you think it's intelligent. That's it. It's like the ultimate "algorithm" or addiction machine. It's trained to make you think it's amazing and magical and therefore you think it's amazing and magical.

zmmmmm 3 hours ago | parent | next [-]

Sure, but there's no reason there can't be a correlation between us "thinking" it's intelligent and it actually being intelligent. What other proxy should we use? I can't think of a scenario with a good practical ending where it's actually intelligent but humans don't think it is. It's at least necessary even if it isn't sufficient.

viraptor 9 hours ago | parent | prev | next [-]

This could apply if we looked at questions in a vacuum: someone has a conversation and judges the models based on that. But some of us just use it for work and get good results daily. "Intelligent" is irrelevant; it's "useful". It doesn't matter what feelings I have about it if it saves me 2h of typing from time to time.

chasd00 8 hours ago | parent [-]

To me, as just another kinda-old (I’m 49) SWE, the biggest benefit of using an LLM tool is that it saves a shit ton of typing. I know what I want and I know when it’s right; just saving me from typing it all out is worth $20 a month.

kakapo5672 6 hours ago | parent | prev | next [-]

Recently I needed to summarize about a thousand lengthy documents, and then translate those summaries into Mandarin.

I spent about a minute composing the prompt for this task, and then went for a cup of coffee. When I got back the task was done. I spot-checked the summaries and they were excellent.

I thought this was amazing and magical at the time. Am I wrong? Or is it simply the AI making me think this result was amazing and magical?

leecommamichael 9 hours ago | parent | prev | next [-]

It’s trained to (lossily) compress large amounts of data. The system prompts have leaked, and it’s just instructed to be helpful, right? I don’t entirely disagree with your sentiment, though. It’s brute force.

jennyholzer3 7 hours ago | parent | prev | next [-]

The system prompt may vary but:

"It's trained to make you think it's amazing and magical and therefore you think it's amazing and magical."

is the dark pattern underlying the entire LLM hype cycle IMO.

heliumtera 6 hours ago | parent | prev | next [-]

Congratulations on that one!

Now that you have unlocked this secret, you're cursed forever. They look at the machine and say: hey, look, the machine is just like me! You're left confused for the best part of 3 years, and then you start realizing it was true all along... they are... very much similar to the machine. For a moment we were not surprised by how capable the machine was at reasoning. And then it dawned on us: the machine had human-level intelligence and cognition from the beginning, just from a slightly different perspective.

yacthing 6 hours ago | parent | prev [-]

'"AI" is literally models trained to make you think it's intelligent.'

What's the difference? I try to make people think I'm intelligent all the time.

bopbopbop7 2 hours ago | parent [-]

Weird self roast but okay.

oakpond 3 days ago | parent | prev | next [-]

> There's a new programmable layer of abstraction to master (in addition to the usual layers below) involving agents, subagents, their prompts, contexts, memory, modes, permissions, tools, plugins, skills, hooks, MCP, LSP, slash commands, workflows, IDE integrations, and a need to build an all-encompassing mental model for strengths and pitfalls of fundamentally stochastic, fallible, unintelligible and changing entities suddenly intermingled with what used to be good old fashioned engineering.

Slop-oriented programming

alexcos 2 days ago | parent | prev | next [-]

"I've never felt this much behind as a programmer. The profession is being dramatically refactored as the bits contributed by the programmer are increasingly sparse and between. I have a sense that I could be 10X more powerful if I just properly string together what has become available over the last ~year and a failure to claim the boost feels decidedly like skill issue. There's a new programmable layer of abstraction to master (in addition to the usual layers below) involving agents, subagents, their prompts, contexts, memory, modes, permissions, tools, plugins, skills, hooks, MCP, LSP, slash commands, workflows, IDE integrations, and a need to build an all-encompassing mental model for strengths and pitfalls of fundamentally stochastic, fallible, unintelligible and changing entities suddenly intermingled with what used to be good old fashioned engineering. Clearly some powerful alien tool was handed around except it comes with no manual and everyone has to figure out how to hold it and operate it, while the resulting magnitude 9 earthquake is rocking the profession. Roll up your sleeves to not fall behind."

thomasfromcdnjs 2 days ago | parent | prev | next [-]

I have been telling everybody I know over the Christmas break that I have been coding from around age 10 to 36, as a career and always in my spare time as a hobby. I have lacklustre computer science knowledge and never worked at the scale of FAANG etc., but am still rather confident in my understanding of code and the tech scene in general. I've been telling people I haven't "coded" for almost 6 months now; I only interface with agentic setups and only open my IDE to make copy and config changes.

I understand we are all in different camps for a multitude of reasons;

- The jouissance of rote coding and abstraction

- The tree of knowledge specifically in programming, and which branches and nodes we each currently sit at in our understanding

- Technical paradigms that humans may have argued about have now shifted to obvious answers for agentic harnesses (think something like TDD; I for one barely used that style because I've mostly worked in startups building apps and found the cost of my labour not worth it, but agentic harness loops absolutely excel at it)

- The geography and size of the markets we work in

- The complexity of the subject matter / domain expertise

- The cost-prohibitive nature of token-based programming (not everyone can afford it, and the big fish seemingly have quite the advantage going forth)

- Agentic coding has proven it can build UIs very easily, and depending on experience, it can build very many things easily. It excels when it has feedback loops such as linting or simple JavaScript errors, which are observability problems in my opinion. Once it can do full-stack observability (APM, system, network), its ability to reason about and correct problems on the fly for any complex system seems within easy reach, from my purview.

- At the human-nature level, some individuals prefer to think in 0's and 1's, some in words, some in between, and so on; what type of communication do agentic setups prefer?

With some of that above intuition, which is easily up for debate, I've decided to lean 100% into agentic coding. I think it will be absolutely everywhere, obviously with humans in the loop, but I don't think humans will need to review the pull requests. I am personally treating it as an existential threat to my career after having seen enough of what it's capable of (with some imagination and a bit of a gambling spirit, as us mere mortals surely can't predict the future).

With my gambit, I'm not choosing to exit the tech scene; instead I'm optimistically investing my mental prowess into figuring out where "humans in the loop" will be positioned. Currently I'm looking into CI-level tooling, the known parts being code quality and all the various software testing paradigms. The emerging evals, in my mind, will keep evolving and, beyond testing our ideas of model intelligence and chatbot responses, will do a lot more.

---

A more practical rant: if you are building a recommendation engine for A and B, the engine could have some number of modules that each return a score, and the combined scores make up the final decision between A and B. Forgive me, but let's just use dating as an example. A product manager would say we need a new module to calculate relevance between A and B based on their food preferences. An agentic harness can easily code that module and create the tests for it. The product manager could ask an LLM to make a list of 1000 reasons why two people might be suitable for dating. The agent could easily go away and code and test all those modules, and probably maintain technical consistency, but drift from the company's philosophical business model. I am looking into building "semantic linting" for codebases: how can the agent maintain the code so it aligns with the company's business model? And if for whatever reason those 1000 modules need to be refactored, how can it keep that alignment? Essentially, I'm trying to make a feedback loop between the company's needs and the code itself, to stop the agent and the business from drifting in either direction, and to allow automatic feedback loops for the agent to fix drift when it happens (a toy sketch of the module idea follows below). In short, I think there will be new tools invented that we humans will be mastering, as per Karpathy's point.
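To make the module idea concrete, here's that toy sketch (every name in it is hypothetical, not an actual system I've built): each module scores one signal, the engine combines them, and a "semantic lint" pass could later check that every registered module traces back to a documented business rule.

  from typing import Callable

  # A module scores one signal between two profiles, returning a value in [0, 1].
  ScoreModule = Callable[[dict, dict], float]

  def food_preference_score(a: dict, b: dict) -> float:
      """One of the 1000 generated modules: overlap in food tastes."""
      foods_a, foods_b = set(a["foods"]), set(b["foods"])
      union = foods_a | foods_b
      return len(foods_a & foods_b) / len(union) if union else 0.0

  # The "semantic lint" hook: every module key must cite a business rule,
  # which a checker (or an agent) can verify against the rule catalog.
  MODULES: dict[str, ScoreModule] = {
      "business-rule-017/food-preferences": food_preference_score,
  }

  def match_score(a: dict, b: dict) -> float:
      """Combine all module scores into the final A-vs-B decision."""
      return sum(m(a, b) for m in MODULES.values()) / len(MODULES)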

anothereng 5 hours ago | parent [-]

Interesting. How can I get into building agents? I have the Kiro IDE for a project, but how can I make sure what they're doing is correct? Right now I'm just vibecoding or using the more detailed requirements path, but I haven't used coding agents because I actually don't get how the feedback loop works with them.

nen-nomad 7 hours ago | parent | prev [-]

Claude Code didn’t make me faster. It changed the calendar. What used to take me months now takes weeks. The work didn't vanish; the friction did.

Two years ago I was a human USB cable: copy, paste, pray. IDE <-> chat window, piece by piece. Now the loop is tighter. The distance is shorter.

There’s still hand-holding. Still judgment. Still cleanup. But the shift is real.

We’ve come a long way. And we’re not done.

JDye 6 hours ago | parent [-]

Can't even write a comment without an LLM...