Let's talk about LLMs(b-list.org)
110 points by cdrnsf 6 hours ago | 73 comments
jwpapi 15 minutes ago | parent | next [-]

It's the biggest swindle...

You could fetch some unfinished GitHub repos or download free templates. That's actually faster than LLMs, yet nobody would do it.

I don't start my project with the ecommerce Next.js starter repo. I build it from scratch, because it's faster...

mfro 4 hours ago | parent | prev | next [-]

I think you're misunderstanding the paradigm shift completely -- AI does not just generate code N(x) more quickly. It thinks N(x) faster, it researches N(x) faster, it tests N(x) faster. There are hundreds of tasks that you'll find engineers are offloading to AI every day. The major hurdle right now is actually pivoting LLMs from just generating code: integrating those tasks into workflows. This is why tool-use and agentic workflows have taken engineering by storm.

michaelchisari 3 hours ago | parent | next [-]

Debugging, sanity checking, testing, etc. are the best uses of LLMs. Much better than writing code.

Developers should write their own code and use LLMs to design and verify. Better, faster architecture and planning, pre-cleaned PRs, and no skill atrophy or loss of understanding on the part of the developer.

jb1991 2 hours ago | parent | next [-]

Funny, I have the complete opposite impression after using Claude Code for a while. I would never trust it to design anything. Never again. But it can code pretty well given a very tight and limited scope.

michaelchisari 2 hours ago | parent [-]

To clarify, AI should not do the design itself. You develop the design in conversation with AI.

I come in knowing what I need to build and at least one idea or more of how it should be done. I present the problem, constraints, potential solutions, and ask for criticisms and alternatives. I can keep it as broad as possible or I can get more granular like struct layouts, api endpoints, etc. I go back and forth until there's an approach I prefer and then I code that approach.

> it can code pretty well given a very tight and limited scope.

It's wildly better at tight and limited scope than large scale changes but even then I would rather code it myself.

radarsat1 2 hours ago | parent | next [-]

> It's wildly better at tight and limited scope than large scale changes but even then I would rather code it myself.

One thing I would like to see is the use of LLMs for smarter semi-manual editing.

While programming I often need to make very similar changes in several places. If the instances are similar enough, I can get away with recording a one-off keyboard macro and repeating it, but if the differences are too tricky to handle that way, I end up doing a lot of manual editing.

It would be nice to see LLMs tightly integrated into the editor, so I could say "place the cursor at things like this" based on an example or two. I'm sure there are more ideas for using LLMs to quickly perform the semantic changes you intend, instead of just prompting for a big diff. I feel there's a lot more innovation possible in this direction, where you're still "coding it yourself", just faster.
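A concrete sketch of the idea: the user demonstrates one edit, a generalized pattern handles the rest. In an LLM-integrated editor the model would infer the pattern from the example; here that step is faked with a hand-written regex, and the pattern, method names, and sample code are all hypothetical:

```python
import re

# Hypothetical example edit the user demonstrated once:
#   obj.get_x()  ->  get_x(obj)
# The "inferred" generalization, hand-written here for the sketch:
pattern = re.compile(r"(\w+)\.get_(\w+)\(\)")
replacement = r"get_\2(\1)"

# Apply it at every similar site, including nested ones.
src = "a = p.get_x()\nb = q.get_y()\nc = helper(r.get_z())\n"
result = pattern.sub(replacement, src)
print(result)
```

The interesting part, of course, is having the model produce `pattern` and `replacement` from one or two examples instead of the user writing the regex.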

empthought a minute ago | parent | next [-]

You should try using the existing agents for your semi-manual editing. You don't need editor support. The coding agent can find "things like this" faster than you can.

strange_quark 2 hours ago | parent | prev [-]

I've had a similar thought. A super-refactor feature would be amazing, but it wouldn't fit into the current zeitgeist of agent-everything. Hopefully, as the hype starts to die down and prices go up, we'll get some of these smaller, more targeted features.

skydhash an hour ago | parent | prev [-]

> I come in knowing what I need to build and at least one idea or more of how it should be done. I present the problem, constraints, potential solutions, and ask for criticisms and alternatives

Never understood that argument, because there are two steps in design: finding a good solution (discussing prior art, tradeoffs, …) and then nailing the technical side of that solution (data structures, formulas, …). Is it the former, the latter, or both?

dyauspitr 2 hours ago | parent | prev [-]

They're actually really good at both: writing code and all the paraphernalia around it.

oytis 2 hours ago | parent | prev | next [-]

The article addresses exactly this objection. Most importantly, it cites evidence that AI coding tools have a detrimental effect on software stability, which is basically the raison d'être of our profession. When AI produces more robust software and handles on-call shifts better than humans do, I will consider programming done.

tptacek 2 hours ago | parent [-]

I'm excited to read the first cogent piece making this point that doesn't devolve into gatekeeping: a detached and vaguely hostile professional software developer telling people with a newfound capability to solve practical problems for themselves with new software that they don't, or shouldn't, want the thing that they want, because whatever they come up with won't be "fit for purpose" until it is blessed by the guild, whose bylaws are extrapolated from Brooks and the fundamental "limitations of LLMs".

oytis 2 hours ago | parent | next [-]

Indeed, I am less sure about his argument about democratising software. The only problem in my own life that I solve with software is the problem of getting paid, so what do I know? If someone can generate a piece of code for their needs, and they risk harming no one but themselves, then it's a great application of LLMs.

ekidd an hour ago | parent | prev | next [-]

The unfortunate reality is that a lot of software does have hard constraints. And a lot of these constraints are "gatekept" by regulators, compliance policies, insurance companies, etc. If someone slops together a medical record system, and leaks a bunch of PHI, there will be consequences, even in the US. Similarly, good luck getting insurance against cyber attacks without a SOC2 audit or equivalent.

I've had this conversation with managers in multiple organizations this year: "Yes, you could totally vibe code that instead of paying for a SaaS. But you have strict contractual and professional obligations about data security. Do you want to be deposed and asked, 'So, did you really just vibe code the system that led to the data leak? Did the vibe coders have any professional qualifications? Did they even look at the code?'"

Similarly, a backend server that handles 8 million users a day is expected to stay up.

Now, there are 10,000 things that have less demanding requirements. I'm actually really delighted that people are able to vibe code their own tools with minimal knowledge of software engineering! We have been chronically underproducing niche software all along.

But if your software already has on-call shifts (and SLAs, etc) like the GP, then I think you want to be smart about how you combine human expertise with LLMs.

tptacek 37 minutes ago | parent | next [-]

OK, I have no idea who you are, and this isn't personal, I'm responding to a comment and not a person --- but this is an argument that posits that one of the big problems with LLM software is "SOC2 audits". Since SOC2 audits are basically not a meaningful thing, I'm left wondering if the rest of your argument is similarly poorly supported.

It feels like a dunk to write that. But I genuinely do think there's so much motivated reasoning on both sides of this issue, and one signal of that is when people tip their hands like this.

yellowapple 5 minutes ago | parent [-]

Since when are SOC audits not a meaningful thing?

kasey_junk a minute ago | parent [-]

If SOC audits are driving your development process, you are doing it backwards. And _certainly_ a time is coming when just using the LLM will be SOC-compliant.

skydhash an hour ago | parent | prev [-]

That's why the biggest proponents of LLM tooling are managers and entrepreneurs (i.e., people incentivized to reduce salary costs). But anyone who has to keep the system running and doesn't want to wake up in the middle of the night is rightly cautious.

cfloyd an hour ago | parent | prev [-]

Nailed it

imiric 2 hours ago | parent | prev | next [-]

> The major hurdle right now is actually pivoting LLMs from just generating code: integrating those tasks into workflows.

Funny, I thought the major hurdle was improving accuracy and reliability, as it always has been. Engineering is necessary and useful, but it's a much simpler problem, which is why everyone is jumping on it.

paganel 2 hours ago | parent | prev | next [-]

> it tests N(x) faster.

It does? You mean "it tests itself faster", which is not really a test now, is it?

cfloyd an hour ago | parent | next [-]

I use one model for coding and another for writing tests, for that very reason. It's surprisingly good at TDD.

kefirlife an hour ago | parent | prev [-]

I read that to mean you can arm it with a harness you design that reports whether the tests pass. An LLM can leverage this to run tests faster than I would run the same harness myself. You can then add whatever programmatic logic is needed to cover your use case, and have some degree of certainty that the product at least passed those tests.
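A minimal sketch of such a harness, assuming nothing about any particular agent framework (the function name and the result shape are made up for illustration):

```python
import subprocess
import sys

def run_tests(test_cmd):
    """Run a test command and return a summary an agent (or a human)
    can act on without parsing raw terminal output."""
    proc = subprocess.run(test_cmd, capture_output=True, text=True)
    return {
        "passed": proc.returncode == 0,
        # Truncate logs so they fit comfortably in a model's context.
        "log": (proc.stdout + proc.stderr)[-2000:],
    }

# A trivially passing and a trivially failing "suite" for illustration.
ok = run_tests([sys.executable, "-c", "assert 1 + 1 == 2"])
bad = run_tests([sys.executable, "-c", "assert 1 + 1 == 3"])
print(ok["passed"], bad["passed"])  # True False
```

The point is that the agent only ever sees a structured pass/fail signal plus a bounded log, which is what lets it iterate quickly.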

brcmthrowaway 4 hours ago | parent | prev | next [-]

True. Knowledge workers are cooked.

pingou 4 hours ago | parent | prev | next [-]

Not sure why you are downvoted, but I agree. Additionally, perhaps LLMs are just another higher-level programming language, as the author said, and they still need someone to steer them.

I'm sure it was very difficult to program in machine code, but if now (or soon) anyone can just write software using an LLM without any sort of learning, it changes everything. LLMs can plan and create something usable from simple instructions or ideas, and they will only get better.

I think LLMs will be (and already are) useful for many more things than programming anyway.

smartmic 3 hours ago | parent | next [-]

> I'm sure it was very difficult to program in machine code, but if now (or soon) anyone can just write software using an LLM without any sort of learning, it changes everything. LLMs can plan and create something usable from simple instructions or ideas, and they will only get better.

Did you read the section "Power to the People?"? In it, the author dismantles your thesis with powerful, highly plausible arguments.

hombre_fatal 28 minutes ago | parent | next [-]

I read that section but I disagree with it.

1. You don't have to be an LLM expert to get good, consistent results with LLMs.

My best vibe-code process after years of using LLMs is to have Claude Code create a plan file and then cycle it through Codex until Codex finds nothing more to review, then have an agent implement it. This process is trivial yet produces amazing results.

It's solved by better and better harnesses.

2. You don't have to write technical specs. The LLM does that for you. You just tell it "I want the next-tab button to wrap back to the first one" and it generates a technical plan. Natural language is fine.

3. Software that seems to work only to fail down the line in production is already how software works today. With LLMs, you can paste in the stack trace or the user's bug email and it will fix it.

This is why vibe-coding works. Instead of simulating in your head how an app will run by looking at its code, you run the app and tell the LLM what isn't working correctly. The app spec is derived iteratively through a UX feedback loop.

4. I don't understand TFA's goalposts, but letting people who are only interested in the LLM process (rather than in software craftsmanship) create software would be a huge democratization of software.

mfro 3 hours ago | parent | prev | next [-]

While I think the author is entirely right about 'natural language programming' in the current day, if LLMs (or some other AI architecture) continue to improve, it is easy to believe touching code could become unnecessary even for large projects. Consider that this is what software company executives do all the time: outline a high-level goal (a software product) to their engineering director, who largely handles the details. We just don't yet know whether LLMs will ever manage that level of intelligence and independence in open-ended tasks. And, to expand on that, I don't know that intelligence is necessarily the bottleneck for this goal. They can clearly tackle even large engineering tasks, but common complaints are that they miss important architectural context or choose a suboptimal solution. Maybe with better training, context handling, and documentation, these things will cease to be problems.

pingou 2 hours ago | parent | prev [-]

I have indeed missed the arguments so powerful that they dismantle my thesis.

Would there even be a debate in the tech community if such unassailable arguments existed? The author is entirely entitled to his opinion, just as I am allowed to disagree with him (not sure why I am also being downvoted). The good thing is, if I'm right, we will see it in less than 10 years.

fragmede 3 hours ago | parent | prev [-]

> they will only get better.

I don't buy that that's true. The "only" part, anyway. Look at how UX in software has evolved. This is going to be an old-man-yells-at-clouds take, but before smartphones, there were hotkeys. And man, you could fly with those things. The computers running things weren't as fast as they are today, but you could mash in a whole sequence through muscle memory and just wait for it to complete. Now you have to poke at your phone, wait for it to respond, poke at it some more. It's really not great for getting fast. AI advancement is going to be like that. Directionally it will generally get better, but there will be some niche where, y'know what, ChatGPT-4o really had it in a way that 5.5 does not. (Rose-colored glasses not included.)

dgellow 3 hours ago | parent | prev [-]

Claude connected to Postgres (read-only, obviously) and Datadog MCP servers, in addition to having access to the codebase, can debug prod issues remarkably quickly. That's easily a 10x win compared to a senior engineer doing the exact same debugging steps. IMHO that's where the actual productivity boost is.

kelnos 3 hours ago | parent | prev | next [-]

>> Within just this group the ratios between best and worst performances averaged about 10:1 on productivity measurements and an amazing 5:1 on program speed and space measurements!

> (although I’m personally skeptical of the “10x programmer” concept, the software industry overall does seem to accept it as true)

To be fair, this statement from Brooks doesn't entirely match the "10x programmer" we talk about. My take is that when someone says "10x programmer" today, they mean 10x more productive than the average, not 10x more productive than the worst; Brooks' statement is about the latter. If he'd looked at the difference between the average and the best, I assume you'd get something more like a 2x or 4x programmer.

atleastoptimal 3 hours ago | parent | prev | next [-]

"LLM's Aren't Going to Fundamentally Change Software Development" Says Increasingly Nervous Man For Seventh Time This Year

slopinthebag 2 hours ago | parent [-]

I didn't get the sense that the author is nervous. What I tend to see are people who are nervous that going all-in on LLM workflows might not have the payoff they are expecting, and are becoming increasingly fanatical as a result.

Just one more harness bro. Just one more agentic swarm. Please bro, just one more Claude Max subscription. Please bro.

atleastoptimal 25 minutes ago | parent | next [-]

Complaining about every one-off issue with LLMs ignores the bigger picture: they are getting better every month, and there is no fundamental reason why they wouldn't surpass humans in coding. Everything else is secondary.

All I would need from an LLM doubter is evidence that LLMs are not improving at tractable software engineering tasks. The strongest argument against the increasing general capabilities of LLMs is the ARC-AGI tasks; however, their creators admit that each generation of LLMs exceeds their expectations, and that AGI will be achieved within the decade.

aspenmartin 2 hours ago | parent | prev [-]

You say this as though performance has not shown very clear and extremely rapid improvement in a startlingly short amount of time.

You're definitely right that people adopt agentic workflows and are disappointed, or worse, but the point is that the disappointment has already decreased substantially and will continue to do so. We know this because we know the scaling laws, and also because learning theory has been around for many decades.

strange_quark 44 minutes ago | parent | next [-]

What rapid improvement has occurred? Because in this six-month AI coding fever dream we've been living in, I really haven't seen anything new in a while, either in new ideas for AI coding or in new consumer products and services.

I'll grant that the coding harnesses themselves are better, because that was a new product category with a lot of low-hanging fruit, but have the models actually improved in a way that isn't just benchmaxxing? I'd argue the models seem to be regressing. Even the most AI-pilled people at my company have complained that Opus 4.7 is a dud. Anecdotally, GPT 5.5 seems decent, but it's rumored to be a 10T-parameter model, isn't noticeably better than 5.4 or 5.3, is insanely expensive to use, and seems to be experiencing model collapse, since the system prompt has to beg the thing not to talk about goblins and raccoons.

jatora 5 minutes ago | parent [-]

An uninformed opinion from someone who clearly doesn't use AI coding tools consistently. And why are you limiting it to six months? What's wrong with you?

cyclopeanutopia 2 hours ago | parent | prev | next [-]

Perhaps you are confusing performance with instability?

paganel 2 hours ago | parent | prev | next [-]

> very clear and extremely rapid improvement in a startlingly short amount of time.

We're almost six months into all this AI-code madness, and I've yet to see the "rapid improvement" you mention. As in, software products that are genuinely better than they were six months ago, or new (and good) software products that would not have existed had this AI craze not happened.

slopinthebag 2 hours ago | parent | prev [-]

Yes but we don't know the shape of the curve and where we are on it.

dabedee 4 hours ago | parent | prev | next [-]

It was a welcome change to read a deliberate, well-thought-out, and well-written article that tries to take readers on a rational journey. Thank you.

empthought 8 minutes ago | parent | prev | next [-]

> But ultimately, the only situation in which LLMs could meaningfully democratize access to software development is one where they achieve a true silver bullet, by significantly reducing or removing essential difficulty from the software development process.

The author didn't seem to read the Brooks essay for comprehension. There is an entire section about expert systems that foreshadows agents. While there is no singular silver bullet, Brooks explores the most promising techniques to reduce essential complexity that were anticipated in 1986.

> The most powerful contribution of expert systems will surely be to put at the service of the inexperienced programmer the experience and accumulated wisdom of the best programmers. This is no small contribution.

Furthermore, his objection to automatic programming was simply an argument from incredulity, which was an understandable opinion at the time, yet is quite vacuous in hindsight.

smartmic 4 hours ago | parent | prev | next [-]

If you're interested in Fred Brooks's "No Silver Bullet," I also explored it in the context of LLMs: https://smartmic.bearblog.dev/no-ai-silver-bullet/

js8 an hour ago | parent [-]

In fact, AI might be the opposite of a managerial "silver bullet". The more we automate what is repetitive, the less predictability remains overall. Things can get more productive on average, but managing them becomes harder, as productivity amplifies risk.

ilia-a 3 hours ago | parent | prev | next [-]

Even without writing code, LLMs are a huge help: analyzing code, doing code reviews, documenting code, and so on. Without writing a single line of "code", LLMs hugely speed up development and take away the annoying, boring work.

riknos314 16 minutes ago | parent | next [-]

In pretty much every case where I've previously thought "I wish we had a tool for this, but I can't get the time funded to build it," I now just have AI work on the tool in the background and check in on it whenever I have a few minutes of dead time before or after meetings.

The time savings from having progressively better tooling add up quickly.

nijave an hour ago | parent | prev [-]

Been using Claude Code for cost ops and reporting at work, and it's saved an insane amount of time. I can generate a report in 10-15 minutes that would have taken 2-3 days of scripting/SQL, and CC can even spit out a script to reproduce it later.

It's not terribly hard to check, either. You can do some spot checks against cost dashboards in AWS, Datadog, etc., and see if the numbers line up.

You can also tell Claude "go right-size the environment; pull p95 usage metrics for the last 3 months," and a couple of hours later a bunch of money is saved. Much easier than manually pulling trend data, and also easier than installing, configuring, and managing tools that do it for you.
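For reference, the p95 being pulled here is just a nearest-rank percentile. The sample data and the headroom factor below are made-up illustrations, not real metrics or a sizing recommendation:

```python
import math

def p95(samples):
    """Nearest-rank 95th percentile: the smallest observed value that
    is greater than or equal to 95% of all observations."""
    s = sorted(samples)
    rank = math.ceil(0.95 * len(s))
    return s[rank - 1]

# Toy CPU samples (say, millicores sampled over 3 months); real data
# would come from Datadog or CloudWatch.
cpu = list(range(1, 101))
limit = p95(cpu) * 1.2  # 20% headroom: an arbitrary example policy
print(p95(cpu), limit)
```

Spot-checking a number like this against a dashboard is exactly the kind of verification the parent describes.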

mwaddoups 3 hours ago | parent | prev | next [-]

This was a great read - thanks so much for taking the time to write this. Well researched and thought provoking. Long live the em dash.

trwhite 3 hours ago | parent | prev | next [-]

A well researched and written piece

slopinthebag 3 hours ago | parent | prev | next [-]

I really enjoyed this article. It's well written and does a good job of dismantling the flawed arguments of the language-model maxis while presenting a more realistic outlook on where we are now and where we are going.

I think the biggest benefit language models have provided me is in the auxiliary aspects to programming: search, debugging, rubber ducking, planning, refactoring. The actual code generation has been mixed.

I had an LLM try to implement a fairly involved feature the other day, providing it with API spec details, examples from other open-source libraries, and plenty of specifications. It's also the kind of thing readily available in training data, but still fairly involved.

On first glance it looked great, and had I not spent the time to investigate deeper I would have missed some glaring deficiencies and omissions that render its implementation worthless. I am now going back and writing it by hand, but with language models providing assistance along the way, and it's going much better.

I think people are being unrealistic in assuming that the use of language models in their side projects represents something broader. A side project is almost the perfect situation for language models: a small, greenfield codebase, no review, no responsibility, and no users. It goes up on GitHub with a pretty README, and then off to social media, where they post about how developers are "cooked". It's just not a very realistic test.

In the end, we will probably see large productivity increases from integrating language models, but they won't replace developers so much as augment them.

senko an hour ago | parent | prev | next [-]

The accidental vs essential difficulty argument ignores the fact that you can abstract away (some) essential difficulty if you're willing to take a performance hit.

Design patterns in an older (programming) language become core language features in a newer one. As we internalize and abstract away the best patterns for something, the difficulty they address becomes accidental, but that's only obvious in retrospect.

The article quotes Brooks (quoting Parnas) about just that (later, in context of LLMs):

> automatic programming always has been a euphemism for programming with a higher-level language than was presently available to the programmer. [...] Once those accidents have been removed, the remaining ones are smaller, and the payoff from their removal will surely be less.

Considering this was written when C was the hot new thing, let's compare the ability to code a CRUD web app in Python/Django vs. C. What Brooks and Parnas are saying is that Python/Django cannot bring big improvements in building a CRUD web app compared to C, because they can only make it easier to program by reducing accidental complexity. But we've since redefined "accidental", and I would argue that you can write a CRUD web app in Python/Django at least 100x faster than in C (and probably at least 100x more securely), although it may take 1000x more CPU and RAM while running.
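To make the gap concrete without dragging in Django itself, here is the full CRUD cycle in standard-library Python, with sqlite3 standing in for the ORM (the schema and values are arbitrary). The C equivalent, even without the web layer, would be dramatically longer:

```python
import sqlite3

# Create/read/update/delete in a handful of lines; in C you would be
# hand-rolling storage, memory management, and string handling.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")

db.execute("INSERT INTO items (name) VALUES (?)", ("widget",))          # Create
name = db.execute("SELECT name FROM items WHERE id = 1").fetchone()[0]  # Read
db.execute("UPDATE items SET name = ? WHERE id = 1", ("gadget",))       # Update
updated = db.execute("SELECT name FROM items WHERE id = 1").fetchone()[0]
db.execute("DELETE FROM items WHERE id = 1")                            # Delete
remaining = db.execute("SELECT COUNT(*) FROM items").fetchone()[0]
print(name, updated, remaining)  # widget gadget 0
```

Whether you call the difference "accidental" or "essential", it is real, and it is large.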

So "we have removed most of the accidental difficulties, and what remains is mostly essential" is a kind of "end of history" argument.

> I’d be surprised if there’s even a doubling of productivity still available from a complete elimination of remaining accidental difficulty.

It's good that this statement has a conditional subjective guard, because that's just punditry.

> LLM coding does not represent a silver bullet

Here I agree with the author completely, but probably not for the same reasons. The definition of "silver bullet" the article uses (quoting Brooks):

> There is no single development, in either technology or management technique, which by itself promises even a single order-of-magnitude improvement within a decade in productivity, in reliability, in simplicity.

AI-assisted development is not a single technique, the same way "devops" or "testing" or "agile" is not a single technique. But more importantly, I agree it will take time to find best practices, for the technology change to slow down, and for the best approaches to diffuse across the industry.

The article's conclusion:

> You should be adopting and perfecting solid foundational software development practices like version control, comprehensive test suites, continuous integration, meaningful documentation, fast feedback cycles, iterative development, focus on users, small batches of work… things that have been known and proven for decades, but are still far too rare in actual real-world software shops.

These are great, and I'm gonna let him/her finish, but it's curious that actual coding isn't mentioned anywhere. The author doesn't suggest "polish your understanding of C pointer semantics" or "the Rust ownership model" or "the Django ORM", or to really, deeply understand B-trees. It looks like pedestrian details like those are left as an exercise for the reader... or the reader's LLM.

AIorNot 4 hours ago | parent | prev | next [-]

The problem with this article is that he is right, of course, but only right now. There is no reason to believe that future AI platforms won't be able to review code themselves and manage some aspects of themselves with minimal human oversight. Yes, we will likely always need a few experts.

I'm reminded of this scene from The Matrix: https://www.youtube.com/watch?v=cD4nhYR-VRA where the older wise man discusses society's reliance on AI:

"Nobody cares how it works, as long as it works"

We're done. I, for one, welcome our new AI overlords, or, more accurately, the tech-bro billionaires who are pulling the strings.

frizlab 3 hours ago | parent | next [-]

> There is no reason to believe that future AI platforms won't be able to review code themselves and manage some aspects of themselves with minimal human oversight

There are, IMHO, fewer reasons to believe they will be able to do that than reasons to believe they won't.

CamperBob2 2 hours ago | parent [-]

LLMs became much better at both reviewing and writing code over the last 12-18 months. Did you?

The current state of the art is irrelevant. Only the first couple of time derivatives matter.

paulhebert 2 hours ago | parent [-]

> Did you?

I would say I got better at both of those over the last 12-18 months. Are your skills static?

CamperBob2 an hour ago | parent | next [-]

Compared to Claude or GPT 5.5? Yeah, my skills are static relative to the progress seen recently. So are yours, unless your grandpa was named von Neumann or Szilard.

eiekeww 2 hours ago | parent | prev [-]

My brain got better at thinking deeply when I stopped using LLMs.

Lmao, why does that seem outlandish to other people? Perhaps they never thought very deeply in the first place, so they don't recognize it.

slopinthebag 2 hours ago | parent | prev [-]

> There is no reason to believe that future AI platforms won't be able to review code themselves and manage some aspects of themselves with minimal human oversight

Really? That's like someone during an economic boom saying, "The economy is the worst it'll ever be; there is no reason to expect things not to keep improving."

keybored 4 hours ago | parent | prev | next [-]

I have no stake in Fred Brooks. But No Silver Bullet seemed to be taken as gospel on this board. Sufficiently productivity-enhancing technology? Gimme a break, man. Maybe you'll get a 30% boost. Not a 10x boost.

Until recently. dramatic pause

And then AI happened.

taormina 3 hours ago | parent [-]

Great! So all of this 10x boosting is visible in which economic indicator?

slopinthebag 2 hours ago | parent [-]

Debt.

stackghost 4 hours ago | parent | prev | next [-]

Let's actually not talk about LLMs.

I honestly couldn't force myself to finish yet another blog post about how "we're not yet sure what impact LLMs will have on society," or whatever belabored point the author was attempting to make.

"Some random person's take on LLMs" was maybe interesting in 2024. Today it is not even remotely interesting.

There are a gazillion more interesting things happening today that ought to be of interest to the median HN reader. Can we talk about those instead?

jubilanti 3 hours ago | parent | next [-]

I'm confused. If you don't want to talk about LLMs then why didn't you just flag the post and move on? Submit something interesting, upvote and comment on interesting posts, instead of feeding the engagement on this thread.

It sounds like you actually do want to talk about how much you don't want other people to talk about LLMs.

famouswaffles an hour ago | parent | next [-]

You're not supposed to flag a post for something like that. Ideally you downvote and move on if you feel that strongly about it. Flagging is meant to be reserved for stuff that breaks the rules or guidelines.

WolfeReader an hour ago | parent [-]

Stories can't be downvoted.

stackghost 2 hours ago | parent | prev [-]

Oh, I definitely flagged the post also.

mettamage 4 hours ago | parent | prev [-]

I am an AI engineer, and I honestly agree. Talking about LLMs feels like the new crypto, with some nuances (i.e., many innovative things are possible and being done with LLMs, whereas crypto innovations were... few and far between).

dijksterhuis 4 hours ago | parent | next [-]

It's felt like the new crypto to me for about 2-3 years now.

I was doing an ML security PhD a year or two before all this hype took off. I took one of the original transformer papers along to present at our official little PhD reading group when the paper was only a few months old (the details of this might be a bit sketchy; it was years ago now).

Now I want nothing to do with the field in any way, shape, or form. I'm just done.

Edit: I got incredibly angry after writing this comment. Pure hatred and spite for all the charlatans and the accompanying bullshit.

eiekeww 2 hours ago | parent [-]

Sadly, investing is all about making money… You should be more pissed at the naive people who have contributed to the effort, and in particular at those who care not about truth but about cash-flow potential.

keybored 4 hours ago | parent | prev [-]

Tedious LLM discourse isn’t aimed at AI engineers. It’s doomscrolling fodder for regular programmers.

gizajob 4 hours ago | parent | prev | next [-]

Actually can we not thanks.

cadamsdotcom 3 hours ago | parent | prev [-]

> If its two empirical premises—that the accidental/essential distinction is real and that the accidental difficulty remaining today does not represent 90%+ of total—are true, then the conclusion which rules out an order-of-magnitude gain from reducing accidental difficulty follows automatically.

The article goes on to assume there’s no 10x gain to be had but misses one big truth.

Needing to type the code out is an enormous source of accidental difficulty (typing speed, typos, whether you can be arsed to put your hands on the keyboard today…), and it is gone thanks to coding agents.