timr 2 hours ago

> A lot of what one previously needed a SWE to do can now be brute forced well enough with AI. (Granted, everything SWEs complained about being tedious.)

Only if you ignore everything they generate. Look at all the comments saying that the agent hallucinates a result, generates always-passing tests, etc. Those are absolutely true observations -- and don't touch on the fact that tests can pass, the red/green approach can give thumbs up and rocket emojis all day long, and the code can still be shitty, brittle and riddled with security and performance flaws. And so now we have people building elaborate castles in the sky to try to catch those problems. Except that the things doing the catching are themselves prone to hallucination. And around we go.
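For anyone who hasn't run into an "always-passing" test in the wild, here's a minimal hypothetical sketch (the function and test names are invented for illustration): the test calls the code and shows green, but its assertion is a tautology that can never fail, so the bug slips through.

```python
def apply_discount(price: float, pct: float) -> float:
    """Intended to return price reduced by pct, but buggy: pct is ignored."""
    return price  # bug: should be price * (1 - pct)

def test_discount():
    # "Always-passing" test: the assertion holds for any return value,
    # so it conveys zero information about correctness.
    result = apply_discount(100.0, 0.2)
    assert result is not None  # green checkmark, nothing verified

# A meaningful assertion would pin down actual behavior and expose the bug:
#     assert apply_discount(100.0, 0.2) == 80.0
```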

So because a portion of (IMO always bad, but previously unrecognized as bad) coders think that these random text generators are trustworthy enough to run unsupervised, we've moved all of this chaotic energy up a level. There's more output, certainly, but it all feels like we've replaced actual intelligent thought with an army of monkeys making Rube Goldberg machines at scale. It's going to backfire.

JumpCrisscross 2 hours ago | parent | next

> coders think that these random text generators are trustworthy enough to run unsupervised, we've moved all of this chaotic energy up a level

But it works well enough for most use cases. Most of what we do isn’t life or death.

timr an hour ago | parent

> But it works well enough for most use cases.

So does the code produced by any bad engineer.

So either we’re finally admitting that all of that leetcode screening and engineer quality gating was a farce, or it wasn’t, and you’re wrong.

I think the answer is in the middle, but the pendulum has swung too far in the “doesn’t matter” direction.

JumpCrisscross an hour ago | parent

> we’re finally admitting that all of that leetcode screening and engineer quality gating was a farce, or it wasn’t, and you’re wrong

We’re admitting a bit of both. Offshoring just became more instantaneous, secure and efficient. There will still be folks who overplay their hand.

Macroeconomically speaking, I don’t see why we need more software engineers in the future than we have today, and that’s probably a conservative estimate.

datsci_est_2015 10 minutes ago | parent

> Macroeconomically speaking, I don’t see why we need more software engineers in the future than we have today, and that’s probably a conservative estimate.

Why? Is the argument that there’s a finite amount of software that the world needs, and therefore we will more quickly reach that finite amount?

Seems more likely to me that if LLMs are a force multiplier for software then more software engineers will exist. Or, instead of “software engineers”, call them “people who create software” (even with the assistance of LLMs).

Or maybe the argument is that you need to be a super genius 100x engineer in order to manipulate 17 collaborative and competitive agents in order to reach your maximum potential, and then you’ll take everyone’s jobs?

Idk, it just seems like wild speculation that isn't even worth me arguing against. Too late now that I've already written it out, I guess.

cherk3 an hour ago | parent | prev

What I want to know is, what has this increase in code generation led to? What is the impact?

I don't mean 'Oh I finally have the energy to do that side project that I never could'.

After all, the trade-offs have to be worth something... right? Where are the one-person billion-dollar firms that Mr Altman spoke about?

The way I think of it, code has always been an intermediary step between a vision and an object of value. So has that end activity increased enough for the trade-offs to be a net benefit?