xmodem 4 days ago

What's your point, though? Let's assume your hypothesis and 5 years from now everyone has access to an LLM that's as good as a typical staff engineer. Is it now acceptable for a junior engineer to submit LLM-generated PRs without having tested them?

> It was thought impossible for a computer to reach the point of being able to beat a grandmaster at chess.

This is oft-cited, but even cursory research shows it was never close to a universally held view.

SunshineTheCat 4 days ago | parent | next [-]

In the scenario I'm hypothesizing, why would anyone need to "check" or "test" its work? What chess players are checking to make sure Stockfish made the "right" move? What determines whether it's "right" is that Stockfish made it.

xmodem 4 days ago | parent | next [-]

Your post sent me down a rabbit hole reading about the history of computers playing chess. Notable to me is that AI advocates were claiming that a computer would be able to beat the best human chess players within 10 years as far back as the 1950s. It was so long ago they had to clarify they were talking about digital computers.

Today I learned that AI advocates being overly optimistic about its trajectory is actually not a new phenomenon - it's been happening for more than twice my lifetime.

asadotzler 4 days ago | parent | prev [-]

There are clear win conditions in chess. There are not for most software engineering tasks. If you don't get this, it's probably a safe bet that you're not an engineer.

SunshineTheCat 4 days ago | parent [-]

Right, which is why Deep Blue won in the late 90's, and now, years later, AI is moving on to far more complicated tasks, like engineering software.

The fact that you gave me the "you just don't understand, you're not a chess grandmaster" emotional response suggests I'm pretty much right on target with this one.

FWIW I have been engineering software for over 15 years.

throw1235435 3 days ago | parent | prev [-]

It's hard to imagine now, but the code won't matter. We will have other methods of validating the product, I think, just as we did before tech. There are many ways to validate something, and validation is an easier problem than creation (which these AI models are somewhat solving right now).

It's all very demoralizing, but I can see the trend. In the end, all the "creative" parts of the job will disappear; AI gets to do the fun stuff.

We invented something that devalues human craft and contribution. If you weren't skilled in that craft, or saw it as a barrier, you win and are excited by this (CEO types, sales/ideas people, influencers, etc.). If you put in the hard yards and did the work to build hard skills and ship product, you lose.

To be very clear: AI devalues intelligence and shifts value to what is still scarce (political capital, connections, nepotism, physical work, etc.). It mostly destroys meritocracy.