SunshineTheCat 4 days ago

I know this won't be popular, but I think the distinction between a "real developer" and one who relies mostly, or even solely, on an LLM is coming to an end. Right now, I fully agree that relying wholly on an LLM and failing to test its output is very irresponsible.

LLMs do make mistakes. They do a sloppy job at times.

But give it a year. Two years. Five years. It seems unreasonable to assume they will hit a plateau that prevents them from building, testing, and shipping code better than any human on earth.

I say this because it's already happened.

It was thought impossible for a computer to reach the point of being able to beat a grandmaster at chess.

There was too much "art," experience, and nuance to the game for a computer to ever fully grasp or understand. Sure, there was the "math" of it all, but computers lacked the human intuition that many thought was essential to winning and could only be achieved through a lifetime of practice.

Many years after Deep Blue vs. Garry Kasparov, the best players in the world laugh at the idea of even getting close to beating Stockfish or any other even mediocre chess engine.

I say all of this as a 15-year developer. This happens over and over again throughout history. Something comes along to disrupt an industry or profession and people scream about how dangerous or bad it is, but it never matters in the end. Technology is undefeated.

gitaarik 3 days ago | parent | next [-]

Yes, we're already there, and the human responsibilities are shifting from engineering to architecting. The AI does the execution; the human makes the decisions. LLMs can never make decisions fully by themselves, because they need to be directed by humans; otherwise they go out of sync with what we actually want.

newsoftheday 4 days ago | parent | prev | next [-]

> There was too much "art," experience, and nuance to the game that a computer could ever fully grasp or understand.

That's the thing though, AI doesn't understand, it makes us feel like it understands, but it doesn't understand anything.

simonw 4 days ago | parent [-]

Turns out that doesn't matter for chess, where the winning conditions are formally encoded.
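The point about formal encoding can be illustrated without any "understanding" at all: a win condition is just a predicate over the board state. A minimal sketch in Python, using tic-tac-toe rather than chess for brevity (the names here are illustrative, not from any particular library):

```python
# A formally encoded win condition: a pure predicate over board state.
# The rules fully define what "winning" means; no intuition required.

WIN_LINES = [
    (0, 1, 2), (3, 4, 5), (6, 7, 8),  # rows
    (0, 3, 6), (1, 4, 7), (2, 5, 8),  # columns
    (0, 4, 8), (2, 4, 6),             # diagonals
]

def winner(board):
    """board is a 9-char string of 'X', 'O', or '.'; returns the winner or None."""
    for a, b, c in WIN_LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

print(winner("XXX" "OO." "..."))  # top row is all X, so X has won
```

Chess needs a far larger rule set (check, legal-move generation, stalemate), but the principle is the same: "did the engine win?" is mechanically decidable, which is exactly why the engine's output doesn't need a human to vouch for it.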

throw1235435 3 days ago | parent | prev | next [-]

You will get downvoted, but I unfortunately agree with you, also as a SWE of similar tenure. People assume there are other things to jump to, and yes, in the short term there may be. But the industry already has those things on its roadmap to disrupt (i.e., to generate more economically useful work).

For better or worse, the software career is wounded, and the AI wolves can smell blood. It's low-hanging fruit that they understand, and in disrupting it they can make a lot of money.

As an industry dies of disruption, the leftover money is made by whoever disrupts it first, and this will speed up engineering efforts to kill the profession, like rats fleeing a sinking ship. Corporate stakeholders will also be the first to spend big on anything that does; in my experience they prefer communicators and people who are accountable, not people who deliver.

It was a good ride. I could never have imagined this trajectory 3 years ago.

JackSlateur 4 days ago | parent | prev | next [-]

  This happens over and over again throughout history.
Could you share a single instance of a machine that thinks? Are we sharing the same timeline?

xmodem 4 days ago | parent | prev [-]

What's your point, though? Let's assume your hypothesis and 5 years from now everyone has access to an LLM that's as good as a typical staff engineer. Is it now acceptable for a junior engineer to submit LLM-generated PRs without having tested them?

> It was thought impossible for a computer to reach the point of being able to beat a grandmaster at chess.

This is oft-cited, but it takes only some cursory research to show that it was never close to a universally held view.

SunshineTheCat 4 days ago | parent | next [-]

In the scenario I'm hypothesizing, why would anyone need to "check" or "test" its work? What chess players are checking to make sure Stockfish made the "right" move? What determines whether or not it's "right" is if Stockfish made it.

xmodem 4 days ago | parent | next [-]

Your post sent me down a rabbit hole reading about the history of computers playing chess. Notable to me is that AI advocates were claiming that a computer would be able to beat the best human chess players within 10 years as far back as the 1950s. It was so long ago they had to clarify they were talking about digital computers.

Today I learned that AI advocates being overly optimistic about its trajectory is actually not a new phenomenon - it's been happening for more than twice my lifetime.

asadotzler 4 days ago | parent | prev [-]

There are clear win conditions in chess. There are not for most software engineering tasks. If you don't get this, it's probably a safe bet that you're not an engineer.

SunshineTheCat 4 days ago | parent [-]

Right, which is why Deep Blue won in 1997, and now, years later, AI is moving on to far more complicated tasks, like engineering software.

The fact that you gave me the "you just don't understand, you're not a chess grandmaster" emotional response helps indicate that I'm pretty much right on target with this one.

FWIW I have been engineering software for over 15 years.

throw1235435 3 days ago | parent | prev [-]

It's hard to imagine now, but the code won't matter. We will have other methods of validating the product, I think, like before tech. There are many ways to validate something; validation is an easier problem than creation (which these AI models are somewhat solving right now).

All very demoralizing, but I can see the trend. In the end, all the "creative" parts of the job will disappear; AI gets to do the fun stuff.

We invented something that devalues human craft and contribution. If you weren't skilled in that and/or saw it as a barrier, you win and are excited by this (CEO types, sales/ideas people, influencers, etc.). If you put the hard yards in and did the work to build hard skills and ship product, you lose.

Be very clear: AI devalues intelligence and puts more value on what is still scarce (political capital, connections, nepotism, physical work, etc.). It mostly destroys meritocracy.