mikert89 18 hours ago

I have 15 years of software engineering experience across some top companies. I truly believe that AI will far surpass human beings at coding, and more broadly at logic work. We are very close

anonzzzies 17 hours ago | parent | next [-]

HN will be the last place to admit it; people here seem to be holding out with the vague 'I tried it and it came up with crap'. Meanwhile, many of us are shipping software without touching (much) code anymore. I have written code for over 40 years, and this is nothing like no-code or any previous 'replacing programmers' fad. Judging from the people who cannot code with a gun to their heads but are still shipping apps, this is clearly different. It does not really matter whether anyone believes me or not: I am making more money than ever, with fewer people than ever, delivering more than ever.

We are very close.

(By the way, I like writing code and I still do, for fun.)

utopiah 16 hours ago | parent | next [-]

Both can be correct: you might be making a lot of money using the latest tools, while others who work on very different problems have tried the same tools and found them simply not good enough.

The ability to make money proves you found a good market, it doesn't prove that the new tools are useful to others.

lostmsu 4 hours ago | parent [-]

No, the comment is about "will", not "is". Of course there's no definitive proof of what will happen. But the writing is on the wall, and the letters are now so large that denying AI will take over coding, if not all intellectual endeavors, resembles the movie "Don't Look Up".

fc417fc802 17 hours ago | parent | prev [-]

> holding out with the vague 'I tried it and it came up with crap'

Isn't that a perfectly reasonable metric? The topic has been dominated by hype for at least the past 5 if not 10 years. So when you encounter the latest in a long line of "the future is here, the sky is falling" claims, where every past claim to date has been wrong, it's natural to try it for yourself, observe a poor result, and report back "nope, just more BS as usual".

If the hyped future ever does arrive, then anyone trying it for themselves will get a workable result, and it will be trivially easy to demonstrate that naysayers are full of shit. That does not currently appear to be the case.

danielbln 16 hours ago | parent | next [-]

What topic are you referring to? ChatGPT release was just over 3 years ago. 5 years ago we had basic non-instruct GPT-3.

fc417fc802 16 hours ago | parent [-]

Wasn't the transformer 2017? There's been constant AI hype since at least that far back, and it's only gotten worse.

If I claim once a month that Armageddon will happen next month, and after 20 years it finally does, are all of my past claims vindicated? Or was I spewing nonsense the entire time? What if my claim was the next big pandemic? The next 9.0 earthquake?

danielbln 16 hours ago | parent [-]

The transformer was 2017, and it had real implications for translation (which were in no way overstated), but it took GPT-2 and GPT-3 to kick things off in earnest, and the real hype machine started with ChatGPT.

What you are doing, however, is dismissing the outrageous progress in NLP, and by extension code generation, over the last few years just because people over-hype it.

People over-hyped the Internet in the early 2000s, yet here we are.

fc417fc802 16 hours ago | parent [-]

Well, I've been seeing an objectionable amount of what I consider to be hype since at least the transformer.

I never dismissed the actual verifiable progress that has occurred. I objected specifically to the hype. Are you sure you're arguing with what I actually said as opposed to some position that you've imagined that I hold?

> People over hyped the Internet in the early 2000s, yet here we are.

And? Did you not read the comment you are replying to? If I make wild predictions and they eventually pan out, does that vindicate me? Or was I just spewing nonsense, and things happened to work out?

"LLMs will replace developers any day now" is such a claim. If it happens a month from now, then you can say you were correct. If it doesn't, then it was just hype and everyone forgets about it. Rinse and repeat every few months and you have the current situation.

visarga 16 hours ago | parent | prev [-]

But the trend line is less ambiguous: models have gotten better year over year, much, much better.

fc417fc802 16 hours ago | parent [-]

I don't dispute that the situation is rapidly evolving. It is certainly possible that we could achieve AGI in the near future. It is also entirely possible that we might not. Claims that AGI is close, or that developers will soon be replaced entirely, are pure hype.

When someone says something to the effect of "LLMs are on the verge of replacing developers any day now" it is perfectly reasonable to respond "I tried it and it came up with crap". If we were actually near that point you wouldn't have gotten crap back when you tried it for yourself.

jerkstate 9 hours ago | parent [-]

There's a big difference between "I tried it and it produced crap" and "it will replace developers entirely any day now".

People who use this stuff every day know that those still saying "I tried it and it produced crap" just don't know how to use it correctly. Those developers WILL get replaced, by ones who know how to use the tool.

fc417fc802 6 hours ago | parent [-]

> Those developers WILL get replaced - by ones who know how to use the tool.

Now _that_ I would believe. But note how different "those who fail to adapt to this new tool will be replaced" is from "the vast majority will be replaced by this tool itself".

If someone had said that six (give or take) months ago, I would have dismissed it as hype. But at least a few decently well-documented AI-assisted projects by veteran developers have made the front page recently. Importantly, they've shown clear and undeniable results rather than handwaving and empty aspirations, and they've been up front about the new tool's shortcomings.

sekai 13 hours ago | parent | prev | next [-]

> I have 15 years of software engineering experience across some top companies. I truly believe that AI will far surpass human beings at coding, and more broadly at logic work. We are very close

Coding was never the hard part of software development.

pelorat 12 hours ago | parent [-]

Getting the architecture mostly right, so the system is easy to maintain and modify in the future, is IMO the hard part, but I find this is where AI shines. I have 20 years of professional SWE experience (plus 10 as a hobbyist), and most of my AI use is for architecture and scaffolding first, code second.

daxfohl 17 hours ago | parent | prev | next [-]

They already do. What they suck at is common sense. Unfortunately, good software requires both.

anonzzzies 17 hours ago | parent | next [-]

Most people also suck at common sense, including most programmers, hence most programmers do not write good software to begin with.

523-asf1 17 hours ago | parent [-]

Even a 20 year old Markov chain could produce this banality.

marktl 17 hours ago | parent | prev [-]

Or is it fortunate (for a short period, at least)?

523-asf1 17 hours ago | parent | prev | next [-]

Gotta make sure that the investors read this message in an Erdos thread.

AtlasBarfed 16 hours ago | parent | prev | next [-]

Is this comment written by AI?

user3939382 17 hours ago | parent | prev [-]

They can only code to specification, which is where even teams of humans get lost. Without much smarter architecture for AI (LLMs as they stand are a joke), that needle isn't going to move.

danielbln 12 hours ago | parent [-]

Real HN comment right here. "LLMs are a joke": maybe don't drink the anti-hype Kool-Aid, or you'll blind yourself to the capability space that's out there, even if it's not AGI or whatever.