mrsilencedogood 3 days ago

I think vibe coding will get good enough that things like vercel's "0 to POC" thing are going to stick around.

I think AI-powered IDE features will stick around. One notable feature I've seen that's head and shoulders above any non-AI competitor is "very very fuzzy search". I can ask the AI: "I think there's something in the code that inserts MyMessage into `my.kafka.topic`. But the gosh darn codebase is so convoluted that I literally can't find it. I suspect "my", "kafka", and "topic" all get constructed somewhere to produce that topic name, because it doesn't show up in the code as a literal. I also think there's so much indirection between the producer setup and where the "event" actually first gets emitted that MyMessage might not look much like the actual origination point. Where's the initial origin point?"

Previously, that was "ctrl-shift-F my.kafka.topic" and then ask a staff engineer and hope to God they know off-hand, and if they don't, go read the entire codebase/framework for 16 hours straight until you figure it out.
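
To make the failure mode concrete, here's a minimal hypothetical sketch (all names invented, not from any real codebase) of the kind of indirection I mean: the topic string is assembled from pieces and the producer is buried behind a generic wrapper, so there's no "my.kafka.topic" literal and no obvious MyMessage send site to grep for.

    // Hypothetical example: the topic name is assembled at runtime,
    // so grepping for the literal "my.kafka.topic" finds nothing.
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    final class TopicNames {
        private static final String NAMESPACE = "my";
        private static final String SYSTEM = "kafka";

        static String topicFor(String entity) {
            // Produces e.g. "my.kafka.topic" when entity == "topic"
            return String.join(".", NAMESPACE, SYSTEM, entity);
        }
    }

    // The producer is hidden behind a generic event bus, so the actual
    // origin of MyMessage is several layers away from any Kafka code.
    final class EventBus<T> {
        private final KafkaProducer<String, T> producer;
        private final String topic;

        EventBus(KafkaProducer<String, T> producer, String entity) {
            this.producer = producer;
            this.topic = TopicNames.topicFor(entity);
        }

        void publish(String key, T event) {
            producer.send(new ProducerRecord<>(topic, key, event));
        }
    }

A model with the whole repo in context has a fair shot at connecting TopicNames.topicFor to EventBus and back to wherever publish actually gets called, which is exactly the chain that's painful to trace by hand.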

Now, LLMs have a decent shot at figuring it out.

I also think things like "is this chest Xray cancer?" are going to be hugely impactful.

But anyone expecting Gen AI to do anything like replacing a real software engineer, a quality customer support rep, etc., is going to be disappointed.

I also think AI will generally eviscerate the bottoms of industries (expect generic gacha girl-collection games to get a lot of AI art) but also leave people valuing the tops of industries a lot more (lovingly crafted indie games, etc). So now this compute-expensive AI is targeting the already low-margin bottoms of industries. Probably not what VCs want. They want to replace software engineers, not make a slop gacha game cost 1/10th of its already low cost.

kgwgk 3 days ago | parent | next [-]

> I also think things like "is this chest Xray cancer?" are going to be hugely impactful.

Yes, but https://radiologybusiness.com/topics/artificial-intelligence...

Nine years ago, computer scientist Geoffrey Hinton famously said, "People should stop training radiologists now," believing it was "completely obvious" that AI would outperform human radiologists within five years.

3 days ago | parent | next [-]
[deleted]
borroka 3 days ago | parent | prev | next [-]

One problem is considering a solution effective only if, at launch, it completely solves the problem: in the case of AI and LLMs, coding an entire application without any human intervention, retiring radiologists, or driving autonomously through the five boroughs of New York City.

If we expect a technology to completely solve a problem as soon as it is launched, only a few in history could be considered a success. Can you imagine what it would be like if the first radios were considered a failure because you couldn't listen to music?

npilk 2 days ago | parent [-]

Agree. And then people anchor on what the technology was like when it launched, and don't notice or account for the additional improvements and iterations that happen over time.

E.g., I was considering a 3D printer, but I had heard they were expensive, messy, complicated, hard to get prints to come out right, etc. It turned out I was anchored on ~2016-era technology. I got a simple modern printer for a few hundred dollars and it (mostly) just works.

HDThoreaun 3 days ago | parent | prev | next [-]

AI does outperform radiologists right now. The issues are liability and the radiologist lobby (which you linked to) throwing a fit.

Eisenstein 3 days ago | parent | prev [-]

If you want to go back in history you will find people confidently claiming things in either direction of what eventually happened.

chaboud 3 days ago | parent | prev | next [-]

I've been quite happy thinking of agentic IDE operation as akin to a highly energetic intern. It's prone to spiraling off into the weeds, makes silly mistakes, occasionally mangles whole repos (commit early and often), and needs very crisp instruction and guidance. That said, I get my answers back in minutes/hours rather than days/weeks. For the cost, for things that would otherwise be delivered by an intern or college-hire SDE, it's a pretty solid value vs. paying a salary and keeping a desk available.

What it isn't, at present, is an investment in the future. I'm not making these virtual interns better coders, more thoughtful about architecture, or more autonomous over time. Those aspects of developing new hires are vastly more valuable than the code output I'm getting in my IDE. So I'm hoping we land in a place where we foster both, rather than assuming someone else will do the hard work of closing the agentic coding gap and growing maturity. Pulling an Indiana Jones-style swap could be really destructive if we pull the human pipeline out of the system too early.

Just paying attention to near-term savings runs a real risk of falling into that trap.

mrsilencedogood a day ago | parent [-]

"intern or college-hire"

It's well known that these fresh employees are not going to contribute to a team's velocity for at least a year. They're investments. I've seen levelling docs specifically call this out.

"It's prone to spiraling off into the weeds, makes silly mistakes, occasionally mangles whole repos (commit early, and often), and needs very crisp instruction and guidance"

This describes a team of juniors. If it's describing an entire team, then everyone above mid-level needs to be fired.

I will say that I think "the bottom of the market getting eviscerated" is going to apply to software devs too. There is now very little point in hiring someone whose best output is already slop. The main people who need to be afraid of AI in the next 5 years are probably offshore and near-shore people, and perma-juniors who have done the "1 year of experience 10 times" thing.

chaboud 9 hours ago | parent [-]

We hire folks who make slop so we can form them into folks who turn that energy into elevated output. However, I expect that the junior engineer crop will teach us a thing or two about AI-assisted coding techniques, and we'll owe it to them to level up their system design and abstraction skills.

strange_quark 3 days ago | parent | prev [-]

Agree with this: "find this thing in my spaghetti codebase" is far and away the best use of LLMs I've seen. Fill in the rest of this switch statement, populate this struct from this database call, etc. also work pretty well (a rough sketch of the kind of thing I mean is below). I would love it if I could get a small model that ran locally and could pull off those two tricks. Explaining code works sometimes, but even the biggest models are still prone to getting confused and/or making stuff up that isn't there. I don't like the agentic features at all and expect them to mostly die because they're expensive and, IMO, only provide the illusion of productivity.
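
For the "populate this struct from this database call" case, a small hypothetical sketch (plain JDBC, invented names) of the row-to-object boilerplate a model will reliably finish once it has seen the type and the query:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    // Hypothetical record to populate from a query result.
    record Customer(long id, String name, String email, boolean active) {}

    class CustomerDao {
        private final Connection conn;

        CustomerDao(Connection conn) {
            this.conn = conn;
        }

        // The mechanical column-to-field mapping an LLM fills in well:
        // same columns, same order, no judgment calls required.
        Customer findById(long id) throws SQLException {
            String sql = "SELECT id, name, email, active FROM customers WHERE id = ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setLong(1, id);
                try (ResultSet rs = ps.executeQuery()) {
                    if (!rs.next()) return null;
                    return new Customer(
                            rs.getLong("id"),
                            rs.getString("name"),
                            rs.getString("email"),
                            rs.getBoolean("active"));
                }
            }
        }
    }

It's pure pattern completion against code that's already on screen, which is exactly the regime where a small local model would plausibly be good enough.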