rob74 2 days ago

One more point I noticed: since AI adoption is being promoted by companies, collaboration between developers could suffer. Why wait for a more experienced developer to have time to explain some aspect of the codebase to you (confessing your ignorance in the process), when AI can do it right away in a competent-sounding way (and most of the time it will probably be right, too)?

rogerthis 2 days ago | parent | next [-]

That already happens here. I'm an old dev who was the go-to guy for people with certain business and technical questions. Not anymore (which is part good, as I'm interrupted much less, and part bad, as sometimes they regard the wrong answer as truth).

cadamsdotcom 2 days ago | parent [-]

You could vibe yourself up an AMA tool where people submit questions, an agent goes to work on them, and then the question and agent answer sit in a queue waiting for you to review and weigh in.
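A toy sketch of what that could look like (all names here are hypothetical, and the "agent" is just any callable that turns a question into a draft answer):

```python
# Sketch of an AMA review queue: askers get an immediate agent draft,
# and the expert reviews the queue later, attaching corrections.
from dataclasses import dataclass

@dataclass
class QueuedQuestion:
    question: str
    agent_answer: str
    expert_note: str = ""   # filled in during review
    reviewed: bool = False

class AmaQueue:
    def __init__(self, agent):
        self.agent = agent   # any callable: question -> draft answer
        self.pending: list[QueuedQuestion] = []

    def submit(self, question: str) -> str:
        """Asker gets the agent's draft immediately; review happens later."""
        draft = self.agent(question)
        self.pending.append(QueuedQuestion(question, draft))
        return draft

    def review(self, index: int, expert_note: str) -> QueuedQuestion:
        """Expert confirms or corrects; notes could ground future answers."""
        item = self.pending[index]
        item.expert_note = expert_note
        item.reviewed = True
        return item

queue = AmaQueue(agent=lambda q: f"[agent draft for: {q}]")
queue.submit("Why does the billing job run twice on Mondays?")
queue.review(0, "Mostly right, but the cron overlap is intentional.")
```

The point of keeping the expert note alongside the agent answer is exactly the grounding idea mentioned downthread: each correction becomes reusable context.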

delecti 2 days ago | parent | next [-]

Coworkers are demonstrating that they value immediacy (and possibly some combination of embarrassment about their question, or social anxiety about asking someone) over accuracy. The proposed tool still requires rogerthis to review every question, loses immediacy versus an LLM, and might take even longer before rogerthis gets around to the queue.

djeastm a day ago | parent | prev | next [-]

I think this is what people are paid by the hour to do as AI trainers at Mercor, etc.

i_think_so 2 days ago | parent | prev | next [-]

I'm pretty sure this is the best idea I've ever heard of for this technology. You should build that tool and it should become mandatory throughout the tech world.

Can we get some enabling legislation? A UN resolution perhaps?

cadamsdotcom 2 days ago | parent [-]

Despite the snark I’ll engage.

The "get an immediate agent answer, then a human expert's fast follow" idea is, I think, great for many domains - imagine getting legal advice this way: the agent has already explained the basics, and the human expert just has to provide corrections - far less typing by humans.

Also, the corrections are now documented and could become future grounding for the agent.

Terr_ 2 days ago | parent | next [-]

I expect the time-limited expert will actually end up being tasked with more pain per request.

They won't just need to understand what problem the requestor has (or thinks they have) but also validate that the "immediate" feedback wasn't subtly horribly wrong.

dogleash 2 days ago | parent | prev | next [-]

> The “get an immediate agent answer then a human expert’s fast-follow” is I think a great idea for many domains

So, like what already happens when my boss asks Claude something and I have to pick up the pieces. Except now it's every topic he slops through, not just the ones we discuss later?

i_think_so 2 days ago | parent | prev [-]

Absolutely zero snark. I'm serious. (About the serious part; obviously not the joke part.)

> a great idea for many domains

I completely agree. This is a great idea. If you don't do something with it I'm stealing it. ;-)


b112 2 days ago | parent | prev | next [-]

I think you hit the nail on the head: it's probably right, most of the time. Or maybe 89% right, 91% of the time.

The more I use AI, the more I see mistakes. I've noticed others see these same mistakes and correct them, then when queried say "Oh, it gets it right all of the time!". No, having to point out "you got this wrong, re-write that last bit" isn't "getting it right". And it's not that the code is overtly wrong; it's subtle. Not using a function correctly, not passing something through when it should (and the default happens to just work -- during testing), and more. LLMs are great at subtle bugs.
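A contrived example of the "default happens to just work during testing" class of bug (the function names and the currency example are made up for illustration):

```python
# A formatter with a default that masks a missing argument.
def render_price(amount_cents: int, currency: str = "USD") -> str:
    symbol = {"USD": "$", "EUR": "€"}[currency]
    return f"{symbol}{amount_cents / 100:.2f}"

def invoice_line(item: str, amount_cents: int, currency: str = "USD") -> str:
    # BUG: currency is accepted but never forwarded; render_price
    # silently falls back to its own "USD" default.
    return f"{item}: {render_price(amount_cents)}"

# Tests written with USD data pass, so the bug goes unnoticed:
assert invoice_line("widget", 1999) == "widget: $19.99"
# In production, a EUR invoice still renders as dollars:
assert invoice_line("widget", 1999, currency="EUR") == "widget: $19.99"
```

Nothing crashes, no type checker complains, and every USD-only test is green - which is exactly why this kind of mistake survives review.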

So moving forward with this isolation you mention ensures that maybe the 'answer guy' in the company, the expert about a thing, never actually emerges. Maybe he doesn't even get to know his own code well enough to be the answer guy.

And so when an LLM writes a weird routine, instead of being able to say "No, re-write that last bit", you'll have to shrug and say "the code looks fine, right?", because you, and the answer guy, if he exists, don't know the code well enough to see the subtle mistakes.

skydhash 2 days ago | parent [-]

I noticed this when I was implementing a build pipeline for a project. My changes introduced a runtime bug (I had only tested that the thing was building), but then another developer broke the pipeline while fixing the runtime bug. While it was my failure to introduce the runtime bug, I don't think you can publish a fix for a bug without investigating why the bug appeared in the first place. Code is all about assumptions and contracts, and if something that was working breaks, that means something else has changed and you need to be aware of it.

user34283 2 days ago | parent | prev | next [-]

In a large codebase it's probably next to impossible to find people who fully understand the code and can explain it to you with unerring accuracy.

AI can get a pretty good picture, near instantly, whenever you need it.

It’s not just competent-sounding, it is reasonably competent, and certainly very useful for tasks like that.

homeonthemtn 2 days ago | parent | prev [-]

That's a valid point. Dev/team-member isolation is not a great environment to build in.

reaperducer 2 days ago | parent [-]

> Dev/team member isolation, not a great environment to build

Gone are the days of mandatory corporate "synergy" and after-work bar gatherings to promote "team building."

AI is showing people in the tech industry that they're just interchangeable cogs. AI is bringing the offshored Indian work environment to Silicon Valley.