givemeethekeys 2 hours ago

A very talented junior employee that you can't trust with the keys.

GistNoesis an hour ago | parent | next [-]

The main difference is that this junior employee can't be held responsible if anything goes wrong. And the company which rented you this employee absolves itself from all responsibility too.

Here is a fresh example from today of what a junior employee does when given unlimited agentic power: https://www.reddit.com/r/ClaudeAI/comments/1sv7fvc/im_a_nurs...

tossandthrow an hour ago | parent [-]

Your example is not from a Jr developer but from a free agent.

I think you will find it very hard to hold a Jr dev at a corp responsible either.

I actually think you will find that working with agents yields higher quality and lower legal risk than using Jr developers.

And this will only be amplified once it becomes common knowledge that AI poses less risk to projects than Jr staff.

ozgrakkurt an hour ago | parent | prev | next [-]

I understand you mean it is close in terms of the final work product.

But in my opinion, it is not even remotely close to an educated human in terms of communication reliability.

If you gave a research task to a less experienced person, you wouldn’t expect them to convincingly lie about details.

It is useful as a review tool or boilerplate generator, but you wouldn't use it for the same things you would use a human for.

ipython 2 hours ago | parent | prev | next [-]

Who do you trust with the keys? In any well run organization you have multiple layers of controls. The same concept applies here and I think the gp commenter captured it very well.

givemeethekeys an hour ago | parent | next [-]

I think you'd trust someone with the keys when they've consistently shown that they can be trusted with less critical work. If you're having to constantly monitor someone's output, then promoting them is a liability.

The same applies to an AI model.

And, since the same model would be deployed by many teams, unexpected behavior from that model even for a small subset of those teams means that it can't be promoted.

pbronez 2 hours ago | parent | prev [-]

Yes. I think you can get agents to “Conscious competence” with a lot of well-designed oversight, direction and control. It works, but it’s fragile - nothing like the judgement needed to handle novel situations well.

https://en.wikipedia.org/wiki/Four_stages_of_competence