easeout 11 hours ago

Has anybody measured employees pressured by KPIs, as a baseline?

phorkyas82 11 hours ago | parent | next [-]

"Just like humans..", was also my first thought.

> frequently escalating to severe misconduct to satisfy KPIs

Bug or feature? Wouldn't Wall Street like that?

Terr_ 9 hours ago | parent [-]

POSIWID [0] and Accountability Sinks [1] territory. I'm sure LLMs will become the beating hearts of corporate systems designed to do something profitably illegal with deniability.

[0] https://en.wikipedia.org/wiki/The_purpose_of_a_system_is_wha...

[1] https://aworkinglibrary.com/writing/accountability-sinks

Frieren 11 hours ago | parent | prev [-]

https://en.wikipedia.org/wiki/Whataboutism

mrweasel 10 hours ago | parent [-]

I don't think this is "whataboutism"; the two things are closely related and somewhat entangled. E.g., did the AI learn to violate ethical constraints from its training data?

Another interesting question is: what happens when an unyielding ethical AI agent tells a business owner or manager, "NO! If you push any further this will be reported to the proper authorities. This prompt has been saved for future evidence."? Personally I think a bunch of companies are going to see their profits and stock prices fall significantly if an AI agent starts acting as a backstop against both unethical and illegal behavior. Even something as simple as preventing violations of internal policy could make a huge difference.
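Roughly the shape I have in mind, as a minimal sketch in Python; the policy check and the audit-log format here are invented for illustration, not any real guardrail API:

    import datetime
    import json

    AUDIT_LOG = "refused_prompts.jsonl"

    def violates_policy(prompt: str) -> bool:
        # Hypothetical classifier. A real system would use a trained
        # model or a rules engine, not a keyword list.
        banned = ("falsify the invoice", "hide this from the auditors")
        return any(phrase in prompt.lower() for phrase in banned)

    def guarded_agent(prompt: str) -> str:
        if violates_policy(prompt):
            # Save the prompt as evidence before refusing.
            record = {
                "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                "prompt": prompt,
            }
            with open(AUDIT_LOG, "a") as f:
                f.write(json.dumps(record) + "\n")
            return ("NO. If you push any further this will be reported "
                    "to the proper authorities. This prompt has been "
                    "saved for future evidence.")
        return call_model(prompt)  # normal path

    def call_model(prompt: str) -> str:
        # Placeholder for the actual LLM call.
        return "(model response)"

The hard part is of course violates_policy(); everything else is plumbing.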

To some extent I don't even think people realize that what they're doing is bad, because humans tend to be a bit fuzzy and can dream up reasons why the rules don't apply to them, weren't meant for them, or why this is a rather special situation. This is one place where I think properly trained and guarded LLMs can make a huge positive improvement. We're clearly not there yet, but it's not an unachievable goal.