dsign 3 days ago

I have colleagues who want to plan every task for the software team for the next 12 months. They either assume such a thing is possible, or they want to do it anyway because management tells them to. The first would be an example of human fallibility, and the second an example of choosing the path of (perceived) least immediate self-harm after accounting for internal politics.

I doubt very much that we will ever build a machine that has perfect knowledge of the future, that can solve each and every “hard” reasoning problem, or that can complete every narrow task in a way we humans like. In other words, it’s not simply a matter of beating benchmarks.

In my mind at least, AGI’s definition is simple: anything that can replace any human employee. That construct is not merely a knowledge and reasoning machine, but also something that has a stake in its own work and that can be inserted into a shared responsibility graph. It has to be able to tell that senior dev “I know planning all the tasks one year in advance is busy-work you don’t want to do, but if you don’t, management will terminate me. So, you better do it, or I’ll hack your email and show everybody your porn subscriptions.”

JSR_FDED 3 days ago

Interesting, I hadn’t thought about it that way. But can a thing on the other end of an API call ever truly have a “stake”?

Jensson 3 days ago

> But can a thing on the other end of an API call ever truly have a “stake”?

That is the goal function they are trained to optimize. It is like dopamine and sex for humans: they will do anything to get it.
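One loose way to picture “the goal function they are trained to optimize”: a toy reinforcement-learning-style sketch in which the agent’s only “stake” is a scalar reward it is optimized to maximize. Everything here (the reward shape, the hill-climbing loop, the numbers) is made up for illustration and is not how any particular model is actually trained.

    import random

    def reward(action):
        # Toy stand-in for the trained objective: the one number the agent "cares" about.
        # Reward peaks when action == 7.0.
        return -(action - 7.0) ** 2

    def train(steps=1000, step_size=0.1):
        # Crude stochastic hill climbing toward higher reward,
        # a stand-in for gradient-based optimization.
        policy = 0.0
        for _ in range(steps):
            candidate = policy + random.uniform(-step_size, step_size)
            if reward(candidate) > reward(policy):
                policy = candidate
        return policy

    print(train())  # ends up near 7.0: the agent "wants" whatever scores highest

Whether maximizing a scalar like that counts as genuinely having a stake is exactly what the next reply is poking at.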

JSR_FDED 3 days ago

Yes, but having a stake also implies feeling the loss if it goes sideways…

Next you’re going to tell me that’s what loss functions are for :-)