DeepSeaTortoise 6 hours ago

Why shouldn't AI be able to sufficiently model all of this in the not-too-distant future? Why shouldn't it have sufficient access to new data and sensors to collect information on its own, or at least have a system that feeds it?

Not from a moral perspective, of course, but as a technical possibility. And the Overton window has already shifted so far that the moral aspect might soon align, too.

IMO there is an entirely different problem, one that is just about never going to go away but could easily be solved right now. And whichever AI company does so first instantly wipes out all competition:

Accept full responsibility and liability for any damages caused by their model making wrong decisions and failing to meet either a minimum quality standard or the agreed-upon quality.

You know, just like the human it'd replace.

rsynnott 6 hours ago

> Accept full responsibility and liability for any damages caused by their model making wrong decisions and failing to meet either a minimum quality standard or the agreed-upon quality.

That's not sufficient, at least from the likes of OpenAI, because realistically that liability would go away in bankruptcy. Companies aren't going to want to depend on it. People _might_ take, say, _Microsoft_ up on that, but Microsoft wouldn't offer it.

nicbou 4 hours ago

> Why shouldn't AI be able to sufficiently model all of this

I call it the banana bread problem.

To curate a list of the best cafés in your city, someone must eventually go out and try a few of them. A human being with taste honed by years of sensory experiences will have to order a coffee, sit down, appreciate the vibe, and taste the banana bread.

At some point, you need someone to go out in the world and feel things. A machine that cannot feel will never be a good curator of human experiences.