bigstrat2003 19 hours ago

> Because it doesn’t have to be as accurate as a human to be a helpful tool.

I disagree. If something can't be as accurate as a (good) human, then it's useless to me. I'll just ask the human instead, because I know that the human is going to be worth listening to.

Esophagus4 18 hours ago | parent [-]

Autopilot in airplanes is a good example to disprove that.

Good in most conditions. Not as good as a human. Which is why we still have skilled pilots flying planes, assisted by autopilot.

We don’t say “it’s not as good as a human, so stuff it.”

We say, “it’s great in most conditions. And humans are trained how to leverage it effectively and trained to fly when it cannot be used.”

latchup 9 hours ago | parent | next [-]

That's a downright insane comparison. The whole problem with generative AI is how extremely unreliable it is. You cannot really trust it with anything because irrespective of its average performance, it has absolutely zero guarantees on its worst-case behavior.

Aviation autopilot systems are the complete opposite. They are arguably the most reliable computer-based systems ever created. While they cannot fly a plane alone, pilots can trust them blindly to do specific, known tasks consistently well in over 99.99999% of cases, and to provide clear diagnostics when they cannot.

If gen AI agents were this consistently good at anything, this discussion would not be happening.

sjsdaiuasgdia 17 hours ago | parent | prev [-]

The autopilots in aircraft have predictable behaviors based on the data and inputs available to them.

This can still be problematic! If sensors are feeding the autopilot bad data, the autopilot may do the wrong thing for a situation. Likewise, if the pilot(s) do not understand the autopilot's behaviors, they may misuse the autopilot, or take actions that interfere with the autopilot's operation.

Generative AI has unpredictable results. You cannot make confident statements like "if inputs X, Y, and Z are at these values, the system will always produce this set of outputs".
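
To make that contrast concrete, here's a minimal Python sketch (toy code, not real avionics or a real model; hold_altitude and toy_generate are invented names for illustration): a control law is a pure function of its inputs, while a sampled generative model is not.

    import random

    # Toy altitude-hold control law: a pure function of its inputs.
    # The same sensor readings always yield the same command.
    def hold_altitude(target_ft: float, actual_ft: float, gain: float = 0.01) -> float:
        error = target_ft - actual_ft
        return max(-1.0, min(1.0, gain * error))  # clamp to actuator limits

    # Toy stand-in for a generative model: it samples from a distribution,
    # so the same prompt can produce different outputs on different runs.
    def toy_generate(prompt: str) -> str:
        vocab = ["climb", "descend", "hold", "turn left", "turn right"]
        return random.choice(vocab)

    # The control law is reproducible...
    assert hold_altitude(10_000, 9_800) == hold_altitude(10_000, 9_800)

    # ...the sampler is not: repeated calls with an identical prompt can disagree.
    print({toy_generate("what should the aircraft do?") for _ in range(10)})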

In the very short timeline of reacting to a critical mid-flight situation, confidence in the behavior of the systems is critical. A lot of plane crashes have "the pilot didn't understand what the automation was doing" as a significant contributing factor. We get enough of that from lack of training, differences between aircraft manufacturers, and plain old human fallibility. We don't need to introduce a randomized source of opportunities for the pilots to not understand what the automation is doing.

Esophagus4 15 hours ago | parent [-]

But now it seems like the argument has shifted.

It started out as, "AI can make more errors than a human. Therefore, it is not useful to humans." Which I disagreed with.

But now it seems like the argument is, "AI is not useful to humans because its output is non-deterministic." Is that an accurate representation of what you're saying?

hunterpayne 13 hours ago | parent | next [-]

Because in one situation we are talking about augmentation, and in the other about replacement.

sjsdaiuasgdia 13 hours ago | parent | prev [-]

My problem with generative AI is that it makes different errors than humans tend to make. And these errors can be harder to predict and detect than the kinds of errors humans tend to make, because the underlying error source is non-determinism.

Remember "garbage in, garbage out"? We expect technology systems to produce predictable outputs for a given input. With generative AI, you can get garbage output regardless of the input quality.
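
As a rough illustration (again a toy sketch, not any real model or API; parse_altitude and toy_model are made-up names): a conventional system fails loudly and predictably on garbage input, while a sampled model can return a wrong but plausible-looking answer even when the input is clean.

    import random

    # Conventional system: output quality tracks input quality.
    # Garbage input fails loudly with a clear error.
    def parse_altitude(raw: str) -> int:
        value = int(raw)  # raises ValueError on garbage input
        if not 0 <= value <= 60_000:
            raise ValueError(f"altitude out of range: {value}")
        return value

    # Toy stand-in for a generative model: even a clean, unambiguous
    # prompt can yield a wrong answer, because the output is sampled.
    def toy_model(prompt: str) -> str:
        answers = ["35,000 ft", "35,000 ft", "35,000 ft", "53,000 ft"]  # 1-in-4 wrong
        return random.choice(answers)

    print(parse_altitude("35000"))    # deterministic: always 35000
    # parse_altitude("cruise-ish")    # deterministic failure: raises ValueError

    prompt = "What is the assigned cruise altitude? (it is 35,000 ft)"
    print([toy_model(prompt) for _ in range(8)])  # occasionally the wrong figure, silently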