sjsdaiuasgdia | 17 hours ago
The autopilots in aircraft have predictable behaviors based on the data and inputs available to them. This can still be problematic! If sensors are feeding the autopilot bad data, the autopilot may do the wrong thing for the situation. Likewise, if the pilots do not understand the autopilot's behaviors, they may misuse it, or take actions that interfere with its operation.

Generative AI has unpredictable results. You cannot make confident statements like "if inputs X, Y, and Z are at these values, the system will always produce this set of outputs." In the very short window available to react to a critical mid-flight situation, confidence in how the systems will behave is essential.

A lot of plane crashes have "the pilot didn't understand what the automation was doing" as a significant contributing factor. We get enough of that from lack of training, differences between aircraft manufacturers, and plain old human fallibility. We don't need to add a randomized source of new opportunities for pilots to misunderstand what the automation is doing.
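As a rough illustration of that determinism point (a toy sketch, not from the comment; the function names and numbers are made up): a conventional control law maps identical inputs to identical outputs every time, while a sampled generative model can return different outputs for the exact same prompt.

    import random

    def autopilot_pitch_command(altitude_error_ft):
        # Deterministic control law (toy proportional controller):
        # the same altitude error always yields the same command.
        gain = 0.01
        return max(-5.0, min(5.0, gain * altitude_error_ft))

    def generative_suggestion(prompt, temperature=0.8):
        # Stand-in for a sampled model output (prompt ignored in this toy):
        # identical inputs can yield different responses because the
        # choice is randomized.
        candidates = ["climb", "hold altitude", "descend"]
        weights = [1.0, 1.0 + temperature, 1.0]
        return random.choices(candidates, weights=weights, k=1)[0]

    # Same input, same output, every time.
    assert autopilot_pitch_command(200.0) == autopilot_pitch_command(200.0)

    # Same input, potentially different outputs across calls.
    print({generative_suggestion("altitude low") for _ in range(10)})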
Esophagus4 | 15 hours ago | parent
But now it seems like the argument has shifted. It started out as, "AI can make more errors than a human; therefore, it is not useful to humans," which I disagreed with. Now it seems to be, "AI is not useful to humans because its output is non-deterministic." Is that an accurate representation of what you're saying?