lokimedes 5 hours ago
I completely agree, as you may infer from my comment. The second multivariate models become relevant, we effectively trade explainability for discriminative power. If your decision tree/model needs to be large enough to warrant SGD or similar optimization techniques, it is pretty much a fantasy to ever analyze it formally. My second job after physics was AI for defense, and boy is the dream of explainable AI alive there. Honestly, anyone who "needs" AI to be understandable by dissection suffers from control issues :)