ekjhgkejhgk, 5 hours ago:
I used to be in physics, but on the theory side, not experiment. At work I have experience with decision trees in a different field. I've always thought the claim that decision trees are "explainable" is very overstated: the moment you go past a couple of levels of depth, the tree becomes an uninterpretable jungle. I've actually done the exercise of inspecting how a depth-15 decision tree makes decisions, and I found it impossible to interpret anything. In a neural network you can likewise follow the successive matrix multiplications and ReLUs through the layers, but you still end up not knowing how the decision is made. Thoughts?
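(A quick sketch of why depth kills interpretability: each root-to-leaf path is one "rule", and the number of such rules can grow exponentially with depth. This is a minimal illustration using scikit-learn on synthetic data, not the commenter's actual experiment.)

```python
# Sketch: compare how many leaf rules a shallow vs. a deep tree produces.
# Assumes scikit-learn is available; the data and depths are illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))
# A mildly nonlinear target: axis-aligned splits need many pieces to fit it.
y = (X[:, 0] * X[:, 1] + X[:, 2] > 0).astype(int)

shallow = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
deep = DecisionTreeClassifier(max_depth=15, random_state=0).fit(X, y)

# Each leaf corresponds to one conjunction of up to `max_depth` conditions
# that a human would have to hold in mind to follow a single decision.
for name, clf in [("depth 3", shallow), ("depth 15", deep)]:
    print(f"{name}: {clf.get_n_leaves()} leaf rules, "
          f"{clf.tree_.node_count} nodes")
```

A depth-3 tree has at most 8 leaves and reads like a checklist; the depth-15 tree typically grows hundreds of leaves on this data, which is the "jungle" described above.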
lokimedes, 5 hours ago (reply):
I completely agree, as you may infer from my comment. The moment multivariate models become relevant, we effectively trade explainability for discrimination power. If your decision tree/model needs to be large enough to warrant SGD or similar optimization techniques, it is pretty much a fantasy to ever analyze it formally. My second job after physics was AI for defense, and boy, is the dream of explainable AI alive there. Honestly, anyone who "needs" AI to be understandable by dissection suffers from control issues :)