dekhn | a day ago
Interpretability is nice, I guess, but what if the underlying latent model for a real-world system is not human-understandable? If a system provides interpretability by default, does it fail to build a model for a system that can't be interpreted? Personally, I think the answer is that it still builds a model, but produces an interpretation that can't be understood by people.