A model can't tell you why it made a decision.
What it can do is inspect the decision after the fact and invent a reason a human might have given for making it.