jbmilgrom 8 hours ago
> Perhaps the key to transparent/interpretable ML is to just replace the ML model with AI-coded traditional software and decision trees. This way it's still fully autonomously trained but you can easily look at the code to see what is going on.

For certain problems I think that's completely right. We are still not going to want that, of course, for classic ML domains like vision and now coding, etc. But for those domains where a software substrate is appropriate, software has a huge interpretability and operability advantage over ML.
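A minimal sketch of the "look at the code" property being described, assuming a plain scikit-learn decision tree as the stand-in for the learned component (the dataset and depth are arbitrary, chosen only for illustration):

```python
# Hypothetical sketch: fit a small decision tree, then dump the learned rules
# as readable text. The fitted model is just nested if/else tests over input
# features, so the whole "program" can be read and audited directly.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

print(export_text(tree, feature_names=load_iris().feature_names))
```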
fooker 5 hours ago
> We are still not going to want that, of course, for classic ML domains like vision

It could make sense to decompose one large opaque model into code with decision trees calling out to smaller models, each with a very specific purpose. This is more or less science fiction right now, 'mixture of experts' notwithstanding. You could potentially get a Turing award by making this work for real ;)
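A toy sketch of the decomposition being proposed, under the assumption that the routing logic is ordinary hand-readable code and only the leaf calls are learned; the names (`is_text_heavy`, `ocr_model`, `scene_model`) are invented placeholders, not real libraries:

```python
# Hypothetical sketch: readable control flow dispatches to small,
# special-purpose models instead of one opaque end-to-end model.
from dataclasses import dataclass


@dataclass
class Image:
    pixels: bytes


def is_text_heavy(img: Image) -> bool:
    # Placeholder heuristic; a real system might use a tiny detector model here.
    return len(img.pixels) % 2 == 0


def ocr_model(img: Image) -> str:
    return "<text read by a small OCR model>"


def scene_model(img: Image) -> str:
    return "<label from a small scene classifier>"


def describe(img: Image) -> str:
    # The branching is plain code you can read and audit;
    # only the leaf calls are learned components.
    if is_text_heavy(img):
        return ocr_model(img)
    return scene_model(img)


print(describe(Image(pixels=b"demo")))
```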