benchmarkist 7 months ago

Interpretability research is basically a projection of the original function implemented by the neural network onto a subspace of "explanatory" functions that people consider more understandable. You're right that the language used to sell the research is completely nonsensical, because that abstract process has nothing to do with anything causal.
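
To make the "projection" framing concrete, here is a minimal sketch (my own toy setup, not any particular interpretability method): a fixed random MLP stands in for the original function, and least squares fits the best affine approximation to it, which is exactly the orthogonal projection onto the subspace of affine functions under the empirical L2 inner product. The residual variance is what the "explanation" fails to capture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "network": a fixed random 2-layer MLP standing in for the
# original function f implemented by the model (hypothetical setup).
W1 = rng.normal(size=(32, 8))
W2 = rng.normal(size=(1, 32))

def f(X):
    return np.tanh(X @ W1.T) @ W2.T  # shape (n, 1)

# Sample inputs and evaluate the network on them.
X = rng.normal(size=(5000, 8))
y = f(X)

# "Explanatory" subspace: affine functions g(x) = a @ x + b.
# Least squares computes the orthogonal projection of f onto this
# subspace with respect to the empirical L2 inner product.
A = np.hstack([X, np.ones((X.shape[0], 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
residual = y - A @ coef

# Fraction of variance the "explanation" fails to capture.
print("unexplained variance:", float(residual.var() / y.var()))
```

Nothing in this procedure inspects mechanisms or interventions; it just finds the nearest point in a family of functions someone deemed understandable, which is the point of the comment above.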

HeatrayEnjoyer 7 months ago

All code is causal.

benchmarkist 7 months ago

Which makes it entirely irrelevant as a descriptive term.