benchmarkist 4 days ago

Interpretability research is basically a projection of the original function implemented by the neural network onto a sub-space of "explanatory" functions that people consider more understandable. You're right that the words they use to sell the research are completely nonsensical, because this abstraction process has nothing to do with anything causal.
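A minimal sketch of what I mean (toy example, not from any actual interpretability paper): take the opaque function a network computes, and "interpret" it by projecting it onto a simpler, human-readable function class under an empirical L2 norm. Here the explanatory class is affine functions, so the projection is just least squares; the toy network, sample sizes, and names are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "network": a fixed random two-layer MLP, standing in
# for the opaque function learned during training.
W1 = rng.normal(size=(10, 32))
W2 = rng.normal(size=(32, 1))
def network(x):
    return np.tanh(x @ W1) @ W2

# Sample the input distribution and evaluate the network on it.
X = rng.normal(size=(5000, 10))
y = network(X)

# Projection onto the "explanatory" sub-space of affine functions:
# ordinary least squares is exactly the L2 projection onto that class.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef

# How much of the original function the "explanation" captures.
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"variance explained by the linear 'interpretation': {r2:.2f}")
```

Whatever variance the projection captures, nothing in the procedure is causal; it's approximation in a norm, full stop.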

HeatrayEnjoyer 4 days ago

All code is causal.

benchmarkist 4 days ago

Which makes it entirely irrelevant as a descriptive term.