Animats 6 hours ago

This is encouraging. The title is a bit much. "Potential points of attack for understanding what deep learning is really doing" would be more accurate but less attention-grabbing.

It might lead to understanding how to measure when a deep learning system is making stuff up or hallucinating. That would have a huge payoff. Until we get that, deep learning systems are limited to tasks where the consequences of outputting bullshit are low.

hodgehog11 6 hours ago | parent

> measure when a deep learning system is making stuff up or hallucinating

That's a great problem to solve! (I may be biased, since this is my primary research direction.) One popular approach is OOD detection, but that has always seemed ill-posed to me. My colleagues and I have been approaching this from a more fundamental direction using measures of model misspecification, but this is admittedly niche because it is very computationally expensive. It could still be a while before a breakthrough comes from any direction.
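For context on what "OOD detection" means in practice, here is a minimal sketch of the standard maximum-softmax-probability (MSP) baseline, which flags inputs whose predicted class distribution is unusually flat. This is just the textbook baseline, not the misspecification approach mentioned above; the logits are made-up illustrative values.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    # Maximum softmax probability: in-distribution inputs tend to
    # produce a confident (peaked) distribution, OOD inputs a flat one.
    # Lower score => more likely out-of-distribution.
    return softmax(logits).max(axis=-1)

# Hypothetical logits: one confident prediction, one near-uniform.
logits = np.array([[8.0, 0.5, 0.2],
                   [0.9, 1.0, 1.1]])
scores = msp_score(logits)
# The confident row scores near 1.0; the near-uniform row near 1/3.
```

The known weakness (and one reason the approach can feel ill-posed) is that overconfident networks routinely assign peaked softmax outputs to inputs far from the training distribution, so a high MSP score is not evidence the input is in-distribution.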

Animats 3 hours ago | parent

> Could still be a while before a breakthrough comes from any direction.

A solution would be valuable enough that getting significant funding to work on it is probably possible, especially with all the money being thrown at AI.