derbOac 6 days ago

My sense is that this reflects a broader problem with overfitting or sensitivity (which strike me as flip sides of the same coin). Ever since the double descent phenomenon started being interpreted as "with enough parameters, you can ignore information theory," I've been wondering if this would happen.
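
To make the double descent point concrete, here's a toy sketch (entirely my own construction; the random-feature setup and every constant in it are illustrative, not from the paper): ridgeless random-feature regression, where test error typically spikes near the interpolation threshold (features ≈ training samples) and then falls again as the width keeps growing.

    # Toy double descent: minimum-norm least squares on random features.
    import numpy as np

    rng = np.random.default_rng(0)
    n_train, n_test, d = 50, 500, 20

    def make_data(n):
        X = rng.normal(size=(n, d))
        y = X @ np.ones(d) + 0.5 * rng.normal(size=n)  # linear target + noise
        return X, y

    Xtr, ytr = make_data(n_train)
    Xte, yte = make_data(n_test)

    for n_feat in [5, 25, 50, 100, 500, 2000]:
        W = rng.normal(size=(d, n_feat)) / np.sqrt(d)  # fixed random projection
        phi_tr, phi_te = np.tanh(Xtr @ W), np.tanh(Xte @ W)
        beta = np.linalg.pinv(phi_tr) @ ytr            # minimum-norm solution
        tr = np.mean((phi_tr @ beta - ytr) ** 2)
        te = np.mean((phi_te @ beta - yte) ** 2)
        print(f"features={n_feat:5d}  train MSE={tr:.3f}  test MSE={te:.3f}")

Train error hits zero around 50 features, test error is usually worst right there, and then test error improves again with more width; that second descent is the part that gets read as "more parameters is free."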

This seems like just another in a long line of examples of deep learning architectures being highly sensitive to inputs you wouldn't expect them to be.
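
And the best-known instance of that sensitivity is the adversarial-example literature. A throwaway sketch (again my own toy, not the paper's setup) of the fast gradient sign method from Goodfellow et al.: take the gradient of the loss with respect to the input, step in its sign direction, and watch how small a step it takes to flip the prediction.

    # Toy FGSM: perturb the input along the sign of the loss gradient.
    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)
    model = torch.nn.Linear(10, 2)             # stand-in for any differentiable model
    x = torch.randn(1, 10, requires_grad=True)
    label = model(x).argmax(dim=1)             # use the clean prediction as the label

    loss = F.cross_entropy(model(x), label)
    loss.backward()                            # gradient of the loss w.r.t. the input

    print("clean prediction:", label.item())
    for eps in [0.1, 0.5, 1.0, 2.0]:
        x_adv = x + eps * x.grad.sign()        # gradient-sign step of size eps
        pred = model(x_adv).argmax(dim=1).item()
        print(f"eps={eps:.1f} -> prediction {pred}")

On a real image classifier the same trick works with perturbations far too small to see.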

dandelionv1bes 5 days ago

I completely agree with this. I'm not surprised by the fine-tuning examples at all, since we have a long history of seeing fine-tuning improve an LM's performance on a task relative to the base model.

I suppose it's interesting in this particular example, but naively I feel like we've seen this behaviour ever since BERT.
