yobbo 3 days ago

> For this to be true they would have to throw away labeled training data.

That's how validation works.
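(As a minimal sketch of what "throwing away" labeled data for validation looks like in practice, assuming scikit-learn; the dataset and variable names here are purely illustrative:)

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression

    # A stand-in labeled dataset.
    X, y = make_classification(n_samples=1000, random_state=0)

    # Hold out 20% of the labeled examples; the model never trains on them.
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # The validation score is only meaningful because the held-out labels
    # were kept out of training.
    print(model.score(X_val, y_val))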

jfengel 3 days ago | parent [-]

Is there a reason not to fold the validation data into your next round of training data? Or is it more efficient to keep reusing the validation set and just gather more training data instead?

parineum 3 days ago | parent [-]

You'd have to recreate your validation set if you trained your model on it every iteration, and then the sets wouldn't be consistent enough to show any trends.
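(A rough sketch of the point, again assuming scikit-learn and illustrative names: the same held-out set is scored after every pass, so the numbers are comparable and show a trend. If you folded the validation data back into training after each pass, you'd need a new split each time, and scores from different splits aren't directly comparable.)

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import SGDClassifier

    X, y = make_classification(n_samples=2000, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

    model = SGDClassifier(random_state=0)
    for epoch in range(5):
        # Train only on the training split; the validation split stays fixed.
        model.partial_fit(X_train, y_train, classes=[0, 1])
        print(f"epoch {epoch}: val accuracy = {model.score(X_val, y_val):.3f}")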

jfengel 2 days ago | parent [-]

I'd have thought that if you kept the same validation set you'd risk overfitting to it.

Clearly that does make it hard to measure. I'd think you'd want "equivalent" validation (like changing the SATs every year), though I imagine that's not really a meaningful concept.
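(One way the "new SATs every year" idea might look in code, as a hedged sketch with made-up names and sizes: keep a large held-out pool and draw a fresh validation sample from it each round, so no single fixed set gets overfit to, but the samples all come from the same distribution and stay roughly comparable.)

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=5000, random_state=0)

    # Training split plus a large held-out pool that never touches training.
    X_train, X_pool, y_train, y_pool = train_test_split(X, y, test_size=0.4, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    rng = np.random.default_rng(0)
    for round_ in range(3):
        # A fresh "exam" each round, drawn from the same held-out pool.
        idx = rng.choice(len(X_pool), size=500, replace=False)
        print(f"round {round_}: val accuracy = {model.score(X_pool[idx], y_pool[idx]):.3f}")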