suddenlybananas 4 hours ago
Reminds me a fair bit of the BabyLM challenge. It would be good to give them a shout-out and see how this challenge differs. | ||||||||
sdpmas 4 hours ago
Hey, it's Samip (behind the Slowrun repo). Yeah, that's a fair point; we'll mention them in the blog. But there are a couple of major differences:

1. Our emphasis is on using more compute to get better data efficiency. This matters because there are lots of hacky changes that will get lower loss, but they don't hold up when compared to general methods that leverage a lot of compute. And you can already see how this emphasis on compute leads to different methods than BabyLM's!

2. The reasoning behind the repo has nothing to do with how much data a child sees, and our dataset isn't tailored toward that either. It's simple pretraining on a random subset of the internet. We know there are better training algorithms that get lower loss on that data, and we're finding them.