anon-3988 5 days ago
What prevents Git from simply working better with large files?
AceJohnny2 5 days ago
git works just fine with large files. The problem is that when you clone a repo, or pull, by default you get everything, including large files deep in the history that you probably don't care about anymore. That was actually an initial selling point of git: you have the full history locally, so you can work from the plane/train/deserted island just fine. But it also means those large files persist in the repo forever.

So people look for options to segregate large files out so that they only get downloaded on demand (aka "lazily"). All the existing options (submodules, LFS, partial clones) are different answers to the question "how do we make certain files download only on demand?"
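For concreteness, a minimal sketch of two of those "lazy" mechanisms using flags that exist in current git and git-lfs; the repo URL and file pattern below are placeholders:

  # Partial clone: fetch commits and trees up front, but defer blob
  # (file content) downloads until a checkout or diff actually needs them.
  git clone --filter=blob:none https://example.com/big-repo.git

  # Variant: only defer blobs larger than 1 MiB; small files come down eagerly.
  git clone --filter=blob:limit=1m https://example.com/big-repo.git

  # Git LFS: commit small pointer files in place of the real content,
  # which is stored out-of-band and fetched lazily on checkout.
  git lfs install
  git lfs track "*.psd"        # placeholder pattern
  git add .gitattributes

(Submodules answer it differently again: the big files live in a separate repo that you only init/update when you actually need it.)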