| ▲ | HexDecOctBin 5 days ago |
| So this filter argument will reduce the repo size when cloning, but how does one reduce the repo size after a long stint of local commits that change binary assets? Delete the repo and clone again?
|
| ▲ | viraptor 5 days ago | parent | next [-] |
| It's really not clear which behaviour you want, though. For example, if you do a lot of bisecting you probably want to keep everything downloaded locally; if you're just working on new things, you may want to prune the old blobs. That information only exists in your head.
| |
| ▲ | HexDecOctBin 5 days ago | parent [-] | The ideal behaviour is to have a filter on push too, meaning that files above a certain size would be deleted from non-latest history after the push.
| ▲ | viraptor 5 days ago | parent [-] | That would prevent old revisions from working... Why would that be ideal?
| ▲ | HexDecOctBin 5 days ago | parent [-] | Why would it stop old revisions from working? What would be the difference between cloning with the filter on and deleting the local blobs from old commits?
| ▲ | viraptor 5 days ago | parent | next [-] | I thought you wanted to prune the destination on push. Pruning locally may work for some. It would be extremely annoying for me, because I'm way more likely to dig into old commits than to push. And I really don't want existing blobs removed without explicit consent.
| ▲ | yencabulator 4 days ago | parent | prev [-] | Cloning with a filter does stop old revisions from working offline. A checkout/merge/backfill has to be run to undo the filtering. At some point, what you're asking for may become a configuration option, likely tied to git gc. It's not there yet because it's all experimental and nobody is paying developers to work on the feature.
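|
| A minimal sketch of what "undoing the filtering" looks like in practice, assuming a filtered clone, a placeholder commit id, and a Git recent enough to ship the experimental backfill command mentioned above:
|
|     # checking out an old revision fetches the filtered-out blobs on
|     # demand from the promisor remote (so it needs network access)
|     git checkout <old-commit>
|
|     # or prefetch the missing historical blobs in bulk up front
|     # (experimental, only in very recent Git)
|     git backfill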
|
| ▲ | firesteelrain 5 days ago | parent | prev | next [-] |
| Yes, once it gets bad enough your only option is to abandon the repo and move the source code only. Your old repo keeps the history from before the abandonment.
| |
| ▲ | bobmcnamara 5 days ago | parent [-] | There is a trick for this, where you can set up a new repo to consider another repo as the pre-initial-commit source of history (see the sketch after this sub-thread).
| ▲ | firesteelrain 5 days ago | parent [-] | Does that cause all the binaries in LFS to come over too?
| ▲ | bobmcnamara 5 days ago | parent [-] | I can't imagine it would if you were moving both git and LFS. The old repo will still point to whatever the LFS config was at that time; if that service is still up, it should continue to work.
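|
| A quick way to check which LFS endpoint a given clone points at (assuming git-lfs is installed):
|
|     # prints the Endpoint= line the repo will use for LFS objects,
|     # along with the rest of the effective LFS configuration
|     git lfs env
|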
| ▲ | firesteelrain 4 days ago | parent [-] | Ah, my point is that even with LFS enabled, if you don't store the binaries external to git they are still fully part of the history (and really slow down cloning). In my case, with a 25 GB repo, it was really detrimental to performance.
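|
| One way to do the pre-initial-commit trick bobmcnamara mentions above (a sketch only, with hypothetical remote and branch names) is via replace refs:
|
|     # in the new, slimmed-down repo
|     git remote add old-history https://example.com/old-repo.git
|     git fetch old-history
|
|     # make the new root commit appear to have the old tip as its parent,
|     # so the old history shows up in log/blame on this machine
|     git replace --graft <new-root-commit> old-history/main
|
|     # note: replace refs are local; they only reach other clones if
|     # refs/replace/* is pushed and fetched explicitly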
|
| ▲ | actinium226 5 days ago | parent | prev | next [-] |
| For lots of local edits you can squash commits using the rebase command with the interactive flag. |
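|
| A minimal sketch, assuming the branch tracks a hypothetical origin/main:
|
|     # open an interactive todo list covering every commit since origin/main;
|     # mark commits as "squash" (or "fixup") to collapse them together
|     git rebase -i origin/main
|
|     # note: the old blobs are only reclaimed locally after the reflog
|     # entries expire and git gc prunes the now-unreachable objects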
|
| ▲ | reactordev 5 days ago | parent | prev [-] |
| yeah, this isn't really solving the problem, it's just punting it. While I welcome a short-circuit filter, I see dragons ahead. Dependencies. Assets. Models... won't benefit at all, as these repos need the large files - that's why the large files are there in the first place.
| |
| ▲ | rezonant 5 days ago | parent | next [-] | There seems to be a misunderstanding. The --filter option simply doesn't populate the .git directory with content that isn't required for the checkout. If a large file is needed for the current checkout (i.e. the part outside the .git folder), it will be fetched regardless of the filter option. To put it another way, no matter what max size you give --filter, you end up with a complete git checkout, with no missing files.
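|
| For instance, with a hypothetical URL and a 1 MB threshold (assuming the server supports partial clone):
|
|     # historical blobs over 1 MB are left out of .git at clone time...
|     git clone --filter=blob:limit=1m https://example.com/big-repo.git
|
|     # ...but anything referenced by the checked-out commit is still
|     # downloaded, so the working tree itself is complete
|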
| ▲ | actuallyalys 5 days ago | parent | prev [-] | It's definitely not a full solution, but it seems like it would solve the cases where the desired behavior is keeping the full history of the large files available, just not on everyone's machine.
|