moebrowne 5 hours ago
This kind of thing can be mitigated by not publishing a page/download for every single branch, commit, and diff in a repo. Make only the HEAD of each branch available. Anyone who wants more detail has to clone it and view it with their favourite git client. For example: https://mitxela.com/projects/web-git-sum (https://git.mitxela.com/)
PaulDavisThe1st an hour ago
Alternatively, git.ardour.org handles this at the web-server level, in its nginx config file.
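A minimal sketch of that kind of rule set, assuming git-http-backend behind fcgiwrap and repositories under /var/lib/git (illustrative only, not the actual git.ardour.org file): only the smart-HTTP clone endpoints are served, and every other URL is refused.

    # Hypothetical example, not the real git.ardour.org config.
    # Serve only the smart-HTTP clone endpoints via git-http-backend.
    location ~ ^/(.+\.git)/(info/refs|git-upload-pack)$ {
        include       fastcgi_params;
        fastcgi_param SCRIPT_FILENAME     /usr/lib/git-core/git-http-backend;
        fastcgi_param GIT_HTTP_EXPORT_ALL "";
        fastcgi_param GIT_PROJECT_ROOT    /var/lib/git;   # assumed repo root
        fastcgi_param PATH_INFO           $uri;
        fastcgi_pass  unix:/var/run/fcgiwrap.socket;
    }

    location / {
        # no per-commit pages, diffs, or snapshot downloads to crawl
        return 403;
    }

Clones and fetches over HTTPS keep working, but there is nothing left for a crawler to enumerate.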
Imustaskforhelp 5 hours ago
This and another comment gave me another interesting idea: what if we combined this with SSH-based git clients/websites alongside the normal workflow? Maybe something like https://ssheasy.com/ or similar could be used, or even a gotty/xterm instance that automatically connects over SSH and presents a TUI-like interface. I feel that would be enough to stop the scrapers.