cnst | 5 days ago
If you look at the history of these files, they've basically changed at most once since being committed years ago. Regenerating such static data from some master source would be completely pointless, would add needless extra dependencies to the build process, and in this specific case may not even be possible, given the proprietary nature of the off-topic tooling that may be required for effective management of the initial files.

In OpenBSD, NetBSD and other systems, there's actually a whole bunch of machine-generated files that are always part of the repository. Build manifests (lists of all the binary files in the shipping product, e.g., distrib/sets/lists/base/mi) and pcidevs/usbdevs immediately come to mind:

https://github.com/search?q=repo%3Aopenbsd%2Fsrc+sync&type=c...

https://github.com/search?q=repo%3Aopenbsd%2Fsrc+regen&type=...

Avoiding bison/yacc parser generators as a build dependency is another common reason for the practice.

Personally, I'm a huge proponent of it. It reduces the complexity of the build system, increases the transparency of the history of changes, and gives people a better understanding of where things come from, because you can find those things directly in the respective pcidevs.h / usbdevs.h, instead of wondering what is going on and where those things are defined. It's a HUGE advantage.

I never understood why so many people are horrified at the idea of small amounts of machine-generated code being manually committed straight into the repositories. They seem to be misapplying the general rule against the practice, ignoring the specific exceptions where it is clearly beneficial.

One of my other favourite examples is self-documenting code, e.g., man-pages or test results.
For example, maybe you use Go, and your man-pages are automatically generated from the inline documentation in each Go file. Committing such human-readable artefacts into the repository is a great idea if it allows everyone to immediately see what's going on with the documentation, instead of having to run the code to find out. This increases transparency and code-review efficiency, and makes it easier to promote changes, because it's very clear to everyone what's going on, without having to reverse-engineer the code, or apply the patches and recompile, etc.

Of course, if your whole idea is to hide things from management and increase the complexity of the system to prevent newcomers from catching up quickly, then such practices may indeed be detrimental.
Y_Y | 5 days ago | parent
> I never understood why so many people are horrified at the idea of small amounts of the machine-generated code being manually committed straight into the repositories.

If you haven't understood, maybe you could think more about it, or ask, or reduce the hyperbole until you're looking at something reasonable. The amount of code we're talking about here is by no measure small, nor is it "horrifying" people. Your post reads like the kind of weird advocacy that shows up in Jira pissing matches.