Reproducible C++ builds by logging Git hashes (jgarby.uk)
27 points by j4cobgarby 6 days ago | 29 comments

danudey 4 hours ago | parent | prev | next [-]

A simpler way to do this, especially if you do tagging in your repositories, is to use `git describe`. For example:

    $ git describe --dirty
    v1.4.1-1-gde18fe90-dirty
The format is <most recent tag>-<number of commits since that tag>-g<short commit hash>, with `-dirty` appended only if the repo is dirty.

If the repo isn't dirty, the output omits that suffix:

    $ git describe --dirty
    v1.4.1-1-gde18fe90
If you're using lightweight tags (the default) rather than annotated tags (which carry messages, signatures, and so on), you may want to add `--tags`, because otherwise `git describe` will skip over any lightweight tags.

The other nice thing about this is that, if the repo is not -dirty, you can use the output from `git describe` in other git commands to reference that commit:

    $ git show -s v1.4.1-1-gde18fe90
    commit de18fe907edda2f2854e9813fcfbda9df902d8f1 (HEAD -> 1.4.1-release, origin/HEAD, origin/1.4.1-release)
    Author: rockowitz <rockowitz@minsoft.com>
    Date:   Sun May 28 17:09:46 2023 -0400

        Create codacy.yml
WorldMaker 3 hours ago | parent | next [-]

`git describe` is great.

Also, if you don't feel ready to commit to tagging your repository, you can start with the `--always` flag, which falls back to just the short commit hash.

The article's script isn't far from `git describe --always --dirty`, which can be a good place to start, and then it gets better as you start tagging.
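
For anyone wiring that up by hand, here's a minimal sketch of the consuming side, assuming the build injects the `git describe` output as a compile definition (the GIT_DESCRIBE macro name is just for illustration, not from the article):

    // version.cpp - minimal sketch; assumes the build passes the output
    // of `git describe --always --dirty` as a compile definition, e.g.
    //   g++ -DGIT_DESCRIBE="\"$(git describe --always --dirty)\"" version.cpp
    #include <cstdio>

    #ifndef GIT_DESCRIBE
    #define GIT_DESCRIBE "unknown"  // fallback when built outside a repo
    #endif

    int main() {
        std::printf("build: %s\n", GIT_DESCRIBE);
    }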

o11c an hour ago | parent | prev [-]

The one caveat to this is that you must perform a sufficiently-deep clone that you can actually reach the tag.

halayli 3 hours ago | parent | prev | next [-]

That barely scratches the surface when it comes to reproducible C and C++ builds. The topic of reproducible builds already assumes your sources are the same; identical sources really aren't the hard part here.

You need to control the version of every library header you use outside your own source (standard library, OS headers, third-party libraries), and you need a strategy for random and date/time values that can end up embedded in the binary.
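
The date/time part is easy to trip over: this deliberately minimal translation unit produces a different binary on every compile, because the predefined macros bake in the wall clock (GCC and Clang can flag such uses with -Wdate-time):

    // Minimal nondeterminism example: __DATE__ and __TIME__ embed the
    // wall clock, so two compiles of identical sources yield different
    // bytes.
    #include <cstdio>

    int main() {
        std::printf("built %s %s\n", __DATE__, __TIME__);
    }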

hogehoge51 9 minutes ago | parent | next [-]

You also need to capture the version of the toolchain, among other things. You should also have a traceable link to the version of your specifications.

Just use ClearCase/ClearMake; it's been doing all of this software configuration auditing for you since the 1990s.

WalterBright an hour ago | parent | prev | next [-]

Also the compiler/linker used to build it.

matrss an hour ago | parent [-]

As well as the toolchain used to compile your toolchain, through multiple levels, and all compiler flags along the path, and so on, down to some "seed" from which everything is built.

Guix' full-source bootstrap is pretty enlightening on that topic: https://guix.gnu.org/manual/devel/en/html_node/Full_002dSour...

YayaScript an hour ago | parent | prev [-]

How would you even start solving these?

syncsynchalt 40 minutes ago | parent | next [-]

Take a look at the decade+ long effort that Debian has put into this problem: https://wiki.debian.org/ReproducibleBuilds

Here's a talk from 2024: https://debconf24.debconf.org/talks/18-reproducible-builds-t...

Several distros are above the 90% mark of all packages being byte-for-byte reproducible, and one or two have hit the 99% mark.

ignoramous 5 minutes ago | parent [-]

> Several distros are above the 90% mark of all packages being byte-for-byte reproducible, and one or two have hit the 99% mark.

Simply incredible.

Explains F-Droid's recent success with Reproducible Builds (as some F-Droid maintainers are also active in the Debian scene): https://f-droid.org/en/2025/05/21/making-reproducible-builds...

matrss 32 minutes ago | parent | prev | next [-]

A good package manager, e.g. GNU Guix, lets you define a reproducible environment of all of your dependencies. This accounts for all of those external headers and shared libraries, which are made available in an isolated build environment that contains them and nothing else.

Eliminating nondeterminism from your builds might require some thinking; there are a number of places it can creep in (timestamps, random numbers, nondeterministic execution, ...). A good package manager can at least give you tooling to validate that you have eliminated it (e.g. `guix build --check ...`).
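
One common flavor of that nondeterministic execution is a code generator iterating an unordered container. A small illustrative sketch of the problem and the usual fix (sort before emitting):

    // Illustrative sketch: a tiny code generator. Iterating the
    // unordered_map directly emits lines in an unspecified order that
    // can differ across standard-library versions and insertion
    // patterns; sorting the names first makes the output deterministic.
    #include <algorithm>
    #include <cstdio>
    #include <string>
    #include <unordered_map>
    #include <vector>

    int main() {
        std::unordered_map<std::string, int> symbols{
            {"foo", 1}, {"bar", 2}, {"baz", 3}};

        std::vector<std::string> names;
        for (const auto& entry : symbols) names.push_back(entry.first);
        std::sort(names.begin(), names.end());  // fix the emission order

        for (const auto& name : names)
            std::printf("#define SYM_%s %d\n", name.c_str(), symbols.at(name));
    }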

Once you control the entire environment and your build is reproducible in principle, you might still encounter some fun issues, like "time traps". Guix has a great blog post about some of these issues and how they mitigate them: https://guix.gnu.org/en/blog/2024/adventures-on-the-quest-fo...

hogehoge51 7 minutes ago | parent | prev | next [-]

AFAIK ClearMake intercepted file system access and recorded the version of everything touched during your build.

MomsAVoxell an hour ago | parent | prev [-]

Virtualization, imho. Every build gets its own virtual machine, and once the build is released to the public, the VM gets cloned for continued development and the released VM gets archived.

I do this git tags thing with my projects - it helps immensely if the end user can hover over the company logo and get a tooltip with the current version, git tag and hash, and any other relevant information to the build.

Then, if I need to triage something specific, I un-archive the virtualized build environment, and everything that was there in the original build is still there.

This is a very handy method for keeping large code bases under control, and has been very effective over the years in going back to triage new bugs found, fixing them, and so on.

corysama 41 minutes ago | parent [-]

Back in the PS2 era of game development, we didn't have much in the way of virtual machines to work with. And making a shippable build involved wacky custom hardware that wouldn't work in a VM anyway. So instead we had The Build Machine.

The Build Machine would be used to make The Gold Master Disc. A physical DVD that would be shipped to the publisher to be reproduced hopefully millions of times. Getting The Gold Master Disc to a shippable state would usually take weeks because it involved burning a custom disc format for each build and there was usually no way to debug other than watching what happened on the game screen.

When The Gold Master Disc was finally finalized, The Build Machine would be powered down, unplugged, labeled "This is the machine that made The Gold Master Disc for Game XYZ. DO NOT DISCARD. Do not power on without express permission from the CTO." and archived in the basement forever. Or, until the company shut down. Then, who knows what happens to it.

But, there was always a chance that the publisher or Sony would come back and request to make a change for 1.0.1 version because of some subtle issue that was found later. You don't want to take any chances starting the build process over on a different machine. You make the minimal changes possible on The Build Machine and you get The Gold Master Disc 1.0.1 out ASAP.

chuckadams 5 hours ago | parent | prev | next [-]

Give Nix a look sometime; it takes this to a whole new level by including all of the build dependencies in the hash, and their build dependencies, and so on. The standard flake workflow even includes the warning about having uncommitted files.

ikety 4 hours ago | parent [-]

It's quite odd to me that Nix or something similar like Mise isn't completely ubiquitous in software. I feel like I went from having issues with build dependencies to having that aspect of software development completely solved as soon as I adopted Nix.

I absolutely can't imagine not using some kind of tool like this. Feels as vital as VCS to me now.

peterldowns 2 hours ago | parent | next [-]

Agreed. Recently started a new gig and set up Mise (previously had used nix for this) in our primary repos so that we can all share dependencies, scripts, etc. The new monorepo mode is great. Basically no one has complained and it's made everyone's lives a lot easier. Can't imagine working any other way — having the same tools everywhere is really great.

I'll also say I have absolutely 0 regrets about moving from Nix to Mise. All the common tools we want are available, it's especially easy to install tools from pip or npm and have the environments automanaged. The docs are infinity times better. And the speed of install and shell sourcing is, you guessed it, much better. Initial setup and install is also fantastically easier. I understand the ideology behind Nix, and if I were working on projects where some of our tools weren't pre-packageable or had weird conflicting runtime lib problems I'd get it, but basically everything these days has prebuilt static binaries available.

chuckadams 2 hours ago | parent [-]

Mise is pretty nice, and I'd recommend it over all the other gazillion version-manager things out there, but it's not without its own weak spots: I tried mise for a PHP project, and neither of the backends available for PHP had a binary for macOS, and both of them failed to build it. I now use a flake.nix, along with direnv and `use flake`. The Nix language definitely makes for some baffling boilerplate around the dependencies list, but devs unfamiliar with Nix can ignore it and just paste in the package name from nixpkgs search.

There's also jbadeau/mise-nix that lets you use flakes in mise, but I figured at that point I may as well just use flake.nix.

peterldowns an hour ago | parent [-]

The beauty of mise is that as long as someone is hosting a precompiled binary for you, it's easy to get it. I just repro'd and yeah, `mise use php` fails for me on my machine because I don't have any dev headers. But looks like there's an easy workaround using the `ubi` downloader:

https://github.com/jdx/mise/discussions/4720#discussioncomme...

or see the first comment on that thread for a way to explicitly specify where to find the binaries for each platform:

https://github.com/jdx/mise/discussions/4720#discussioncomme...

Having these kinds of "eject" options is one of the reasons I really appreciate Mise. Not sure this would work for you, but I'd rather be able to do this than have to manage/support everyone on my dev team installing and maintaining Nix.

chuckadams 3 hours ago | parent | prev | next [-]

We'd have been a lot further along if tools like make had ever adopted hashes for freshness checking rather than timestamps. We'd have ccache built into make, make could hash entire targets, and then we're halfway to derivations. Of course, that's handwaving over the tricky problem of making sure targets build reproducibly, but perhaps compiler toolchains would have taken more care to ensure it.

eptcyka an hour ago | parent | next [-]

I'd say the sad part is that nix really works well when the toolchain does caching transparently. But to deliver good DX outside of nix, you kind of want great porcelain tooling that handles everything behind the scenes - downloading libraries, building said libraries, linking everything together. Sometimes people choose to just embed a whole programming language to make their build system work, e.g. Gradle. Cargo just does everything. Nix then can't really build everything granularly, piece by piece, when building Rust crates with Cargo - you just get to rebuild every dependency any time the derivation is built and any one input changed. I wonder how much less time would've been wasted if newer languages had chosen to build on top of nix. Of course, nix would need to become slightly more compatible with Windows and other OSes for this to be practical.

bigfishrunning 2 hours ago | parent | prev [-]

Timestamps have the property of being easily comparable; you can always tell if one file is older than the other. If you were to use hashes for the same purpose, you'd have to keep a database of expected hashes, comparing them would be a less trivial task, etc. It's doable, but it would be a very differently designed (and much more computationally expensive) program than make.
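
As a back-of-the-envelope sketch of that database idea (hypothetical, not how make works; std::hash stands in for the stronger content hash a real tool would use):

    // Hypothetical sketch: an input is stale if its content hash differs
    // from the one recorded in a manifest.
    #include <cstddef>
    #include <cstdio>
    #include <fstream>
    #include <functional>
    #include <sstream>
    #include <string>
    #include <unordered_map>

    using Manifest = std::unordered_map<std::string, std::size_t>;

    std::size_t hash_file(const std::string& path) {
        std::ifstream in(path, std::ios::binary);
        std::ostringstream buf;
        buf << in.rdbuf();  // read the whole file
        return std::hash<std::string>{}(buf.str());
    }

    // True if `input` changed since the last recorded hash; records the
    // new hash so the next run sees the file as fresh.
    bool is_stale(Manifest& manifest, const std::string& input) {
        const std::size_t h = hash_file(input);
        auto it = manifest.find(input);
        if (it != manifest.end() && it->second == h) return false;
        manifest[input] = h;
        return true;
    }

    int main() {
        Manifest manifest;  // a real tool would persist this across runs
        std::printf("stale? %d\n", is_stale(manifest, "Makefile"));  // 1
        std::printf("stale? %d\n", is_stale(manifest, "Makefile"));  // 0
    }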

chuckadams 33 minutes ago | parent [-]

I bet we could get pretty far with symlinks, but then again, even those were an exotic feature on some of make's supported platforms. Nowadays, may as well use SQLite.

zokier an hour ago | parent | prev [-]

I think Bazel is the tool a lot of people are converging on, but it turns out that maintaining complex build setups is a lot of work.

groby_b 3 hours ago | parent | prev | next [-]

This is many useful things, but it's far from a reproducible C++ build. That'd require ensuring bit-for-bit identical builds when you reproduce, and logging the repository state is just a tiny first step toward getting there.

https://nikhilism.com/post/2020/windows-deterministic-builds... is a good resource on some of the other steps needed. It's... a non-trivial journey :)
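
(The eventual acceptance test is at least simple to state: build twice, independently, and compare bytes. A throwaway sketch, with hypothetical artifact paths:)

    // Throwaway check with hypothetical paths: the build is reproducible
    // only if two independent builds match byte for byte.
    #include <fstream>
    #include <iostream>
    #include <iterator>
    #include <string>

    static std::string slurp(const char* path) {
        std::ifstream in(path, std::ios::binary);
        return std::string(std::istreambuf_iterator<char>(in), {});
    }

    int main() {
        const bool same = slurp("build-a/app") == slurp("build-b/app");
        std::cout << (same ? "bit-for-bit identical\n" : "builds differ\n");
        return same ? 0 : 1;
    }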

j4cobgarby 6 days ago | parent | prev | next [-]

Here's a short writeup of a bit of my build system for a project I'm working on. It's pretty simple, and is just a relatively clean way of recording the repository state when code was compiled, so I can reproduce results later on. Just thought the interaction between git, cmake, and C++ was a bit nice!

Scott-David 3 hours ago | parent | prev | next [-]

Logging Git hashes makes C++ builds reproducible and easy to track.

adamchol 3 hours ago | parent | prev [-]

nix fixes this

had to be said