Guix Container Images for GitLab CI/CD

I am using GitLab CI/CD pipelines for several upstream projects (libidn, libidn2, gsasl, inetutils, libtasn1, libntlm, …) and a long-time concern for these has been that there is too little testing on GNU Guix. Several attempts have been made, and earlier this year Ludo’ came really close to finishing this. My earlier effort to idempotently rebuild Debian recently led me to think about re-bootstrapping Debian. Since Debian is a binary distribution, it re-uses earlier binary packages when building new packages. The prospect of re-bootstrapping Debian in a reproducible way by rebuilding all of those packages going back to the beginning of time does not appeal to me. Instead, wouldn’t it be easier to build Debian trixie (or some future release of Debian) from Guix, by creating a small bootstrap sandbox that can start to build Debian packages, and then make sure that the particular Debian release can idempotently rebuild itself in a reproducible way? Then you will eventually end up with a reproducible and re-bootstrapped Debian, which paves the way for a trustworthy release of Trisquel. Fortunately, such an endeavour appears to offer many rabbit holes. Preparing Guix container images for use in GitLab pipelines is one that I jumped into in the last few days, and just came out of.

Let’s go directly to the point of this article: here is a GitLab pipeline job, running in a native Guix container image, that builds libksba after installing its libgpg-error dependency from Guix using pre-built substitutes.

test-amd64-latest-wget-configure-make-libksba:
  image: registry.gitlab.com/debdistutils/guix/container:latest
  before_script:
  # Link the Guix profile's /etc files into /etc, and replace the
  # symlinked group file with a real one so groupadd can modify it.
  - lndir /gnu/store/*profile/etc/ /etc
  - rm -f /etc/group
  # Create the build group and users that guix-daemon requires.
  - groupadd --system guixbuild
  - for i in $(seq -w 1 10); do useradd -g guixbuild -G guixbuild -d /var/empty -s $(command -v nologin) -c "Guix build user $i" --system guixbuilder$i; done
  - export HOME=/
  - export LANG=C.UTF-8
  # Start the daemon without chroot isolation (not available inside an
  # unprivileged container).
  - guix-daemon --disable-chroot --build-users-group=guixbuild &
  # Authorize substitutes from the two official build farms.
  - guix archive --authorize < /share/guix/ci.guix.gnu.org.pub
  - guix archive --authorize < /share/guix/bordeaux.guix.gnu.org.pub
  - guix describe
  - guix package -i libgpg-error
  # Make the installed packages available in the environment.
  - GUIX_PROFILE="//.guix-profile"
  - . "$GUIX_PROFILE/etc/profile"
  script:
  - wget https://www.gnupg.org/ftp/gcrypt/libksba/libksba-1.6.7.tar.bz2
  - tar xfa libksba-1.6.7.tar.bz2
  - cd libksba-1.6.7
  - ./configure
  - make V=1
  - make check VERBOSE=t V=1

You can put that in a .gitlab-ci.yml file and push it to GitLab, and you will end up with nice pipeline job output.

As you may imagine, there are several things in the before_script above that are sub-optimal and ought to be taken care of by the Guix container image itself, and I hope to be able to remove as much of the ugliness as possible. However, that doesn’t change the fact that these images are useful now, and I wanted to announce this work to allow others to start testing them and possibly offer help. I have started to make use of these images in some projects; see for example the libntlm commit for that.

You are welcome to join me in the Guix container images for GitLab CI/CD project! Issues and merge requests are welcome – happy hacking, folks!

Towards Idempotent Rebuilds?

After rebuilding all added/modified packages in Trisquel, I have been circling around the elephant in the room: 99% of the binary packages in Trisquel come from Ubuntu, which to a large extent are built from Debian source packages. Is it possible to rebuild the official binary packages identically? Does anyone make an effort to do so? Does anyone care about going through the differences between the official package and a rebuilt version? Reproducible-builds.org’s effort to track reproducibility bugs in Debian (and other systems) is amazing. However, as far as I know, they do not confirm or deny that their rebuilds match the official packages. In fact, typically their rebuilds do not match the official packages, even when they say the package is reproducible, which had me surprised at first. To understand why that happens, compare the buildinfo file for the official coreutils 9.1-1 from Debian bookworm with the buildinfo file for reproducible-builds.org’s build and you will see that the SHA256 checksums do not match, yet they still declare it a reproducible package. As far as I can tell, the purpose of their rebuilds is not to say anything about the official binary build; instead, the purpose is to offer a QA service to maintainers by performing two builds of a package and declaring success if both builds match.
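If you want to do that comparison yourself, a rough sketch looks like this, assuming you have downloaded the two .buildinfo files under placeholder names; the Checksums-Sha256 field lists the checksums of the binary packages each build produced:

grep -A20 '^Checksums-Sha256:' official.buildinfo > official.sums
grep -A20 '^Checksums-Sha256:' rebuild.buildinfo > rebuild.sums
diff -u official.sums rebuild.sums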

I have felt that something is lacking, and months have passed and I haven’t found any project that addresses the problem I am interested in. During my earlier work I created a project called debdistreproduce which performs rebuilds of the difference between two distributions in a GitLab pipeline, and displays diffoscope output for further analysis. A couple of days ago I had the idea of rewriting it to perform rebuilds of a single distribution. A new project debdistrebuild was born, and today I’m happy to bless it as version 1.0 and to announce the project! Debdistrebuild has rebuilt the top-50 popcon packages from Debian bullseye, bookworm and trixie, on amd64 and arm64, as well as Ubuntu jammy and noble on amd64; see the summary status page for links. This is intended as a proof of concept, to allow people to experiment with the concept of doing GitLab-based package rebuilds and analysis. Compare how Guix has the guix challenge command.

Or I should say debdistrebuild has attempted to rebuild those distributions. The number of identically built packages is fairly low, so I didn’t want to waste resources building the rest of the archive until I understand if the differences are consequences of my build environment (plain apt-get build-dep followed by dpkg-buildpackage in a fresh container), or due to some real difference. Summarizing the results, debdistrebuild is able to rebuild 34% of Debian bullseye on amd64, 36% of bookworm on amd64, and 32% of bookworm on arm64. The results for trixie and Ubuntu are disappointing, below 10%.
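For reference, a minimal sketch of that kind of build environment, using the small hello package as an example (this is not debdistrebuild’s exact script):

podman run -it --rm debian:bookworm
echo 'deb-src http://deb.debian.org/debian bookworm main' >> /etc/apt/sources.list
apt-get update
apt-get install -y --no-install-recommends build-essential dpkg-dev
apt-get source hello
apt-get build-dep -y hello
cd hello-*
dpkg-buildpackage -us -uc
sha256sum ../*.deb   # compare against the official archive's checksums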

So what causes my rebuilds to be different from the official builds? Some differences are trivial, like the classical problem of varying build paths, resulting in a different NT_GNU_BUILD_ID causing a mismatch. Some are a bit strange, like a subtle difference in one of perl’s header files. Some are due to embedded version numbers from a build dependency. Several of the build logs and diffoscope outputs don’t make sense, likely due to bugs in my build scripts, especially for Ubuntu, which appears to strip translations and do other build variations that I don’t do. In general, the classes of reproducibility problems are the expected ones. Some are assembler differences for GnuPG’s gpgv-static, likely triggered by the upload of a new version of gcc after the original package was built. There are at least two ways to resolve that problem: either use the same versions of the build dependencies that were used to produce the original build, or demand that all packages that are affected by a change in another package are rebuilt centrally until there are no more differences.
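The build-ID mismatches are easy to inspect yourself; a small sketch, where the rebuilt path is a hypothetical example:

readelf -n /usr/bin/ls | grep 'Build ID'            # official binary
readelf -n ./rebuilt/usr/bin/ls | grep 'Build ID'   # rebuilt binary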

The current design of debdistrebuild uses the latest version of a build dependency that is available in the distribution. We call this an “idempotent rebuild“. This is usually not how the binary packages were built originally; they are often built against earlier versions of their build dependencies. That is the situation for most binary distributions.

Instead of using the latest build dependency version, higher reproducibility may be achieved by rebuilding using the same versions of the build dependencies that were used during the original build. This requires parsing buildinfo files to find the right version of each build dependency to install. We believe doing so will lead to a higher number of reproducibly built packages. However, it raises the question: can we rebuild that earlier version of the build dependency? This circles back to really old versions and bootstrappable builds eventually.
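A rough sketch of what that parsing could look like, assuming a downloaded pkg.buildinfo file (older versions will typically have to come from snapshot.debian.org):

# extract "name (= version)" entries from Installed-Build-Depends and
# turn them into name=version arguments for apt-get
sed -n '/^Installed-Build-Depends:/,/^[^ ]/p' pkg.buildinfo \
  | sed -n 's/^ \(.*\) (= \(.*\)),\{0,1\}$/\1=\2/p' > versions.txt
xargs -a versions.txt apt-get install -y --allow-downgrades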

While rebuilding old versions would be interesting on its own, we believe that is less helpful for trusting the latest version and improving a binary distribution: it is challenging to publish a new version of some old package that would fix a reproducibility bug in another package when used as a build dependency, and then rebuild the later packages with the modified earlier version. Those earlier packages were already published, and are part of history. It may be that ultimately it will no longer be possible to rebuild some package: because proper source code is missing (for packages using build dependencies that were never part of a release), because the hardware to build a package is missing, or because the source code is no longer publicly distributable.

I argue that getting to 100% idempotent rebuilds is an interesting goal on its own, and to reach it we need to start measuring idempotent rebuild status.

One could conceivably imagine a way to rebuild modified versions of earlier packages, and then rebuild later packages using the modified earlier packages as build dependencies, for the purpose of achieving a higher level of reproducible rebuilds of the latest version, and to reach for bootstrappability. However, it may still be that this is insufficient to achieve idempotent rebuilds of the latest versions. Idempotent rebuilds are different from reproducible builds (where we try to reproduce the build using the same inputs) and from bootstrappable builds (in which all binaries are ultimately built from source code). Consider a cycle where package X influences the content of package Y, which in turn influences the content of package X. These cycles may involve several packages, and it is conceivable that such a cycle never converges. It may be difficult to identify these chains, and even more difficult to break them up, but this effort helps identify where to start looking for them. Rebuilding packages using the same build dependency versions as were used during the original build, or rebuilding packages using a bootstrappable build process, both seem orthogonal to the idempotent rebuild problem.

Our notion of rebuildability thus appears to be complementary to reproducible-builds.org’s definition and bootstrappable.org’s definition. Each to their own devices, and Happy Hacking!

Addendum about terminology: With “idempotent rebuild” I am talking about a rebuild of the entire operating system, applied to itself. Compare how you build the latest version of the GNU C Compiler: it first builds itself using whatever system compiler is available (often an earlier version of gcc), which we call step 1. Then step 2 is to build a copy of itself using the compiler built in step 1. The final step 3 is to build another copy of itself using the compiler from step 2. Debian, Ubuntu etc. are at step 1 in this process right now. The output of step 2 and step 3 ought to be bit-by-bit identical, or something is wrong. The comparison between steps 2 and 3 is what I refer to as an idempotent rebuild. Of course, most packages aren’t a compiler that can compile itself. However, entire operating systems such as Trisquel, PureOS, Ubuntu or Debian are (hopefully) self-contained systems that ought to be able to rebuild themselves into an identical copy. Or something is amiss. The reproducible build and bootstrappable build projects are about improving the quality of step 1. The property I am interested in is the identical rebuild and comparison in steps 2 and 3. I feel the word “idempotent” describes the property I’m interested in well, but I realize there may be better ways to describe this. Ideas welcome!
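GCC’s own build system automates exactly this three-step comparison; roughly (paths and options simplified):

# a native gcc build bootstraps three stages by default, then compares
# the stage 2 and stage 3 object files and aborts on any difference
../gcc/configure
make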

Reproducible and minimal source-only tarballs

With the release of Libntlm version 1.8 the release tarball can be reproduced on several distributions. We also publish a signed minimal source-only tarball, produced by git-archive, which is the same format used by Savannah, Codeberg, GitLab, GitHub and others. Reproducibility of both tarballs is tested continuously for regressions on GitLab through a CI/CD pipeline. If that wasn’t enough to excite you, the Debian packages of Libntlm are now built from the reproducible minimal source-only tarball. The resulting binaries are reproducible on several architectures.

What does that even mean? Why should you care? How can you do the same for your project? What are the open issues? Read on, dear reader…

This article describes my practical experiments with reproducible release artifacts, following up on my earlier thoughts that led to discussion on Fosstodon and a patch by Janneke Nieuwenhuizen to make Guix tarballs reproducible, which inspired me to do some practical work.

Let’s look at how a maintainer releases some software, and how a user can reproduce the released artifacts from the source code. Libntlm provides a shared library written in C and uses GNU Make, GNU Autoconf, GNU Automake, GNU Libtool and gnulib for build management, but these ideas should apply to most projects and build systems. The following illustrates the steps a maintainer would take to prepare a release:

git clone https://gitlab.com/gsasl/libntlm.git
cd libntlm
git checkout v1.8
./bootstrap
./configure
make distcheck
gpg -b libntlm-1.8.tar.gz

The generated files libntlm-1.8.tar.gz and libntlm-1.8.tar.gz.sig are published, and users download and use them. This is how the GNU project has been doing releases since the late 1980s. That is a testament to how successful this pattern has been! These tarballs contain source code and some generated files: typically shell scripts generated by autoconf, makefile templates generated by automake, and documentation in formats like Info, HTML, or PDF. Rarely do they contain binary object code, but historically that happened.

The XZUtils incident illustrates that tarballs with files that are not included in the git archive offer an opportunity to disguise malicious backdoors. I blogged earlier about how to mitigate this risk by using signed minimal source-only tarballs.

The risk of hiding malware is not the only motivation to publish signed minimal source-only tarballs. With pre-generated content in tarballs, there is a risk that GNU/Linux distributions such as Trisquel, Guix, Debian/Ubuntu or Fedora ship generated files coming from the tarball into the binary *.deb or *.rpm package file. Typically the person packaging the upstream project never realized that some installed artifacts were not rebuilt through a typical autoreconf -fi && ./configure && make install sequence, and never wrote the code to rebuild everything. This can also happen if the build rules are written but are buggy, shipping the old artifact. When a security problem is found, this can lead to time-consuming situations, as it may be that patching the relevant source code and rebuilding the package is not sufficient: the vulnerable generated object from the tarball would be shipped into the binary package instead of a rebuilt artifact. For architecture-specific binaries this rarely happens, since object code is usually not included in tarballs — although for 10+ years I shipped the binary Java JAR file in the GNU Libidn release tarball, until I stopped shipping it. For interpreted languages, and especially for generated content such as HTML, PDF and shell scripts, this happens more than you would like.

Publishing minimal source-only tarballs enables easier auditing of a project’s code, avoiding the need to read through all generated files looking for malicious content. I have taken care to generate the source-only minimal tarball using git-archive. This is the same format that GitLab, GitHub etc. offer for the automated download links on git tags. The minimal source-only tarballs can thus serve as a way to audit GitLab and GitHub download material! Consider if/when hosting sites like GitLab or GitHub have a security incident that causes generated tarballs to include a backdoor that is not present in the git repository. If people rely on the tag download artifact without verifying the maintainer PGP signature using GnuPG, this can lead to a backdoor scenario similar to the XZUtils one, but originating with the hosting provider instead of the release manager. This is even more concerning, since such an attack can be mounted against selected IP addresses that you want to target, and not against everyone, thereby making it harder to discover.
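As a sketch of such an audit (the download URL follows GitLab’s usual tag-archive convention; comparing the uncompressed tar streams sidesteps gzip differences):

git clone https://gitlab.com/gsasl/libntlm.git
cd libntlm
git archive --prefix=libntlm-v1.8/ v1.8 | sha256sum
wget -qO- https://gitlab.com/gsasl/libntlm/-/archive/v1.8/libntlm-v1.8.tar.gz \
  | gzip -dc | sha256sum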

With all that discussion and rationale out of the way, let’s return to the release process. I have added another step here:

make srcdist
gpg -b libntlm-1.8-src.tar.gz

Now the release is ready. I publish these four files in Libntlm’s Savannah Download area, but they can be uploaded to a GitLab/GitHub release area as well. These are the SHA256 checksums I got after building the tarballs on my Trisquel 11 aramo laptop:

91de864224913b9493c7a6cec2890e6eded3610d34c3d983132823de348ec2ca  libntlm-1.8-src.tar.gz
ce6569a47a21173ba69c990965f73eb82d9a093eb871f935ab64ee13df47fda1  libntlm-1.8.tar.gz

So how can you reproduce my artifacts? Here is how to reproduce them in an Ubuntu 22.04 container:

podman run -it --rm ubuntu:22.04
apt-get update
apt-get install -y --no-install-recommends autoconf automake libtool make git ca-certificates
git clone https://gitlab.com/gsasl/libntlm.git
cd libntlm
git checkout v1.8
./bootstrap
./configure
make dist srcdist
sha256sum libntlm-*.tar.gz

You should see the exact same SHA256 checksum values. Hooray!

This works because Trisquel 11 and Ubuntu 22.04 use the same versions of git, autoconf, automake, and libtool. These tools do not guarantee the same output content across versions, similar to how GNU GCC does not generate the same binary output across versions. So there is still some delicate version pairing needed.
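When reporting a successful (or failed) reproduction, it helps to record the exact tool versions involved; a small sketch:

for tool in git autoconf automake libtool; do
  $tool --version | head -n1
done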

Ideally, the artifacts should be possible to reproduce from the release artifacts themselves, and not only directly from git. It is possible to reproduce the full tarball in an AlmaLinux 8 container – replace almalinux:8 with rockylinux:8 if you prefer RockyLinux:

podman run -it --rm almalinux:8
dnf update -y
dnf install -y make wget gcc
wget https://download.savannah.nongnu.org/releases/libntlm/libntlm-1.8.tar.gz
tar xfa libntlm-1.8.tar.gz
cd libntlm-1.8
./configure
make dist
sha256sum libntlm-1.8.tar.gz

The source-only minimal tarball can be regenerated on Debian 11:

podman run -it --rm debian:11
apt-get update
apt-get install -y --no-install-recommends make git ca-certificates
git clone https://gitlab.com/gsasl/libntlm.git
cd libntlm
git checkout v1.8
make -f cfg.mk srcdist
sha256sum libntlm-1.8-src.tar.gz 

As the magnum opus or chef-d’œuvre, let’s recreate the full tarball directly from the minimal source-only tarball on Trisquel 11 – replace docker.io/kpengboy/trisquel:11.0 with ubuntu:22.04 if you prefer.

podman run -it --rm docker.io/kpengboy/trisquel:11.0
apt-get update
apt-get install -y --no-install-recommends autoconf automake libtool make wget git ca-certificates
wget https://download.savannah.nongnu.org/releases/libntlm/libntlm-1.8-src.tar.gz
tar xfa libntlm-1.8-src.tar.gz
cd libntlm-v1.8
./bootstrap
./configure
make dist
sha256sum libntlm-1.8.tar.gz

Yay! You should now have great confidence that the release artifacts correspond to what’s in version control and also to what the maintainer intended to release. Your remaining job is to audit the source code for vulnerabilities, including the source code of the dependencies used in the build. You no longer have to worry about auditing the release artifacts.

I find it somewhat amusing that the build infrastructure for Libntlm is now in a significantly better place than the code itself. Libntlm is written in old C style with plenty of string manipulation and uses broken cryptographic algorithms such as MD4 and single-DES. Remember folks: solving supply chain security issues has no bearing on what kind of code you eventually run. A clean gun can still shoot you in the foot.

Side note on naming: GitLab exports tarballs with pathnames libntlm-v1.8/ (i.e., PROJECT-TAG/) and I’ve adopted the same pathnames, which means my libntlm-1.8-src.tar.gz tarballs are bit-by-bit identical to GitLab’s exports and you can verify this with tools like diffoscope. GitLab names the tarball libntlm-v1.8.tar.gz (i.e., PROJECT-TAG.ARCHIVE), which I find too similar to the libntlm-1.8.tar.gz that we also publish. GitHub uses the same git archive style, but unfortunately they have logic that removes the ‘v’ in the pathname, so you will get a tarball with pathname libntlm-1.8/ instead of the libntlm-v1.8/ that GitLab and I use. The content of the tarball is bit-by-bit identical, but the pathname differs, and thus so does the archive. Codeberg (running Forgejo) uses another approach: the tarball is called libntlm-v1.8.tar.gz (after the tag) just like GitLab’s, but the pathname inside the archive is libntlm/; otherwise the produced archive is bit-by-bit identical, including timestamps. Savannah’s CGIT interface uses the archive name libntlm-1.8.tar.gz with pathname libntlm-1.8/, but otherwise the file content is identical. Savannah’s GitWeb interface provides snapshot links that are named after the git commit (e.g., libntlm-a812c2ca.tar.gz with libntlm-a812c2ca/) and I cannot find any tag-based download links at all. Overall, we are so close to getting the SHA256 checksums to match, but fail on the pathname within the archive. I’ve chosen to be compatible with GitLab regarding the content of tarballs, but not regarding archive naming. From a simplicity point of view, it would be nice if everyone used PROJECT-TAG.ARCHIVE for the archive filename and PROJECT-TAG/ for the pathname within the archive. This aspect will probably need more discussion.

Side note on git archive output: It seems different versions of git archive produce different results for the same repository. The git versions in Debian 11, Trisquel 11 and Ubuntu 22.04 behave the same. The git versions in Debian 12, AlmaLinux/RockyLinux 8/9, Alpine, ArchLinux, macOS homebrew, and the upcoming Ubuntu 24.04 behave in another way. Hopefully this will not change that often, but such changes would invalidate reproducibility of these tarballs in the future, forcing you to use an old git release to reproduce the source-only tarball. Alas, GitLab and most other sites appear to be using modern git, so the download tarballs from them would not match my tarballs – even though the content would.

Side note on ChangeLog: ChangeLog files were traditionally manually curated files with the version history of a package. In recent years, several projects moved to generating them dynamically from the git history (using tools like git2cl or gitlog-to-changelog). This has consequences for the reproducibility of tarballs: you need to have the entire git history available! The gitlog-to-changelog tool also produces different output depending on the time zone of the person using it, which arguably is a simple bug that can be fixed. However, this entire approach is incompatible with rebuilding the full tarball from the minimal source-only tarball. It seems Libntlm’s ChangeLog file died on the surgery table here.
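For the time zone issue, one mitigation is to pin TZ when generating the file; a sketch assuming gnulib’s gitlog-to-changelog script is on PATH and run from within the git repository:

TZ=UTC0 gitlog-to-changelog > ChangeLog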

So how would a distribution build these minimal source-only tarballs? I happen to help with the libntlm package in Debian. It has historically used the generated tarballs as the source code to build from. This means that code coming from gnulib is vendored in the tarball. When a security problem is discovered in gnulib code, the security team needs to patch all packages that include that vendored code and rebuild them, instead of merely patching the gnulib package and rebuilding all packages that rely on that particular code. To change this, the Debian libntlm package needs to Build-Depend on Debian’s gnulib package. But there was one problem: similar to most projects that use gnulib, Libntlm depends on a particular git commit of gnulib, and Debian only ships one commit. There is no coordination about which commit to use. I have adopted gnulib in Debian, and added a git bundle to the *_all.deb binary package so that projects that rely on gnulib can pick whatever commit they need. This allows a no-network GNULIB_URL and GNULIB_REVISION approach when running Libntlm’s ./bootstrap with the Debian gnulib package installed. Otherwise libntlm would pick up whatever latest version of gnulib Debian happened to have in the gnulib package, which is not what the Libntlm maintainer intended to be used, and can lead to all sorts of version mismatches (and consequently security problems) over time. Libntlm in Debian is developed and tested on Salsa and there is continuous integration testing of it as well, thanks to the Salsa CI team.

Side note on git bundles: unfortunately there appears to be no reproducible way to export a git repository into one or more files. So one unfortunate consequence of all this work is that the gnulib *.orig.tar.gz tarball in Debian is not reproducible any more. I have tried to get Git bundles to be reproducible but I never got it to work — see my notes in gnulib’s debian/README.source on this aspect. Of course, source tarball reproducibility has nothing to do with binary reproducibility of gnulib in Debian itself, fortunately.

One open question is how to deal with the increased build dependencies that are triggered by this approach. Some people are surprised by this, but I don’t see how to get around it: if you depend on source code for tools in another package to build your package, it is a bad idea to hide that dependency. We’ve done it for a long time through vendored code in non-minimal tarballs. Libntlm isn’t the most critical project from a bootstrapping perspective, so adding git and gnulib as Build-Depends to it will probably be fine. However, consider if this pattern were used for other packages that use gnulib, such as coreutils, gzip, tar, bison etc. (all are using gnulib): they would all Build-Depend on git and gnulib. Cross-building those packages for a new architecture will therefore require git on that architecture first, which gets circular quickly. The dependency on gnulib is real, so I don’t see that going away, and gnulib is an Architecture:all package. However, the dependency on git is merely a consequence of how the Debian gnulib package chose to make all gnulib git commits available to projects: through a git bundle. There are other ways to do this that don’t require the git tool to extract the necessary files, but none that I found practical — ideas welcome!

Finally some brief notes on how this was implemented. Enabling bootstrappable source-only minimal tarballs via gnulib’s ./bootstrap is achieved by using the GNULIB_REVISION mechanism, locking down the gnulib commit used. I have always disliked git submodules because they add extra steps and have complicated interactions with CI/CD. The reason why I gave up git submodules now is that the particular commit to use is not recorded in the git archive output when git submodules are used. So the particular gnulib commit has to be mentioned explicitly in some source code that goes into the git archive tarball. Colin Watson added the GNULIB_REVISION approach to ./bootstrap back in 2018, and now it no longer made sense to continue to use a gnulib git submodule. One alternative is to use ./bootstrap with --gnulib-srcdir or --gnulib-refdir if there is some practical problem with pointing the GNULIB_URL towards a git bundle or with the GNULIB_REVISION in bootstrap.conf.
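To illustrate, a hypothetical bootstrap.conf fragment; the commit hash is a placeholder and the bundle path is only an example of what a distribution package could provide:

# pin the gnulib commit this release was tested against (placeholder hash)
GNULIB_REVISION=0123456789abcdef0123456789abcdef01234567
# optional: fetch from a local git bundle instead of the network
GNULIB_URL=file:///usr/share/gnulib/gnulib.bundle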

The srcdist make rule is simple:

git archive --prefix=libntlm-v1.8/ -o libntlm-1.8-src.tar.gz HEAD

Making the make dist generated tarball reproducible can be more complicated; however, for Libntlm it was sufficient to make sure the modification times of all files were set deterministically to the timestamp of the last commit in the git repository. Interestingly, there seem to be a couple of different ways to accomplish this: Guix doesn’t support minimal source-only tarballs but relies on a .tarball-timestamp file inside the tarball. Paul Eggert explained some time ago what TZDB is using. The approach I’m using now is fairly similar to the one I suggested over a year ago. If there are problems because all files in the tarball now use the same modification time, there is a solution by Bruno Haible that could be implemented.
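As a generic recipe (not necessarily Libntlm’s exact make rule), assuming GNU tar and gzip:

# set every file's mtime to the last commit's timestamp, normalize
# ordering and ownership, and stop gzip from embedding a timestamp
ts=$(git log -1 --format=%ct)
tar --sort=name --owner=0 --group=0 --numeric-owner --mtime="@$ts" \
    -cf - libntlm-1.8/ | gzip -9n > libntlm-1.8.tar.gz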

Side note on git tags: Some people may wonder why not verify a signed git tag instead of verifying a signed tarball of the git archive. Currently most git repositories use SHA-1 for git commit identities, but SHA-1 is not a secure hash function. While current SHA-1 attacks can be detected and mitigated, there are fundamental doubts that a git SHA-1 commit identity uniquely refers to the same content that was intended. Verifying a git tag will never offer the same assurance, since a git tag can be moved or re-signed at any time. Verifying a git commit is better, but then we need to trust SHA-1. Migrating git to SHA-256 would resolve this aspect, but most hosting sites such as GitLab and GitHub do not support this yet. There are other advantages to using signed tarballs instead of signed git commits or git tags as well: e.g., a tar.gz can be a deterministically reproducible, persistent, stable offline storage format, while .git sub-directory trees or git bundles do not offer this property.
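For comparison, the two verification flows look like this:

git verify-tag v1.8      # trusts the SHA-1 tag/commit identity
gpg --verify libntlm-1.8.tar.gz.sig libntlm-1.8.tar.gz   # hashes the released bytes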

Doing continuous testing of all this is critical to make sure things don’t regress. Libntlm’s pipeline definition now produces the generated libntlm-*.tar.gz tarballs and a checksum as a build artifact. Then I added the 000-reproducability job, which compares the checksums and fails on mismatches. You can read its delicate output in the job for the v1.8 release. Right now we insist that builds on Trisquel 11 match Ubuntu 22.04, that PureOS 10 builds match Debian 11 builds, that AlmaLinux 8 builds match RockyLinux 8 builds, and that AlmaLinux 9 builds match RockyLinux 9 builds. As you can see in the pipeline job output, not all platforms lead to the same tarballs, but hopefully this state can be improved over time. There is also partial reproducibility, where the full tarball is reproducible across two distributions but not the minimal tarball, or vice versa.
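The heart of such a comparison job can be as simple as a diff that fails the job on any mismatch; a sketch with hypothetical artifact paths from two earlier build jobs:

diff -u build-trisquel11/libntlm.sha256 build-ubuntu2204/libntlm.sha256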

If this way of working plays out well, I hope to implement it in other projects too.

What do you think? Happy Hacking!

Towards reproducible minimal source code tarballs? On *-src.tar.gz

While the work to analyze the xz backdoor is in progress, several ideas have been suggested to improve the software supply chain ecosystem. Some of those ideas are good, some of the ideas are at best irrelevant and harmless, and some suggestions are plain bad. I’d like to attempt to formalize two ideas, which have been discussed before, but the context in which they can be appreciated has not been as clear as it is today.

  1. Reproducible tarballs. The idea is that published source tarballs should be possible to reproduce independently somehow, and that this should be continuously tested and verified — preferably as part of the upstream project’s continuous integration system (e.g., a GitHub action or GitLab pipeline). While nominally this looks easy to achieve, there are some complex matters in this, for example: what timestamps to use for files in the tarball? I’ve brought up this aspect before.
  2. Minimal source tarballs without generated vendor files. Most GNU Autoconf/Automake-based tarballs contain pre-generated files which are important for bootstrapping on exotic systems that do not have the required dependencies. For the bootstrapping story to succeed, this approach is important to support. However, it has become clear that this practice raises significant costs and risks. Most modern GNU/Linux distributions have all the required dependencies and actually prefer to re-build everything from source code. These pre-generated extra files introduce uncertainty to that process.

My strawman proposal to improve things is to define new tarball format *-src.tar.gz with at least the following properties:

  1. The tarball should allow users to build the project, which is the entire purpose of all this. This means that at least all source code for the project has to be included.
  2. The tarball should be signed, for example with PGP or minisign.
  3. The tarball should be possible to reproduce bit-by-bit by a third party using upstream’s version controlled sources and a pointer to which revision was used (e.g., git tag or git commit).
  4. The tarball should not require an Internet connection to download things.
    • Corollary: every external dependency either has to be explicitly documented as such (e.g., gcc and GnuTLS), or included in the tarball.
    • Observation: This means including all *.po gettext translations which are normally downloaded when building from version controlled sources.
  5. The tarball should contain everything required to build the project from source using as much externally released versioned tooling as possible. This is the “minimal” property lacking today.
    • Corollary: This means including a vendored copy of OpenSSL or libz is not acceptable: link to them as external projects.
    • Open question: How about non-released external tooling such as gnulib or autoconf archive macros? This is a bit more delicate: most distributions just package one current version of gnulib or the autoconf archive, not previous versions. This could change: distributions could package the gnulib git repository (up to some current version) and the autoconf archive git repository, and packages could be set up to extract the version they need (gnulib’s ./bootstrap already supports this via the --gnulib-refdir parameter). However, this is not normally in place.
    • Suggested Corollary: The tarball should contain content from git submodules such as gnulib, and the necessary Autoconf archive M4 macros required by the project.
  6. Similar to how the GNU project specifies the ./configure interface, we need a documented interface for how to bootstrap the project. I suggest using the already well established idiom of running ./bootstrap to set up the package to later be built via ./configure. Of course, some projects do not use the autotools ./configure interface and will not follow this aspect either, but just as most build systems that compete with autotools have instructions on how to build the project, they should document a similar interface for bootstrapping the source tarball to allow building.

If tarballs that achieve the above goals were available from popular upstream projects, distributions could more easily use them instead of current tarballs that include pre-generated content. The advantage would be that the build process is not tainted by “unnecessary” files. We need to develop tools for maintainers to create these tarballs, similar to the make dist command that generates today’s foo-1.2.3.tar.gz files.
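A strawman implementation of such a tool could be as small as a git-archive wrapper; the project name, version and tag below are illustrative:

git archive --prefix=foo-1.2.3/ -o foo-1.2.3-src.tar.gz v1.2.3
gpg -b foo-1.2.3-src.tar.gz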

I think one common argument against this approach will be: why bother with all that, and not just use git-archive outputs? Or avoid the entire tarball approach and move directly towards version-controlled checkouts, referring to upstream releases as a git URL and commit tag or id. One problem with this is that SHA-1 is broken, so placing trust in a SHA-1 identifier is simply not secure. Another counter-argument is that this optimizes for packagers’ benefit at the cost of upstream maintainers: most upstream maintainers do not want to store gettext *.po translations in their source code repository. A compromise between the needs of maintainers and packagers is useful, so this *-src.tar.gz tarball approach is the indirection we need to solve that. Update: In my experiment with source-only tarballs for Libntlm I actually did use git-archive output.

What do you think?

Trisquel on ppc64el: Talos II

The release notes for Trisquel 11.0 “Aramo” mention support for POWER and ARM architectures; however, the download area only contains links for x86, and forum posts suggest there is a lack of instructions on how to run Trisquel on non-x86 hardware.

Since the release of Trisquel 11 I have been busy migrating x86 machines from Debian to Trisquel. One would think that I would be finished after this time period, but re-installing and migrating machines is really time-consuming, especially if you allow yourself to be distracted every time you notice something that Really Ought to be improved. Rabbit holes all the way down. One of my production machines is running Debian 11 “bullseye” on a Talos II Lite machine from Raptor Computing Systems, and migrating the virtual machines running on that host (including the VM that serves this blog) to an x86 machine running Trisquel felt unsatisfying to me. I want to migrate my computing towards hardware that harmonizes with FSF’s Respects Your Freedom and not away from it. Here I had to choose between using the non-free software present in newer Debian or the non-free software implied by most x86 systems: not an easy choice. So I have ignored the dilemma for some time. After all, the machine was running Debian 11 “bullseye”, which was released before Debian started to require use of non-free software. With the end-of-life date for bullseye approaching, it seems that this isn’t a sustainable choice.

There is an open report about providing ppc64el ISOs that was created by Jason Self shortly after the release, but for many months nothing happened. About a month ago, Luis Guzmán mentioned an initial ISO build and I started testing it. The setup has worked well for a month, and with this post I want to contribute instructions on how to get it up and running, since this is still missing.

The setup of my soon-to-be new production machine:

  • Talos II Lite
  • POWER9 18-core v2 CPU
  • Inter-Tech 4U-4410 rack case with ASPOWER power supply
  • 8x32GB DDR4-2666 ECC RDIMM
  • HighPoint SSD7505 (the Rocket 1504 or 1204 would be a more cost-effective choice, but I re-used a component I had laying around)
  • PERC H700 aka LSI MegaRAID 2108 SAS/SATA (also found laying around)
  • 2x1TB NVMe
  • 3x18TB disks

According to the notes in issue 14, the ISO image is available at https://builds.trisquel.org/debian-installer-images/ and the following commands download it, check its integrity, and write it to a USB stick:

wget -q https://builds.trisquel.org/debian-installer-images/debian-installer-images_20210731+deb11u8+11.0trisquel14_ppc64el.tar.gz
tar xfa debian-installer-images_20210731+deb11u8+11.0trisquel14_ppc64el.tar.gz ./installer-ppc64el/20210731+deb11u8+11/images/netboot/mini.iso
echo '6df8f45fbc0e7a5fadf039e9de7fa2dc57a4d466e95d65f2eabeec80577631b7  ./installer-ppc64el/20210731+deb11u8+11/images/netboot/mini.iso' | sha256sum -c
sudo wipefs -a /dev/sdX
sudo dd if=./installer-ppc64el/20210731+deb11u8+11/images/netboot/mini.iso of=/dev/sdX conv=sync status=progress

Sadly, no hash checksums or OpenPGP signatures are published.

Power off your device, insert the USB stick, and power it up, and you will see a Petitboot menu offering to boot from the USB stick. For some reason, the "Expert Install" was the default in the menu, so I instead select "Default Install" for the regular experience. For this post, I will ignore BMC/IPMI, as interacting with it is not necessary. Make sure not to connect the BMC/IPMI ethernet port unless you are willing to enter that dungeon. The VGA console works fine with a normal USB keyboard, and you can choose to use only the second enP4p1s0f1 network card in the network card selection menu.

If you are familiar with Debian netinst ISOs, the installation is straightforward. I complicate the setup by partitioning two RAID1 partitions on the two NVMe sticks, one RAID1 for a 75GB ext4 root filesystem (discard,noatime) and one RAID1 for a 900GB LVM volume group for virtual machines, plus two 20GB swap partitions, one on each of the NVMe sticks (to silence a warning about lack of swap; I’m not sure swap is still a good idea?). The 3x18TB disks use DM-integrity with RAID1; however, the installer does not support DM-integrity, so I had to create it after the installation, roughly as sketched below.
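For the record, a hypothetical sketch of that post-install DM-integrity plus RAID1 setup, simplified to two whole disks with example device names:

# create dm-integrity protected devices, then assemble a RAID1 on top
integritysetup format /dev/sda
integritysetup open /dev/sda int-sda
integritysetup format /dev/sdb
integritysetup open /dev/sdb int-sdb
mdadm --create /dev/md/data --level=1 --raid-devices=2 \
      /dev/mapper/int-sda /dev/mapper/int-sdb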

There are two additional matters worth mentioning:

  • Selecting the apt mirror does not offer the list of well-known Trisquel mirrors that the x86 installer has. Instead I have to input the archive mirror manually, and fortunately the archive.trisquel.org hostname and path values are available as defaults, so I just press enter and fix this after the installation has finished. You may want to have the hostname/path of your local mirror handy, to speed things up.
  • The installer asks me which kernel to use, which the x86 installer does not do. I believe older Trisquel/Ubuntu installers asked this question, but it was gone in aramo on x86. I select the default “linux-image-generic”, which gives me a predictable 5.15 Linux-libre kernel, although you may want to choose “linux-image-generic-hwe-11.0” for a more recent 6.2 Linux-libre kernel. Maybe this is intentional debian-installer behaviour for non-x86 platforms?

I have re-installed the machine a couple of times, and have now finished installing the production setup. I haven’t run into any serious issues, and the system has been stable. Time to wrap up, and celebrate that I now run an operating system aligned with the Free System Distribution Guidelines on hardware that aligns with Respects Your Freedom — Happy Hacking indeed!