Coping with non-free software in Debian

A personal reflection on how I moved from my Debian home to find two new homes with Trisquel and Guix for my own ethical computing, and while doing so settled my dilemma about further Debian contributions.

Debian’s contributions to the free software community have been tremendous. Debian was one of the early distributions in the 1990’s that combined the GNU tools (compiler, linker, shell, editor, and a set of Unix tools) with the Linux kernel and published a free software operating system. Back then there was little guidance on how to publish free software binaries, let alone entire operating systems. There was a lack of established community processes and conflict resolution mechanisms, and a lack of guiding principles to motivate the work. The community building efforts that came about in parallel with the technical work have resulted in a steady flow of releases over the years.

From the work of Richard Stallman and the Free Software Foundation (FSF) during the 1980’s and early 1990’s, there was already an established definition of free software. Inspired by the free software definition, and a belief that a social contract helps to build a community and resolve conflicts, Debian’s social contract (DSC) with the free software community was published in 1997. The DSC included the Debian Free Software Guidelines (DFSG), which directly led to the Open Source Definition.

[Photo: one of my earlier Slackware 3.5" install disk sets, kept for nostalgic reasons.]

I was introduced to GNU/Linux through Slackware in the early 1990’s (oh boy, those nights calculating XFree86 modelines and debugging sendmail.cf) and primarily used RedHat Linux during ca 1995-2003. I switched to Debian during the Woody release cycle, when the original RedHat Linux was abandoned and Fedora launched. It was Debian’s explicit community processes and infrastructure that attracted me. The slow nature of those community processes is also what kept me on RedHat for so long: centralized and dogmatic decision processes often produce quick and effective outcomes, and in my opinion RedHat Linux was technically better than Debian ca 1995-2003. However the RedHat model was not sustainable, and resulted in the RedHat vs Fedora split. Debian caught up, and reached technical stability once its community processes had been grounded. I started participating in the Debian community around late 2006.

My interpretation of Debian’s social contract is that Debian should be a distribution of works licensed 100% under a free license. The Debian community has always been inclusive towards non-free software, creating the contrib/non-free section and permitting use of the bug tracker to help resolve issues with non-free works. This is all explained in the social contract. There has always been a clear boundary between free and non-free work, and there has been a commitment that the Debian system itself would be 100% free.

The concern that RedHat Linux was not 100% free software was not critical to me at the time: I primarily (and happily) ran GNU tools on Solaris, IRIX, AIX, OS/2, Windows etc. Running GNU tools on RedHat Linux was an improvement, and I hadn’t realized it was possible to get rid of all non-free software on my own primary machine. Debian realized that goal for me. I’ve been a believer in that model ever since. I can use Solaris, macOS, Android etc knowing that I have the option of using a 100% free Debian.

While the inclusive approach towards non-free software invites and deserves criticism (some argue that being inclusive to non-inclusive behavior is a bad idea), I believe that Debian’s approach was a successful survival technique: by being inclusive to – and a compromise between – free and non-free communities, Debian has been able to stay relevant and contribute to both environments. If Debian had not served and contributed to the free community, I believe free software people would have stopped contributing. If Debian had rejected non-free works completely, I don’t think the successful Ubuntu distribution would have been based on Debian.

I wrote the majority of the text above back in September 2022, intending to post it as a way to argue for my proposal to maintain the status quo within Debian. I didn’t post it because I felt I was saying the obvious, and the obvious does not need to be repeated, and the rest of the post was just me going down memory lane.

The Debian project was a sustainable producer of a 100% free OS up until Debian 11 bullseye. In the resolution on non-free firmware the community decided to leave the model that had resulted in a 100% free Debian for so long. The goal of Debian is no longer to publish a 100% free operating system; instead this was added: “The Debian official media may include firmware”. Indeed, the Debian 12 bookworm release has confirmed that this is not merely an optional possibility. The Debian community could have published a 100% free Debian in parallel with the non-free Debian, and still been consistent with its newly adopted policy, but chose not to. The result is that Debian’s policies are not consistent with its actions. It doesn’t make sense to claim that Debian is 100% free when the Debian installer contains non-free software. Actions speak louder than words, so I’m left reading the policies as well-intended prose that is no longer used for guidance, but for the peace of mind of people living in ivory towers. And to attract funding, I suppose.

So how to deal with this, on a personal level? I did not have an answer to that back in October 2022 after the vote. It wasn’t clear to me that I would ever want to contribute to Debian under the new social contract that promoted non-free software. I went on vacation from any Debian work. Meanwhile Debian 12 bookworm was released, confirming my fears. I kept coming back to this text, and my only take-away was that it would be unethical for me to use Debian on my machines. Letting actions speak for themselves, I switched to PureOS on my main laptop during October, barely noticing any difference since it is based on Debian 11 bullseye. Back in December, I bought a new laptop and tried Trisquel and Guix on it, as they promise a migration path towards ppc64el that PureOS does not.

While I pondered how to approach my modest Debian contributions, I set out to learn Trisquel and gained trust in it. I migrated one Debian machine after another to Trisquel, and started to use Guix on others. Migration was easy because Trisquel is based on Ubuntu, which is based on Debian. Using Guix has its challenges, but I enjoy its coherent, documented environment. All of my essential self-hosted servers (VM hosts, DNS, e-mail, WWW, Nextcloud, CI/CD builders, backup etc) use Trisquel or Guix now. I’ve migrated many GitLab CI/CD rules to use Trisquel instead of Debian, to have a more ethical computing base for software development and deployment. I wish there were official Guix docker images around.

Time has passed, and when I now think about any Debian contributions, I’m a little less muddled by my disappointment over the exclusion of a 100% free Debian. I realize that today I can use Debian in the same way that I use macOS, Android, RHEL or Ubuntu. And what prevents me from contributing to free software on those platforms? So I will make the occasional Debian contribution again, knowing that it will also indirectly improve Trisquel. To avoid having to install Debian, I need a development environment in Trisquel that allows me to build Debian packages. I have found a recipe for doing this:

# System commands:
sudo apt-get install debhelper git-buildpackage debian-archive-keyring
sudo wget -O /usr/share/debootstrap/scripts/debian-common https://sources.debian.org/data/main/d/debootstrap/1.0.128%2Bnmu2/scripts/debian-common
sudo wget -O /usr/share/debootstrap/scripts/sid https://sources.debian.org/data/main/d/debootstrap/1.0.128%2Bnmu2/scripts/sid
# Run once to create build image:
DIST=sid git-pbuilder create --mirror http://deb.debian.org/debian/ --debootstrapopts "--exclude=usr-is-merged" --basepath /var/cache/pbuilder/base-sid.cow
# Run in a directory with debian/ to build a package:
gbp buildpackage --git-pbuilder --git-dist=sid
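
The base image created above goes stale over time; refreshing it before building should work with something like the following (an assumption based on the git-pbuilder manual, not part of the original recipe):

# Run occasionally to refresh the build image:
DIST=sid git-pbuilder update --basepath /var/cache/pbuilder/base-sid.cow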

How to sustainably deliver a 100% free software binary distribution seems like an open question, and the challenges are not all that different compared to the 1990’s or early 2000’s. I’m hoping Debian will come back to providing a 100% free platform, but my fear is that Debian will compromise even further on the free software ideals rather than the opposite. With arguments similar to those used to add the non-free firmware, Debian could compromise the free software spirit of the Linux boot process (e.g., non-free boot images signed by Debian) and media handling (e.g., web browsers and DRM), as Debian has already done with appstore-like functionality for non-free software (Python pip). To learn about other freedom issues in Debian packaging, browsing Trisquel’s helper scripts may enlighten you.

Debian’s setback and the recent setback for RHEL-derived distributions are sad, and it will be a challenge for these communities to find internal coherence going forward. I wish them the best of luck, as Debian and RHEL are important for the wider free software ecosystem. Let’s see how the communities around Trisquel, Guix and the other FSDG distributions evolve in the future.

Regardless of Debian’s and RHEL’s setbacks, the situation for free software today appears better than it was years ago, which is important to remember! I don’t recall ever being able to install a 100% free OS on a modern laptop and a modern server as easily as I can today.

Happy Hacking!

Addendum 22 July 2023: The original title of this post was Coping with non-free Debian, and there was a thread about it that included feedback on the title. I do agree that my initial title was confrontational, and I’ve changed it to the more specific Coping with non-free software in Debian. I do appreciate all the fine free software that goes into Debian, and hope that this will continue and improve, although I have doubts given the opinions expressed by the majority of developers. For the philosophically inclined, it is interesting to think about what it means to say that a compilation of software is freely licensed. At what point does a compilation of software deserve the labels free vs non-free? Windows probably contains some software that is published as free software; let’s say Windows is 1% free. Apple authors a lot of free software (as a tangent, Apple probably produces more free software than what Debian as an organization produces), so let’s say macOS contains 20% free software. Solaris (or some still maintained derivative like OpenIndiana) is mostly freely licensed these days, isn’t it? Let’s say it is 80% free. Ubuntu and RHEL push that closer to, let’s say, 95% free software. Debian used to be 100% but is now slightly less, at maybe 99%. Trisquel and Guix are at 100%. At what point is it reasonable to call a compilation free? Does Debian deserve to be called freely licensed? Does macOS? Is it even possible to use these labels for compilations in any meaningful way? All numbers are just taken from thin air. It isn’t even clear how this could be measured (binary bytes? lines of code? CPU cycles? etc). The caveat about license review mistakes applies. I ignore Debian’s own claims that Debian is 100% free software, which I believe are inconsistent and no longer true under any reasonable objective analysis. They were not true before the firmware vote either, since Debian ships non-free blobs in the Linux kernel, for example.
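
On the measurement question, one crude dimension that is easy to compute on a Debian-style system is a package count per archive area. A minimal sketch (a hypothetical illustration only: it counts packages rather than bytes, lines of code or CPU cycles, and assumes aptitude is installed):

# Count installed packages, and how many come from the contrib or non-free areas.
total=$(dpkg-query -W -f='${binary:Package}\n' | wc -l)
nonfree=$(aptitude search '~i (~scontrib | ~snon-free)' | wc -l)
echo "installed packages: $total, of which from contrib/non-free: $nonfree"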

How To Trust A Machine

Let’s reflect on some of my recent work that started with understanding Trisquel GNU/Linux, improving transparency into apt-archives, working on reproducible builds of Trisquel, strengthening verification of apt-archives with Sigstore, and finally thinking about security device threat models. A theme in all this is improving methods to have trust in machines, or generally any external entity. While I believe that everything starts by trusting something, usually something familiar and well-known, we need to deal with misuse of that trust that leads to failure to deliver what is desired and expected from the trusted entity. How can an entity behave to invite trust? Let’s argue for some properties that can be quantitatively measured, with a focus on computer software and hardware:

  • Deterministic Behavior – given a set of circumstances, it should behave the same.
  • Verifiability and Transparency – the method (the source code) should be accessible for understanding (compare scientific method) and its binaries verifiable, i.e., it should be possible to verify that the entity actually follows the intended deterministic method (implying efforts like reproducible builds and bootstrappable builds).
  • Accountable – the entity should behave the same for everyone, and deviation should be possible to prove in a way that is hard to deny, implying efforts such as Certificate Transparency and more generic checksum logs like Sigstore and Sigsum.
  • Liberating – the tools and documentation should be available as free software to enable you to replace the trusted entity if so desired. An entity that wants to restrict you from being able to replace the trusted entity is vulnerable to corruption and may stop acting trustworthy. This point of view reinforces that open source misses the point; it has become too common to use trademark laws to restrict re-use of open source software (e.g., firefox, chrome, rust).

Essentially, this boils down to: Trust, Verify and Hold Accountable. To put this dogma in perspective, it helps to understand that this approach may be harmful to human relationships (which could explain the social awkwardness of hackers), but it remains useful as a method to improve the design of computer systems, and a useful method to evaluate safety of computer systems. When a system fails some of the criteria above, we know we have more work to do to improve it.
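
As a minimal illustration of the verify step, here is a sketch of checking that a Debian-style package rebuild is bit-for-bit reproducible (assumptions: deb-src entries are enabled, the usual build tools are installed, and the hello package is only an example):

# Build a package from source and record the checksum of the result:
apt-get source hello
(cd hello-*/ && dpkg-buildpackage -us -uc -b)
sha256sum hello_*.deb > first.sha256

# Clean up, rebuild from scratch, and verify that the second build is identical:
rm -rf hello-*/ hello_*.deb hello_*.buildinfo hello_*.changes
apt-get source hello
(cd hello-*/ && dpkg-buildpackage -us -uc -b)
sha256sum -c first.sha256

When the check fails, diffoscope can explain the difference in human-readable form.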

How far have we come on this journey? Through earlier efforts, we are in a fairly good situation. Richard Stallman through GNU/FSF made us aware of the importance of free software, the Reproducible/Bootstrappable build projects made us aware of the importance of verifiability, and Certificate Transparency highlighted the need for accountable signature logs, leading to efforts like Sigstore for software. None of these efforts would have seen the light of day unless people had written free software and packaged it into distributions that we can use, and built hardware that we can run it on. While there certainly exists more work to be done on the software side, with the recent amazing full-source build of Guix based on a 357-byte hand-written seed, I believe that we are closing that loop on the software engineering side.

So what remains? Some inspiration for further work:

  • Accountable binary software distribution remains unresolved in practice, although we have some software components around (e.g., apt-sigstore and guix git authenticate; see the sketch after this list). What is missing is using them for verification by default and/or improving the signature process to use trustworthy hardware devices, and committing the signatures to transparency logs.
  • Trustworthy hardware to run trustworthy software on remains a challenge, and we owe FSF’s Respects Your Freedom credit for raising awareness of this. Many modern devices require non-free software to work, which fails most of the criteria above and makes them inherently untrustworthy.
  • Verifying rebuilds of currently published binaries on trustworthy hardware is unresolved.
  • Completing a full-source rebuild from a small seed on trustworthy hardware remains to be done, preferably on a platform wildly different from x86, such as Raptor’s Talos II.
  • We need improved security hardware devices and improved established practices for how to use them. For example, while Gnuk on the FST enables a trustworthy software and hardware solution, the best process for using it that I can think of generates the cryptographic keys on a more complex device. Efforts like Tillitis are inspiring here.
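
Returning to the first item above, a rough sketch of what accountable distribution verification can look like today: Guix can check that every commit in a checkout is signed by an authorized key, given a channel introduction. The commit and fingerprint below are placeholders, not real values:

# Verify a Guix checkout against its channel introduction (placeholders shown):
cd ~/src/guix
guix git authenticate <introduction-commit> "<signer-fingerprint>"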

Onwards and upwards, happy hacking!

Update 2023-05-03: Added the “Liberating” property regarding free software, instead of having it be part of the “Verifiability and Transparency”.

How to complicate buying a laptop

I’m about to migrate to a new laptop, having done a brief pre-purchase review of options on Fosstodon and reaching a decision to buy the NovaCustom NV41. Given the rapid launch and decline of Mastodon instances, I thought I’d better summarize my process and conclusion on my self-hosted blog until the fediverse self-hosting situation improves.

Since 2010 my main portable computing device has been the Lenovo X201, which replaced the Dell Precision M65 that I bought in 2006. I have been incredibly happy with the X201, even to the point that in 2015, when I wanted to find a replacement, I couldn’t settle on a decision and eventually realized I couldn’t articulate what was wrong with the X201, and decided to just buy another X201 second-hand for my second office. There is still no deal-breaker with the X201, and I’m doing most of my computing on it, including writing this post. However, today I can better articulate what the X201 lacks that I desire, and the available options on the market have improved since my last attempt in 2015.

Briefly, my desired properties are:

  • Portable – weight under 1.5kg
  • Screen size 9-14″
  • ISO keyboard layout, preferably Swedish layout
  • Mouse trackpad, WiFi, USB and external screen connector
  • Decent market availability: I should be able to purchase it from Sweden and have consumer protection, warranty, and some hope of getting service parts for the device
  • Manufactured and sold by a vendor that is supportive of free software
  • Preferably RJ45 connector (for data center visits)
  • As little proprietary software as possible, inspired by FSF’s Respects Your Freedom
  • Able to run a free operating system

My workload for the machine is Emacs, Firefox, Nextcloud client, GNOME, Evolution (mail & calendar), LibreOffice Calc/Writer, compiling software and some podman/qemu for testing. I have used Debian as the main operating system for the entire life of this laptop, but have experimented with PureOS recently. My current X201 is useful enough for this, although support for 4K displays and a faster machine wouldn’t hurt.

Based on my experience in 2015, which led me to make no decision, I have changed perspective. This is a judgement call and I will not be able to fulfil all criteria. I will have to decide on a balance, and the final choice will include elements that I really dislike, but it will hopefully still be better than nothing. The conflict for me mainly centers around these parts:

  • Non-free BIOS. This is software that runs on the main CPU and has full control of everything. I want this to run free software as much as possible. Coreboot is the main project in this area, although I prefer the more freedom-oriented Libreboot.
  • Proprietary and software-upgradeable parts of the main CPU. This includes CPU microcode that is not distributed as free software. The Intel Management Engine (AMD and other CPU vendors have similar technology) falls into this category as well, and is problematic because it is an entire non-free operating system running within the CPU, with many security and freedom problems. This aspect is explored further in the Libreboot FAQ. Even if these parts can be disabled (Intel ME) or not utilized (CPU microcode), I believe the mere presence of these components in the design of the CPU is a problem, and I would prefer a CPU without these properties.
  • Non-free software in other microprocessors in the laptop. Ultimately, I tend to agree with the FSF’s “secondary processor” argument, but when it is possible to choose between a secondary processor that runs free software and one that runs proprietary software, I would prefer as many secondary processors as possible to run free software. The Libreboot binary blob reduction policy describes a move towards stronger requirements.
  • Non-free firmware that has to be loaded during runtime into CPU or secondary processors. Using Linux-libre solves this but can cause some hardware to be unusable.
  • WiFi, BlueTooth and the physical network interface (NIC/RJ45). These are the most notable examples of the secondary processor problem: they run non-free software and require non-free firmware. Sometimes they may even require non-free drivers, although in recent years this has usually been reduced to requiring non-free firmware.

A simple choice for me would be to buy one of the FSF RYF certified laptops. Right now that list only contains the 10+ year old Lenovo series, and I actually already have an X200 with Libreboot that I bought earlier for comparison. The reasons the X200 didn’t work out as a replacement for me were the lack of a mouse trackpad, concerns about non-free EC firmware, Intel ME uncertainty (is it really neutralized?) and non-free CPU microcode (what are the bugs that it fixes?), but primarily that, for some reason I can’t fully articulate, it feels weird to use a laptop manufactured by Lenovo but modified by third parties to be useful. I believe in market forces to pressure manufacturers into Doing The Right Thing, and feel that there is no incentive for Lenovo to use Libreboot in the future when this market niche is already filled by re-sellers modifying Lenovo laptops. So I’d be happier buying a laptop from someone who is natively supportive of the way I’m computing. I’m sure this aspect could be discussed a lot more, and maybe I’ll come back to do that, and could even reconsider my thinking (the right-to-repair argument is compelling). I will definitely continue to monitor the list of RYF-certified laptops to see if future entries are more suitable options for me.

Eventually I decided to buy the NovaCustom NV41 laptop. It arrived quickly and I’m in the process of setting it up. I hope to write a separate blog post about it next.

Vikings D16 server first impressions

I have bought a 1U server to use as a virtualization platform to host my personal online services (mail, web, DNS, Nextcloud, Icinga, Munin etc). This is the first time I have used a high-end libre hardware device that has been certified under the Respects Your Freedom program by the Free Software Foundation. To inspire others to buy a similar machine, I have written about my experience with it.

The machine I bought has an ASUS KGPE-D16 mainboard with a modified (liberated) BIOS. I bought it from Vikings.net. Ordering the server was uneventful. I ordered it with two AMD 6278 processors (the Wikipedia AMD Opteron page contains useful CPU information), 128GB of ECC RAM, and a PIKE 2008/IMR RAID controller to improve SATA speed (to be verified). I intend to use it with two 1TB Samsung 850 SSDs and two 5TB Seagate ST5000 drives, configured in RAID1 mode. I was worried that the SATA controller(s) would not be able to handle >2TB devices, something I have had bad experiences with on older Dell RAID controllers. The manufacturer wasn’t able to confirm that they would work, but I took the risk and went ahead with the order anyway.
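
One way to realize the intended RAID1 setup is plain Linux software RAID (whether the PIKE controller ends up doing this in hardware instead remains to be verified). A rough sketch, where the device names are assumptions and not necessarily what the machine reports:

# Hypothetical software RAID1 layout (device names are assumptions):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb   # 1TB SSD mirror
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd   # 5TB HDD mirror
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # persist the array configuration
update-initramfs -u                              # so the arrays assemble at boot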

One of the order configuration choices was which BIOS to use. I chose their recommended “Petitboot & Coreboot (de-blobbed)” option. The other choices were “Coreboot (de-blobbed)” and “Libreboot”. I am still learning about the BIOS alternatives, and my goal is to compare the various alternatives and eventually compile my own preferred choice. The choice of BIOS still leaves me with a desire to understand more. Petitboot appears more advanced, and has a real embedded Linux kernel and a small rescue system on it (hence it requires a larger 16MB BIOS chip). Coreboot is a well-known project, but it appears it does not have a strict FOSS policy, so there is non-free code in it. Libreboot is a de-blobbed Coreboot, and appears to fit the bill for me, but it does not appear to have a large community around it and might not be as updated as Coreboot.

The PIKE 2008 controller card did not fit in the 1U case that Vikings.net had found for me, so someone on their side must have had a nice day of hardware hacking. The cooler for one chip had a dent in it, which could imply damage to the chip or mainboard. The chip is close to the RAID controller where they modified the 1U case, so I was worried that some physical force had been applied there.

First impressions of using the machine for a couple of days:

  • The graphical installation of Debian 9.x stretch does not start. There is an X11 stack backtrace when booting the ISO netinst image, and I don’t know how to switch the installer into text mode from within the Petitboot boot menu.
  • Petitboot does not appear to detect a bootable system inside a RAID partition, which I have reported. I am now using a raw ext4 /boot partition on one of the SSDs to boot.
  • Debian 8.x jessie installs fine, since it uses text-mode. See my jessie installation report.
  • The graphical part of GRUB in Debian 8.x breaks the graphics output, so I can’t see the GRUB screen or interact with the booted Debian installation.
  • Reboot time is around 2 minutes and 20 seconds between when rsyslogd shuts down and when it starts again.
  • On every other boot (it is fairly stable at 50%) I get the following kernel log message every other second. The 00:14.0 device is the SBx00 SMBus Controller according to lspci, but what this means is a mystery to me.
    AMD-Vi: Event logged [IO_PAGE_FAULT device=00:14.0 domain=0x000a address=0x000000fdf9103300 flags=0x0030]
    

That’s it for now! My goal is to get Debian 9.x stretch installed on the machine and perform some heavy duty load testing of the machine before putting it into production. Expect an update if I discover something interesting!