Let’s reflect on some of my recent work: understanding Trisquel GNU/Linux, improving transparency into apt-archives, working on reproducible builds of Trisquel, strengthening verification of apt-archives with Sigstore, and finally thinking about security device threat models. A theme in all of this is improving methods for placing trust in machines, or generally in any external entity. While I believe that everything starts by trusting something, usually something familiar and well-known, we need to deal with misuse of that trust, which leads to failure to deliver what is desired and expected of the trusted entity. How can an entity behave to invite trust? Let’s argue for some properties that can be quantitatively measured, with a focus on computer software and hardware:
- Deterministic Behavior – given a set of circumstances, it should behave the same.
- Verifiability and Transparency – the method (the source code) should be accessible for understanding (compare the scientific method) and its binaries verifiable, i.e., it should be possible to verify that the entity actually follows the intended deterministic method (implying efforts like reproducible builds and bootstrappable builds; a minimal verification check is sketched after this list).
- Accountable – the entity should behave the same for everyone, and deviation should be possible to prove in a way that is hard to deny, implying efforts such as Certificate Transparency and more generic checksum logs like Sigstore and Sigsum.
- Liberating – the tools and documentation should be available as free software, so that you can replace the trusted entity if you so desire. An entity that wants to restrict you from replacing it is vulnerable to corruption and may stop acting trustworthily. This point of view reinforces that open source misses the point; it has become too common to use trademark law to restrict re-use of open source software (e.g., Firefox, Chrome, Rust).
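To illustrate the “Verifiability” property, here is a minimal sketch, in Python with hypothetical file names, of the basic reproducible-builds check: rebuild the package yourself and compare digests with what the archive published. It is deliberately simplistic; real tooling also records and replays the build environment, and uses tools like diffoscope to explain any differences.

```python
# Minimal sketch: check that an independently rebuilt artifact is
# bit-for-bit identical to the published binary, which is the core
# promise of reproducible builds. File paths are hypothetical examples.
import hashlib

def sha256_file(path: str) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

published = sha256_file("hello_1.0_amd64.deb")        # what the archive ships
rebuilt = sha256_file("rebuild/hello_1.0_amd64.deb")  # what we built ourselves

if published == rebuilt:
    print("OK: rebuild matches the published binary")
else:
    print(f"MISMATCH: {published} != {rebuilt}")
```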
Essentially, this boils down to: Trust, Verify, and Hold Accountable. To put this dogma in perspective, it helps to understand that this approach may be harmful to human relationships (which could explain the social awkwardness of hackers), but it remains useful as a method both to improve the design of computer systems and to evaluate their safety. When a system fails some of the criteria above, we know we have more work to do to improve it.
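To make the “Verify and Hold Accountable” part concrete: the transparency logs mentioned above (Certificate Transparency, Sigstore, Sigsum) all rest on Merkle-tree inclusion proofs. Below is a minimal sketch, assuming RFC 6962-style hashing and helper names of my own choosing, of how a client can recompute a log’s root hash from a single entry and its audit path; this is not code from any of those projects.

```python
# Sketch of RFC 6962-style Merkle tree inclusion proof verification,
# the mechanism transparency logs use to make misbehavior provable.
import hashlib

def leaf_hash(entry: bytes) -> bytes:
    # Leaves are domain-separated with a 0x00 prefix (RFC 6962).
    return hashlib.sha256(b"\x00" + entry).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    # Interior nodes are domain-separated with a 0x01 prefix (RFC 6962).
    return hashlib.sha256(b"\x01" + left + right).digest()

def verify_inclusion(entry: bytes, index: int, tree_size: int,
                     proof: list[bytes], root: bytes) -> bool:
    """Recompute the root from a leaf and its audit path; True if it matches."""
    if index >= tree_size:
        return False
    fn, sn = index, tree_size - 1
    r = leaf_hash(entry)
    for sibling in proof:
        if sn == 0:
            return False
        if fn % 2 == 1 or fn == sn:
            r = node_hash(sibling, r)
            while fn % 2 == 0 and fn != 0:
                fn >>= 1
                sn >>= 1
        else:
            r = node_hash(r, sibling)
        fn >>= 1
        sn >>= 1
    return sn == 0 and r == root

# Tiny self-test over a two-leaf tree.
a, b = b"first entry", b"second entry"
root = node_hash(leaf_hash(a), leaf_hash(b))
assert verify_inclusion(a, 0, 2, [leaf_hash(b)], root)
assert verify_inclusion(b, 1, 2, [leaf_hash(a)], root)
```

If the recomputed root matches the signed root the log published, the entry is provably included, and the log cannot later deny it without forking its tree.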
How far have we come on this journey? Thanks to earlier efforts, we are in a fairly good situation. Richard Stallman, through GNU and the FSF, made us aware of the importance of free software; the Reproducible and Bootstrappable Builds projects made us aware of the importance of verifiability; and Certificate Transparency highlighted the need for accountable signature logs, leading to efforts like Sigstore for software. None of these efforts would have seen the light of day unless people had written free software, packaged it into distributions that we can use, and built hardware that we can run it on. While there certainly is more work to be done on the software side, with the recent amazing full-source bootstrap of Guix from a 357-byte hand-written seed, I believe we are closing that loop on the software engineering side.
So what remains? Some inspiration for further work:
- Accountable binary software distribution remains unresolved in practice, although we have some software components around (e.g., apt-sigstore and `guix git authenticate`). What is missing is using them for verification by default and/or improving the signature process to use trustworthy hardware devices, and committing the signatures to transparency logs; see the sketch after this list for the client-side verification step.
- Trustworthy hardware to run trustworthy software on remains a challenge, and we owe FSF’s Respects Your Freedom program credit for raising awareness of this. Many modern devices require non-free software to work, which fails most of the criteria above and makes them inherently untrustworthy.
- Verifying rebuilds of currently published binaries on trustworthy hardware is unresolved.
- Completing a full-source rebuild from a small seed on trustworthy hardware remains to be done, preferably on a platform wildly different from x86, such as Raptor’s Talos II.
- We need improved security hardware devices and better established practices for how to use them. For example, while Gnuk on the FST-01 enables a trustworthy software and hardware solution, the best process for using it that I can think of generates the cryptographic keys on a more complex device. Efforts like Tillitis are inspiring here.
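Returning to the first bullet above, here is a hedged sketch of the client-side verification step that should happen by default: checking a detached Ed25519 signature over a downloaded artifact, using the pyca/cryptography library. The file names and the pinned key are hypothetical, and a complete design would additionally require proof that the signature is committed to a transparency log.

```python
# Sketch of the verification step accountable binary distribution needs
# by default: check a detached Ed25519 signature over an artifact.
# File names and the pinned key are hypothetical; a real deployment
# would also demand proof that this signature appears in a transparency
# log (the piece argued above to be missing).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

PINNED_KEY = bytes.fromhex(
    "d75a980182b10ab7d54bfed3c964073a0ee172f3daa62325af021a68f707511a"
)  # 32-byte raw Ed25519 public key (example value only)

def verify_artifact(artifact_path: str, signature_path: str) -> bool:
    """Return True if the detached signature matches the pinned key."""
    public_key = Ed25519PublicKey.from_public_bytes(PINNED_KEY)
    with open(artifact_path, "rb") as f:
        data = f.read()
    with open(signature_path, "rb") as f:
        signature = f.read()
    try:
        public_key.verify(signature, data)
        return True
    except InvalidSignature:
        return False

if verify_artifact("hello_1.0_amd64.deb", "hello_1.0_amd64.deb.sig"):
    print("signature OK; next step: check transparency log inclusion")
```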
Onwards and upwards, happy hacking!
Update 2023-05-03: Added the “Liberating” property regarding free software, instead of having it be part of “Verifiability and Transparency”.
Probably worth listing the Betrusted project (https://betrusted.io/) and its currently available Precursor hardware (https://www.crowdsupply.com/sutajio-kosagi/precursor).
I have “issues” with some of their work, but they’re clearly trying to make progress on many of the concerns you list.
Thanks for the link, it looks quite interesting, although they don’t talk about free software, and it seems difficult to tell how much proprietary software is involved in creating/using a device like that. It looks good on the surface, but a lot of devices with hard requirements on blobs look good on the surface too… /Simon
My understanding of bunnie’s priorities is an emphasis on auditable and trustable source, not free licenses. Not much practical difference there; most software involved is BSD or BSD-ish.
While the toolchain that generates the final bitfile and ROM contents is open source (Yosys, nextpnr, Rust, …), it is BIG. I don’t know how to manage that level of complexity in a trustworthy manner.