Combining Dnsmasq and Unbound

For my home office network I have been using Dnsmasq for some time. Dnsmasq provides me with DNS, DHCP, DHCPv6, and IPv6 Router Advertisement. I run dnsmasq on a Debian Jessie server, but it works similarly with OpenWrt if you want to use a smaller device. My entire /etc/dnsmasq.d/local configuration used to look like this:

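(The address ranges below are placeholders standing in for my real ones.)

dhcp-authoritative
interface=eth1
read-ethers
dhcp-range=192.168.1.100,192.168.1.200,12h
dhcp-range=2001:db8:1::100,2001:db8:1::200,12h
dhcp-option=option6:dns-server,[::]
enable-ra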

Here dhcp-authoritative enables DHCP. interface=eth1 says to listen on eth1 only, which is my internal (IPv4 NAT) network. I try to keep track of the MAC addresses of all my devices in an /etc/ethers file, so I use read-ethers to have dnsmasq give stable IP addresses to them. The dhcp-range statements enable DHCP and DHCPv6 on my internal network. The dhcp-option=option6:dns-server,[::] statement is needed to inform DHCP clients of the DNS resolver’s IPv6 address; otherwise they would only get the IPv4 DNS server address. The enable-ra parameter enables IPv6 router advertisement on the internal network, removing the need to run radvd too, which is useful since I prefer to use copyleft software.

Recently I had a desire to use DNSSEC, and enabled it in Dnsmasq using the following statements:

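(The trust-anchor line carries the root zone’s DS record as published by IANA; the digest below is the 2010 root KSK, so verify it against the current root-anchors rather than copying it from here.)

dnssec
dnssec-check-unsigned
trust-anchor=.,19036,8,2,49AAC11D7B6F6446702E54A1607371607A1A41855200FD2CE1CDDE32F24E8FB5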

The dnssec keyword enables DNSSEC validation in dnsmasq, using the indicated trust-anchor (get the root anchors from IANA). The dnssec-check-unsigned option deserves some more discussion. The dnsmasq manpage describes it as follows:

As a default, dnsmasq does not check that unsigned DNS replies are legitimate: they are assumed to be valid and passed on (without the “authentic data” bit set, of course). This does not protect against an attacker forging unsigned replies for signed DNS zones, but it is fast. If this flag is set, dnsmasq will check the zones of unsigned replies, to ensure that unsigned replies are allowed in those zones. The cost of this is more upstream queries and slower performance.

For example, this means that dnsmasq’s DNSSEC functionality is not secure against active man-in-the-middle attacks between dnsmasq and the DNS server it is using. Even if a zone deployed DNSSEC properly, an attacker could pretend to dnsmasq that the zone was unsigned, and I would get potentially incorrect values in return. We all know that the Internet is not a secure place, and your threat model should include active attackers. I believe this mode should be the default in dnsmasq, and users should have to configure dnsmasq to turn it off if they really want to (with the obvious security warning).

Running with this enabled for a couple of days resulted in frustration about not being able to reach a couple of domains. The behaviour was that my clients would hang indefinitely or get a SERVFAIL, both resulting in lack of service. You can enable query logging in dnsmasq with log-queries, and doing so I noticed three distinct error patterns:

jow13gw dnsmasq 460 - -  forwarded to
jow13gw dnsmasq 460 - -  validation result is BOGUS
jow13gw dnsmasq 547 - -  reply is BOGUS DNSKEY
jow13gw dnsmasq 547 - -  validation result is BOGUS
jow13gw dnsmasq 547 - -  reply is BOGUS DS
jow13gw dnsmasq 547 - -  validation result is BOGUS

The first only happened intermittently, the second did not cause any noticeable problem, and the final one was reproducible. To be fair, I only found the last example after starting to search for problem reports (see a post confirming the bug).

At this point, I had a confirmed bug in dnsmasq that affects my use case. I want to use official Debian packages on this machine, so installing newer versions manually is not an option. So I started to look into alternatives for DNS resolving, and quickly found Unbound. Installing it was easy:

apt-get install unbound

I created a local configuration file in /etc/unbound/unbound.conf.d/local.conf as follows:

server:
	interface: ::1
	interface: 2001:9b0:104:42::2
	access-control: allow
	access-control: ::1 allow
	access-control: allow
	access-control: 2001:9b0:104:42::2/64 allow
	outgoing-interface: 2001:9b0:1:1a04::2
#	log-queries: yes
#	verbosity: 2

The interface keyword determines which IP addresses to listen on; here I used the loopback interface and the local address of the physical network interface for my internal network. The access-control statements allow recursive DNS resolving from those networks. And outgoing-interface specifies my external Internet-connected interface. log-queries and/or verbosity are useful for debugging.
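
Before restarting unbound it is worth running the bundled unbound-checkconf tool, which should report that there are no errors in /etc/unbound/unbound.conf:

unbound-checkconf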

To make things work, dnsmasq has to stop providing DNS services. This can be achieved with the port=0 keyword; however, that also stops dnsmasq from informing DHCP clients about which DNS server to use, so this has to be added back manually. I ended up adding the two following lines to /etc/dnsmasq.d/local:

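(In dnsmasq’s dhcp-option syntax the special address 0.0.0.0 stands for the address of the machine running dnsmasq itself.)

port=0
dhcp-option=option:dns-server,0.0.0.0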

Restarting unbound and dnsmasq now leads to working (and secure) internal DNSSEC-aware name resolution over both IPv4 and IPv6. I can verify that resolution works, and that Unbound verifies signatures and rejects bad domains properly, with the host command as below, or use an online DNSSEC resolver test page, although I’m not sure how much confidence you can place in the result from such a page.

$ host has address mail is handled by 1
$ host
;; connection timed out; no servers could be reached
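
You can also check validation explicitly with dig, if installed: a validated answer from a signed zone carries the “ad” (authenticated data) flag, while a deliberately broken test zone such as dnssec-failed.org should come back with status SERVFAIL. For example:

$ dig ietf.org @::1 | grep flags
$ dig dnssec-failed.org @::1 | grep status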

I use Munin to monitor my services, and I was happy to find a nice Unbound Munin plugin. I installed the file in /usr/share/munin/plugins/ and created a Munin plugin configuration file /etc/munin/plugin-conf.d/unbound as follows:

[unbound*]
user root
env.statefile /var/lib/munin-node/plugin-state/root/unbound.state
env.unbound_conf /etc/unbound/unbound.conf
env.unbound_control /usr/sbin/unbound-control
env.spoof_warn 1000
env.spoof_crit 100000

I ran munin-node-configure --shell | sh to enable it. For the plugin to work, unbound has to be configured as well, so I created /etc/unbound/unbound.conf.d/munin.conf as follows.

server:
	extended-statistics: yes
	statistics-cumulative: no
	statistics-interval: 0
remote-control:
	control-enable: yes
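
After restarting unbound, you can confirm that the control channel and the statistics the plugin relies on are working; unbound-control ships with the unbound package (if the control certificates are missing, run unbound-control-setup first):

unbound-control stats_noreset | head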

The graphs may be viewed at my munin instance.

Cosmos – A Simple Configuration Management System

Back in early 2012 I had been helping with system administration of a number of Debian/Ubuntu-based machines, and the odd Solaris machine, for a couple of years at $DAYJOB. We had a combination of hand-written scripts, documentation notes that we cut’n’paste’d from during installation, and some locally maintained Debian packages for pulling in dependencies and providing some configuration files. As the number of people and machines involved grew, I realized that I wasn’t happy with how these machines were being administered. If one of these machines were to disappear in flames, it would take time (and more importantly, non-trivial manual labor) to get its services up and running again. I wanted a system that could automate the complete configuration of any Unix-like machine. It should require minimal human interaction. I wanted the configuration files to be version controlled. I wanted good security properties. I did not want to rely on a centralized server that would be a single point of failure. It had to be portable and be easy to get to work on new (and very old) platforms. It should be easy to modify a configuration file and get it deployed. I wanted it to be easy to start to use on an existing server. I wanted it to allow for incremental adoption. Surely this must exist, I thought.

During January 2012 I evaluated the existing configuration management systems, like CFEngine, Chef, and Puppet. I don’t recall my reasons for rejecting each individual project, but needless to say I did not find what I was looking for. The reasons for rejecting the projects I looked at ranged from centralization concerns (single-point-of-failure central servers) and bad security (no OpenPGP signing integration) to the feeling that the projects were too complex and hence fragile. I’m sure there were other reasons too.

In February I started going back to my original needs and tried to see if I could abstract something from the knowledge that was in all these notes, script snippets and local dpkg packages. I realized that the essence of what I wanted was one shell script per machine, OpenPGP signed, in a Git repository. I could check out that Git repository on every new machine that I wanted to configure, verify the OpenPGP signature of the shell script, and invoke the script. The script would do everything needed to get the machine into an operational state again, including package installation and configuration file changes. Since I would usually want to modify configuration files on a system even after its initial installation (hey, not everyone is perfect), it was natural to extend this idea to a cron job that did ‘git pull’, verified the OpenPGP signature, and ran the script. The script would then have to be a bit more clever and not redo everything every time.

Since we had many machines, it was obvious that there would be huge code duplication between scripts. It felt natural to think of splitting up the shell script into a directory with many smaller shell scripts, and invoking each shell script in turn. Think of the /etc/init.d/ hierarchy and how it worked with System V init. This would allow reuse of useful snippets across several machines. The next realization was that large parts of the shell script would be there to create configuration files, such as /etc/network/interfaces. It would be easier to modify the content of those files if they were stored as files in a separate directory, an “overlay” stored in a sub-directory overlay/, and copied into the file system’s hierarchy with rsync. The final realization was that it made sense to run one set of scripts before rsync’ing in the configuration files (to be able to install packages or set things up for the configuration files to make sense), and one set of scripts after the rsync (to perform tasks that require some package to be installed and configured). These sets of scripts were called the “pre-tasks” and “post-tasks” respectively, and stored in sub-directories called pre-tasks.d/ and post-tasks.d/.
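
The resulting update cycle is simple enough to sketch in a few lines of shell. The sketch below is an illustration of the idea rather than Cosmos itself; the repository path, the per-host model directory and the git verify-commit step are assumptions:

#!/bin/sh
# A sketch of the update cycle described above; not the actual Cosmos code.
set -e
cd /var/cache/cosmos/repo              # local clone of the model repository
git pull --ff-only                     # fetch the latest machine models
git verify-commit HEAD                 # stop unless the tip is OpenPGP-signed
host=$(hostname -f)
for task in "$host"/pre-tasks.d/*; do  # prepare, e.g. install packages
	if [ -x "$task" ]; then "$task"; fi
done
rsync -a "$host"/overlay/ /            # copy configuration files into place
for task in "$host"/post-tasks.d/*; do # finish, e.g. restart services
	if [ -x "$task" ]; then "$task"; fi
done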

I started putting what would become Cosmos together during February 2012. Incidentally, I had been using etckeeper on our machines, and I had been reading its source code, and it greatly inspired the internal design of Cosmos. The git history shows well how the ideas evolved (even that Cosmos was initially called Eve, although in retrospect I didn’t like the religious connotations), and there were a couple of rewrites on the way, but on the 28th of February I pushed out version 1.0. It was in total 778 lines of code, with at least 200 of those lines being the license boilerplate at the top of each file. Version 1.0 had a debian/ directory and I built the dpkg file and started to deploy it on some machines. There were a couple of small fixes in the next few days, but development stopped on March 5th 2012. We started to use Cosmos, and converted more and more machines to it, and I quickly converted all of my home servers to use it too. And even my laptops. It took until September 2014 to discover the first bug (the fix is a one-liner). Since then there haven’t been any real changes to the source code. It is in daily use today.

The README that comes with Cosmos gives a more hands-on introduction to using it, which I hope will serve as a starting point if the above introduction sparked some interest. I hope to cover more about how to use Cosmos in a later blog post. Since Cosmos does so little on its own, you want to see a Git repository with machine models to make sense of how to use it. If you want to see how the Git repository for my own machines looks, you can see the sjd-cosmos repository. Don’t miss its README at the bottom. In particular, its global/ sub-directory contains some of the foundation, such as OpenPGP key trust handling.

SSH Host Certificates with YubiKey NEO

If you manage a bunch of server machines, you will undoubtedly have run into the following OpenSSH question:

The authenticity of host ' (' can't be established.
RSA key fingerprint is 1b:9b:b8:5e:74:b1:31:19:35:48:48:ba:7d:d0:01:f5.
Are you sure you want to continue connecting (yes/no)?

If the server is a single-user machine, where you are the only person expected to log in, answering “yes” once and then using the ~/.ssh/known_hosts file to record the key fingerprint will (sort of) work and protect you against future man-in-the-middle attacks. I say sort of, since if you want to access the server from multiple machines, you will need to sync the known_hosts file somehow. And once your organization grows larger, and you aren’t the only person that needs to log in, having a policy that everyone just answers “yes” on first connection on all their machines is bad. The risk that someone is able to successfully mount a man-in-the-middle attack against you grows every time someone types “yes” to these prompts.

Setting up one (or more) SSH Certificate Authority (CA) to create SSH Host Certificates, and having your users trust this CA, will allow you and your users to automatically trust the fingerprint of the host through the indirection of the SSH Host CA. I was surprised (but probably shouldn’t have been) to find that deploying this is straightforward. Even setting this up with hardware-backed keys, stored on a YubiKey NEO, is easy. Below I will explain how to set this up for a hypothetical organization where two persons (sysadmins) are responsible for installing and configuring machines.

I’m going to assume that you already have a couple of hosts up and running and that they run the OpenSSH daemon, so they have a /etc/ssh/ssh_host_rsa_key* public/private keypair, and that you have one YubiKey NEO with the PIV applet and that the NEO is in CCID mode. I don’t believe it matters, but I’m running a combination of Debian and Ubuntu machines. The Yubico PIV tool is used to configure the YubiKey NEO, and I will be using OpenSC‘s PKCS#11 library to connect OpenSSH with the YubiKey NEO. Let’s install some tools:

apt-get install yubikey-personalization yubico-piv-tool opensc-pkcs11 pcscd

Every person responsible for signing SSH Host Certificates in your organization needs a YubiKey NEO. For my example, there will only be two persons, but the number could be larger. Each one of them will have to go through the following process.

The first step is to prepare the NEO by mode-switching it to CCID, using a device configuration tool such as yubikey-personalization.

ykpersonalize -m1

Then prepare the PIV applet in the YubiKey NEO. This is covered by the YubiKey NEO PIV Introduction, but I’ll reproduce the commands below. Do this on a disconnected machine, saving all generated files on one or more secure media that you store in a safe.

# $user identifies the sysadmin these credentials belong to; set it first.
# Generate a 24-byte management key, a 6-digit PIN and an up to 8-digit PUK:
key=`dd if=/dev/random bs=1 count=24 2>/dev/null | hexdump -v -e '/1 "%02X"'`
echo $key > ssh-$user-key.txt
pin=`dd if=/dev/random bs=1 count=6 2>/dev/null | hexdump -v -e '/1 "%u"'|cut -c1-6`
echo $pin > ssh-$user-pin.txt
puk=`dd if=/dev/random bs=1 count=6 2>/dev/null | hexdump -v -e '/1 "%u"'|cut -c1-8`
echo $puk > ssh-$user-puk.txt

yubico-piv-tool -a set-mgm-key -n $key
yubico-piv-tool -k $key -a change-pin -P 123456 -N $pin
yubico-piv-tool -k $key -a change-puk -P 12345678 -N $puk

Then generate an RSA private key for the SSH Host CA, and generate a dummy X.509 certificate for that key. The only use for the X.509 certificate is to make PIV/PKCS#11 happy; they want to be able to extract the public key from the smartcard, and do that through the X.509 certificate.

openssl genrsa -out ssh-$user-ca-key.pem 2048
openssl req -new -x509 -batch -key ssh-$user-ca-key.pem -out ssh-$user-ca-crt.pem

You import the key and certificate to the PIV applet as follows:

yubico-piv-tool -k $key -a import-key -s 9c < ssh-$user-ca-key.pem
yubico-piv-tool -k $key -a import-certificate -s 9c < ssh-$user-ca-crt.pem

You now have an SSH Host CA ready to go! The first thing you want to do is to extract the public key for the CA, and you use OpenSSH's ssh-keygen for this, specifying OpenSC's PKCS#11 module.

ssh-keygen -D /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so -e > ssh-$user-ca-key.pub

If you happen to use YubiKey NEO with OpenPGP using gpg-agent/scdaemon, you may get the following error message:

no slots
cannot read public key from pkcs11

The reason is that scdaemon exclusively locks the smartcard, so no other application can access it. You need to kill scdaemon, which can be done as follows:

gpg-connect-agent "SCD KILLSCD" "SCD BYE" /bye

The output from ssh-keygen may look like this:

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCp+gbwBHova/OnWMj99A6HbeMAGE7eP3S9lKm4/fk86Qd9bzzNNz2TKHM7V1IMEj0GxeiagDC9FMVIcbg5OaSDkuT0wGzLAJWgY2Fn3AksgA6cjA3fYQCKw0Kq4/ySFX+Zb+A8zhJgCkMWT0ZB0ZEWi4zFbG4D/q6IvCAZBtdRKkj8nJtT5l3D3TGPXCWa2A2pptGVDgs+0FYbHX0ynD0KfB4PmtR4fVQyGJjJ0MbF7fXFzQVcWiBtui8WR/Np9tvYLUJHkAXY/FjLOZf9ye0jLgP1yE10+ihe7BCxkM79GU9BsyRgRt3oArawUuU6tLgkaMN8kZPKAdq0wxNauFtH

Now all the users in your organization need to add a line to their ~/.ssh/known_hosts as follows:

@cert-authority * ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCp+gbwBHova/OnWMj99A6HbeMAGE7eP3S9lKm4/fk86Qd9bzzNNz2TKHM7V1IMEj0GxeiagDC9FMVIcbg5OaSDkuT0wGzLAJWgY2Fn3AksgA6cjA3fYQCKw0Kq4/ySFX+Zb+A8zhJgCkMWT0ZB0ZEWi4zFbG4D/q6IvCAZBtdRKkj8nJtT5l3D3TGPXCWa2A2pptGVDgs+0FYbHX0ynD0KfB4PmtR4fVQyGJjJ0MbF7fXFzQVcWiBtui8WR/Np9tvYLUJHkAXY/FjLOZf9ye0jLgP1yE10+ihe7BCxkM79GU9BsyRgRt3oArawUuU6tLgkaMN8kZPKAdq0wxNauFtH

Each sysadmin needs to go through this process, and each user needs to add one line for each sysadmin. While you could put the same key/certificate on multiple YubiKey NEOs, to allow users to only have to put one line into their file, dealing with revocation becomes a bit more complicated if you do that. If you have multiple CA keys in use at the same time, you can roll over to new CA keys without disturbing production. Users may also have different policies for different machines, so that not all sysadmins have the power to create host keys for all machines in your organization.
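
For instance, a user’s known_hosts can scope each CA line to the host name patterns it should be trusted for; the patterns and the truncated key blobs below are purely illustrative:

@cert-authority *.example.com ssh-rsa AAAAB3NzaC1yc2E... alice-ca
@cert-authority *.example.com,*.example.net ssh-rsa AAAAB3NzaC1yc2E... bob-ca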

The CA setup is now complete; however, it isn't doing anything on its own. We need to sign some host keys using the CA, and configure the hosts' sshd to use them. What you could do is something like this, for every host $h that you want to create keys for:

scp root@$h:/etc/ssh/ssh_host_rsa_key.pub .
gpg-connect-agent "SCD KILLSCD" "SCD BYE" /bye
ssh-keygen -D /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so -s ssh-$user-ca-key.pub -I $h -h -n $h -V +52w ssh_host_rsa_key.pub
scp ssh_host_rsa_key-cert.pub root@$h:/etc/ssh/

The ssh-keygen command will use OpenSC's PKCS#11 library to talk to the PIV applet on the NEO, and it will prompt you for the PIN. Enter the PIN that you set above. The output of the command would be something like this:

Enter PIN for 'PIV_II (PIV Card Holder pin)': 
Signed host key id "" serial 0 for valid from 2015-06-16T13:39:00 to 2016-06-14T13:40:58

The host now has an SSH Host Certificate installed. To use it, you must make sure that /etc/ssh/sshd_config has the following line:

HostCertificate /etc/ssh/ssh_host_rsa_key-cert.pub

You need to restart sshd to apply the configuration change. If you now try to connect to the host, you will likely still be using the known_hosts fingerprint approach. So remove the fingerprint from your machine:

ssh-keygen -R $h

Now if you attempt to ssh to the host using the -v parameter, you will see the following:

debug1: Server host key: RSA-CERT 1b:9b:b8:5e:74:b1:31:19:35:48:48:ba:7d:d0:01:f5
debug1: Host '' is known and matches the RSA-CERT host certificate.


One aspect that may warrant further discussion is the host keys. Here I only created a host certificate for the hosts' RSA key. You could create host certificates for the DSA, ECDSA and Ed25519 keys as well. The reason I did not is that in this organization we all used GnuPG's gpg-agent/scdaemon with the YubiKey NEO's OpenPGP Card Applet with RSA keys for user authentication. So only the host RSA key is relevant.

Revocation of a YubiKey NEO key is implemented by asking users to drop the corresponding line for one of the sysadmins, and regenerating the host certificates for the hosts that that sysadmin had created certificates for. This is one reason to have at least two organization CAs that users trust for signing host certificates, so they can migrate away from one of them to the other without interrupting operations.

Laptop decision fatigue

I admit defeat. I have put some effort into researching recent laptop models (see my first and second posts). Last week I asked myself what the biggest problem with my current 4+ year old X201 is. I couldn’t articulate any significant concern. So I have bought another second-hand X201 for semi-permanent use at my second office. At ~225 USD/EUR, including another docking station, it is amazing value. I considered the X220-X240, but they have a different docking station and were roughly twice the price; the money saved paid for a Samsung 850 PRO SSD. Thanks everyone for your advice, anyway!

OpenPGP Smartcards and GNOME

The combination of GnuPG and an OpenPGP smartcard has been implemented and working for almost a decade. I recall starting to use it when I received an FSFE Fellowship card in 2006. Today I’m using a YubiKey NEO. Sadly there have been some regressions when using them under GNOME. I recently reinstalled my laptop with Debian Jessie (beta2), and have now taken the time to work through the issue and write down a workaround.

To work with GnuPG and smartcards you install gnupg-agent, scdaemon, pcscd and pcsc-tools. On Debian you can do it like this:

apt-get install gnupg-agent scdaemon pcscd pcsc-tools

Use the pcsc_scan command line tool to make sure pcscd recognizes the smartcard before continuing; if it does not recognize the smartcard, nothing beyond this point will work. The next step is to make sure you have the following line in ~/.gnupg/gpg.conf:

use-agent

Logging out and into GNOME should start gpg-agent for you, through the /etc/X11/Xsession.d/90gpg-agent script. In theory, this should be all that is required. However, when you start a terminal and attempt to use the smartcard through GnuPG you would get an error like this:

jas@latte:~$ gpg --card-status
gpg: selecting openpgp failed: unknown command
gpg: OpenPGP card not available: general error

The reason is that the GNOME Keyring hijacks the GnuPG agent’s environment variables and effectively replaces gpg-agent with gnome-keyring-daemon which does not support smartcard commands (Debian bug #773304). GnuPG uses the environment variable GPG_AGENT_INFO to find the location of the agent socket, and when the GNOME Keyring is active it will typically look like this:

jas@latte:~$ echo $GPG_AGENT_INFO 

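With GNOME Keyring active, the value points at a keyring-owned socket rather than one created by gpg-agent, something like /run/user/1000/keyring-AbCdEf/gpg:0:1 (that path is illustrative; the exact location varies between sessions and releases).
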
If you use GnuPG with a smartcard, I recommend to disable GNOME Keyring’s GnuPG and SSH agent emulation code. This used to be easy to achieve in older GNOME releases (e.g., the one included in Debian Wheezy), through the gnome-session-properties GUI. Sadly there is no longer any GUI for disabling this functionality (Debian bug #760102). The GNOME Keyring GnuPG/SSH agent replacement functionality is invoked through the XDG autostart mechanism, and the documented way to disable system-wide services for a normal user account is to invoke the following commands.

jas@latte:~$ mkdir ~/.config/autostart
jas@latte:~$ cp /etc/xdg/autostart/gnome-keyring-gpg.desktop ~/.config/autostart/
jas@latte:~$ echo 'Hidden=true' >> ~/.config/autostart/gnome-keyring-gpg.desktop 
jas@latte:~$ cp /etc/xdg/autostart/gnome-keyring-ssh.desktop ~/.config/autostart/
jas@latte:~$ echo 'Hidden=true' >> ~/.config/autostart/gnome-keyring-ssh.desktop 

You now need to log out and log in again. When you start a terminal, you can look at the GPG_AGENT_INFO environment variable again, and everything should be working.

jas@latte:~$ echo $GPG_AGENT_INFO 
jas@latte:~$ echo $SSH_AUTH_SOCK 
jas@latte:~$ gpg --card-status
Application ID ...: D2760001240102000060000000420000
jas@latte:~$ ssh-add -L
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDFP+UOTZJ+OXydpmbKmdGOVoJJz8se7lMs139T+TNLryk3EEWF+GqbB4VgzxzrGjwAMSjeQkAMb7Sbn+VpbJf1JDPFBHoYJQmg6CX4kFRaGZT6DHbYjgia59WkdkEYTtB7KPkbFWleo/RZT2u3f8eTedrP7dhSX0azN0lDuu/wBrwedzSV+AiPr10rQaCTp1V8sKbhz5ryOXHQW0Gcps6JraRzMW+ooKFX3lPq0pZa7qL9F6sE4sDFvtOdbRJoZS1b88aZrENGx8KSrcMzARq9UBn1plsEG4/3BRv/BgHHaF+d97by52R0VVyIXpLlkdp1Uk4D9cQptgaH4UAyI1vr cardno:006000000042

That’s it. Resolving this properly involves 1) adding smartcard support to the GNOME Keyring, 2) disabling the GnuPG/SSH replacement code in GNOME Keyring completely, 3) reordering the startup so that gpg-agent supersedes gnome-keyring-daemon instead of vice versa, so that people who installed gpg-agent really get it instead of the GNOME default, or 4) something else. I don’t have a strong opinion on how to solve this, but 3) sounds like a simple way forward.

Offline GnuPG Master Key and Subkeys on YubiKey NEO Smartcard

I have moved to a new OpenPGP key. There are many tutorials and blog posts on GnuPG key generation around, but none of them matched exactly the setup I wanted to have. So I wrote down the steps I took, to remember them if I need to in the future. Briefly my requirements were as follows:

  • The new master GnuPG key is on a USB stick.
  • The USB stick is only ever used on an offline computer.
  • There are subkeys stored on a YubiKey NEO smartcard for daily use.
  • I want to generate the subkeys using GnuPG so I have a backup.
  • Some non-default hash/cipher preferences encoded into the public key.


OpenPGP Key Transition Statement

I have created a new OpenPGP key 54265e8c and will be transitioning away from my old key. If you have signed my old key, I would appreciate signatures on my new key as well. I have created a transition statement that can be downloaded from my web site.

Below is the signed statement.

Hash: SHA512

OpenPGP Key Transition Statement for Simon Josefsson

I have created a new OpenPGP key and will be transitioning away from
my old key.  The old key has not been compromised and will continue to
be valid for some time, but I prefer all future correspondence to be
encrypted to the new key, and will be making signatures with the new
key going forward.

I would like this new key to be re-integrated into the web of trust.
This message is signed by both keys to certify the transition.  My new
and old keys are signed by each other.  If you have signed my old key,
I would appreciate signatures on my new key as well, provided that
your signing policy permits that without re-authenticating me.

The old key, which I am transitioning away from, is:

pub   1280R/B565716F 2002-05-05
      Key fingerprint = 0424 D4EE 81A0 E3D1 19C6  F835 EDA2 1E94 B565 716F

The new key, to which I am transitioning, is:

pub   3744R/54265E8C 2014-06-22
      Key fingerprint = 9AA9 BDB1 1BB1 B99A 2128  5A33 0664 A769 5426 5E8C

The entire key may be downloaded from:

To fetch the full new key from a public key server using GnuPG, run:

  gpg --keyserver --recv-key 54265e8c

If you already know my old key, you can now verify that the new key is
signed by the old one:

  gpg --check-sigs 54265e8c

If you are satisfied that you've got the right key, and the User IDs
match what you expect, I would appreciate it if you would sign my key:

  gpg --sign-key 54265e8c

You can upload your signatures to a public keyserver directly:

  gpg --keyserver --send-key 54265e8c

Or email (possibly encrypted) the output from:

  gpg --armor --export 54265e8c

If you'd like any further verification or have any questions about the
transition please contact me directly.

To verify the integrity of this statement:

  wget -q -O-|gpg --verify

Version: GnuPG v1.4.12 (GNU/Linux)


Creating a small JPEG photo for your OpenPGP key

I’m in the process of moving to a new OpenPGP key, and I want to include a small JPEG image of myself in it. The OpenPGP specification describes, in section 5.12.1 of RFC 4880, how an OpenPGP packet can contain a JPEG image. Unfortunately the document does not require or suggest any properties of images, nor does it warn about excessively large images. The GnuPG manual helpfully asserts that “a very large JPEG will make for a very large key”.

Researching this further, it seems that the proprietary PGP program suggests 120×144 as the maximum size, although I haven’t found an authoritative source for that information. Looking at the GnuPG code, you can see that it suggests around 240×288, in a string saying “Keeping the image close to 240×288 is a good size to use”. Further, there is a warning displayed if the image is above 6144 bytes, saying “This JPEG is really large”.

I think the 6kb warning point is on the low side today; however, without any better-researched recommendation of image size, I’m inclined to go for a 6kb 240×288 image. Achieving this was not trivial; I ended up using GIMP to crop an image, resize it to 240×288, and then export it to JPEG. Choosing the relevant parameters during export is the tricky part. First, make sure to select ‘Show preview in image window’ so that you get a file size estimate and a preview of how the photo will look. I found the following settings useful for reducing size:

  • Disable “Save EXIF data”
  • Disable “Save thumbnail”
  • Disable “Save XMP data”
  • Change “Subsampling” from the default “4:4:4 (best quality)” to “4:2:0 (chroma quartered)”.
  • Try enabling only one of “Optimize” and “Progressive”. Sometimes I get best results disabling one and keeping the other enabled, and sometimes the other way around. I have not seen smaller size with both enabled, nor with both disabled.
  • Smooth the picture a bit to reduce pixel effects and size.
  • Change the quality setting; I had to reduce it to around 25%.

See screenshot below of the settings windows.

GnuPG photo GIMP settings window

Eventually, I managed to get a photo that I was reasonably happy with. It is 240×288 and 6048 bytes large.

GnuPG photo for Simon

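To actually attach the photo to a key, GnuPG’s interactive edit mode provides the addphoto subcommand. A session for my new key would look roughly like this, with photo.jpg standing in for whatever file you exported:

$ gpg --edit-key 54265e8c
gpg> addphoto
Enter JPEG filename for photo ID: photo.jpg
gpg> save
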
If anyone has further information, or opinions, on what image sizes makes sense for OpenPGP photos, let me know. Ideas on how to reduce size of JPEG images further without reducing quality as much would be welcome.

Replicant 4.2 on Samsung S3

Since November 2013 I have been using Replicant on my Samsung S3 as an alternative OS. The experience has been good for everyday use. The limits (due to non-free software components) compared to a “normal” S3 (running the vendor ROM or CyanogenMod) are the lack of GPS/wifi/bluetooth/NFC/front-camera functionality, although it is easy to get some of that working again, including GPS, which is nice for my geocaching hobby. The Replicant software is stable for an Android platform; better than my Nexus 7 (2nd generation) tablet, which I got around the same time, and which runs an unmodified version of Android. The S3 has crashed around ten times in these four months. I’ve lost track of the number of N7 crashes, especially after the upgrade to Android 4.4. I use the N7 significantly less than the S3, reinforcing my impression that Replicant is a stable Android. I have not had any other problem that I couldn’t explain, and have rarely had to reboot the device.

The Replicant project recently released version 4.2 and while I don’t expect the release to resolve any problem for me, I decided it was time to upgrade and learn something new. I initially tried the official ROM images, and later migrated to using my own build of the software (for no particular reason other than that I could).

Before the installation, I wanted to have a full backup of the phone to avoid losing data. I use SMS Backup+ to keep a backup of my call log, SMS and MMS on my own IMAP server. I use oandbackup to take a backup of all software and settings on the phone. I use DAVdroid for my contacts and calendar (using a Radicale server), and reluctantly still use aCal in order to access my Google Calendar (because Google does not implement RFC 5397 properly, so it doesn’t work with DAVdroid). Alas, all that software is not sufficient for backup purposes; for example, photos are still not copied elsewhere. In order to have a complete backup of the phone, I’m using rsync over the Android debug bridge (adb). More precisely, I connect the phone using a USB cable, push an rsyncd configuration file, start the rsync daemon on the phone, forward the TCP/IP port, and then launch rsync locally. The following commands are used:

jas@latte:~$ cat rsyncd.conf
uid = root
gid = root
[root]
path = /
jas@latte:~$ adb push rsyncd.conf /extSdCard/rsyncd.conf
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
0 KB/s (57 bytes in 0.059s)
jas@latte:~$ adb root
jas@latte:~$ adb shell rsync --daemon --no-detach --config=/extSdCard/rsyncd.conf &
jas@latte:~$ adb forward tcp:6010 tcp:873
jas@latte:~$ sudo rsync -av --delete --exclude /dev --exclude /acct --exclude /sys --exclude /proc rsync://localhost:6010/root/ /root/s3-bup/

Now feeling safe that I would not lose any data, I remove the SIM card from my phone (to avoid having calls, SMS or cell data interrupt during the installation) and follow the Replicant Samsung S3 installation documentation. Installation was straightforward. I booted up the newly installed ROM and familiarized myself with it. My first reaction was that the graphics felt a bit slower compared to Replicant 4.0, but it is hard to tell for certain.

After installation, I took a quick rsync backup of the freshly installed phone, to have a starting point for future backups. Since my IMAP and CardDAV/CalDAV servers use certificates signed by CAcert, I first had to install the CAcert trust anchors to get SMS Backup+ and DAVdroid to connect. For some reason it was not sufficient to add only the root CAcert certificate, so I had to add the intermediate CA certificate as well. To load the certs, I invoke the following commands, selecting ‘Install from SD Card’ when the menu is invoked (twice).

adb push root.crt /sdcard/
adb shell am start -n "com.android.settings/.Settings\"\$\"SecuritySettingsActivity"
adb push class3.crt /sdcard/
adb shell am start -n "com.android.settings/.Settings\"\$\"SecuritySettingsActivity"

I restore apps with oandbackup, selecting a set of important apps that I want restored with settings preserved, including aCal, K9, Xabber, c:geo, OsmAnd~, NewsBlur and Google Authenticator. I install SMS Backup+ from F-Droid separately and configure it; SMS Backup+ doesn’t seem to want to restore anything if the app was restored with settings using oandbackup. I install and configure the DAVdroid account with the server URL, and watch it populate my address book and calendar with information.

After organizing the icons on the launcher screen, and changing the wallpaper, I’m up and running with Replicant 4.2. This upgrade effort took me around two evenings to complete, with around half of the time consumed by exploring different ways to do the rsync backup before I settled on the rsync daemon approach. Compared to the last time, when I spent almost two weeks researching various options and preparing for the install, this felt like a swift process.


Necrotizing Fasciitis

Dear World,

On the morning of December 24th I felt an unusual pain in my left hand between the thumb and forefinger. The pain increased and in the afternoon I got a high fever, at some point above 40 degrees Celsius (104 degrees Fahrenheit). I went to the emergency department and was hospitalized during the night between the 24th and 25th of December. On the afternoon of December 26th I underwent surgery to find out what was happening, and was then diagnosed with Necrotizing Fasciitis (the Wikipedia article on NF gives a fair summary), caused by the common streptococcus bacteria (again, see the Wikipedia article on Streptococcus). A popular name for the disease is flesh-eating bacteria. Necrotizing Fasciitis is a rare and aggressive infection, often deadly if left untreated, that can move through the body at speeds of a couple of centimeters per hour.

I have gone through 6 surgeries, leaving wounds all over my left hand and arm. I have felt afraid of what the disease will do to me, anxiety over what will happen in the future, and confusion and uncertainty about how a disease like this can exist and whether I was getting the right treatment, since so little appears to be known about it. The feeling of loneliness, and that nobody is helping or even can help, has also been present. I have experienced pain. Even though pain is something I’m less afraid of (I have a back problem) compared to other feelings, I needed help from several painkillers. I’ve received normal Paracetamol, stronger NSAIDs (e.g., Ketorolac/Toradol), and several opioid painkillers including Alfentanil/Rapifen, Tramadol/Tradolan, OxyContin/OxyNorm, and Morphine. After the first and second surgery, nothing helped and I was still screaming with pain and kicking the bed. After the first surgery, I received a local anesthetic (a plexus block). After the second surgery, the doctors did not want to mask my pain, because signs of pain indicate further growth of the infection, and I was given the pain-dissociative drug Ketamine/Ketalar and the stress-releasing Clonidine/Catapresan. Once the third surgery removed all of the infection, the pain went down, and I experienced many positive feelings. I am very grateful to be alive. I felt a strong sense of inner power when I started to fight back against the disease. I find joy in even the simplest of things, like being able to drink water or seeing trees outside the window. I cried out of happiness when I saw our children’s room full of toys. I have learned many things about the human body, and I am curious by nature, so I look forward to learning more. I hope to be able to draw strength from this incident, to help me prioritize better in my life.

My loving wife Åsa has gone through a nightmare as a consequence of my diagnosis. By day she had to cope with daily life, taking care of our wonderful 1-year-old daughter Ingrid and 3-year-old boy Alfred. All three of them had various degrees of strep throat with fever, caused by the same bacteria, and anyone with young kids knows how intense that alone can be. She gave me strength over the phone. She kept friends and relatives up to date about what happened, with the phone ringing all the time. She worked to get information out of the hospital about my status, sometimes being rudely treated and simply hung up on. After a call with the doctor after the third surgery, when the infection had spread from the hand to the upper arm (5cm away from my torso), she started to plan for a life without me.

My last operation was on Thursday January 2nd and I left hospital the same day. I’m writing this on Saturday January 4th, although some details and external links have been added after that. I have regained use of my arm and hand and am doing rehab to regain muscle control while my body is healing. I’m doing relaxation exercises to control pain and relax muscles, and took the last strong drug yesterday. Currently I take antibiotics (more precisely Clindamycin/Dalacin) and the common Paracetamol-based painkiller Alvedon, together with on-demand use of an also common NSAID containing Ibuprofen (Ipren). My wife and I were even out at a restaurant tonight.

Fortunately I was healthy when this started, and with bi-weekly training sessions for the last 2 years I was physically at the strongest peak of my 38-year-old life (weighing 78kg or 170lb, height 182cm or 6 feet). I started working out to improve back issues, increase strength, and prepare for getting older. Exercise has never been my thing, although I think it is fun to run medium distances (up to 10km).

I want to thank everyone who helped me and our family through this, both professionally and personally, but I don’t know where to start. You know who you are. You are the reason I’m alive.

Naturally, I want to focus on getting well and spending time with my family now. I don’t yet know to what extent I will recover, but the prognosis is good. Don’t expect anything from me in the communities and organizations that I’m active in (e.g., GNU, Debian, IETF, Yubico). I will come back as energy, time and priorities permit.