Automatic Replicant Backup over USB using rsync

I have been using Replicant on the Samsung SIII I9300 for over two years. I have written before on taking a backup of the phone using rsync, but recently I automated my setup as described below. This work was prompted by a screen accident that killed my phone, which made me realize that I hadn’t been taking regular backups. I did not lose any data this time, since typically all content I create on the device is immediately synchronized to my clouds: photos are uploaded by the ownCloud app, SMS Backup+ saves SMS and call logs to my IMAP server, and I use DAVDroid for synchronizing contacts, calendar and task lists with my instance of ownCloud. Still, I strongly believe in regular backups of everything, so it was time to automate this.

For my use-case, taking a backup of the phone whenever I connect it to one of my laptops is sufficient; I typically connect it to a laptop for charging at least every other day. My laptops all run Debian, but this should be applicable to most modern GNU/Linux systems. It is not Replicant-specific, although you need a rooted phone. I thought that automating this would be simple, but I got to learn the ins and outs of systemd and udev in the process, and it ended up taking the better part of an evening.

I started out adding a udev rule and a small script, thinking I could invoke the backup process directly from the udev rule. However, rsync would magically die after running for a few seconds. After an embarrassingly long debugging session, I finally found someone with a similar problem, which led me to a nice writeup on the topic of running long-running services on udev events. I created a file /etc/udev/rules.d/99-android-backup.rules with the following content:

ACTION=="add", SUBSYSTEMS=="usb", ENV{ID_SERIAL_SHORT}=="323048a5ae82918b", TAG+="systemd", ENV{SYSTEMD_WANTS}+="android-backup@$env{ID_SERIAL_SHORT}.service"
ACTION=="add", SUBSYSTEMS=="usb", ENV{ID_SERIAL_SHORT}=="4df9e09c25e75f63", TAG+="systemd", ENV{SYSTEMD_WANTS}+="android-backup@$env{ID_SERIAL_SHORT}.service"

The serial numbers correspond to the device serial numbers of the two devices I wish to back up. The adb devices command will print them for you, and you need to replace my values with the values from your phones. Next I created a systemd unit describing a oneshot service. The file /etc/systemd/system/android-backup@.service has the following content:

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/android-backup %I

The at-sign (“@”) in the service filename signals that this is a service that takes a parameter. I’m not enough of a udev/systemd person to explain these two files using the proper terminology, but at least you can pattern-match and follow the basic idea: the udev rule matches the devices I’m interested in (I don’t want this to happen for every random Android device I attach, hence the matching against known serial numbers) and causes a systemd service with a parameter to be started. The systemd service file describes the script to run and passes on the parameter.
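After creating or editing these two files, udev and systemd need to be told about them. On a systemd-based Debian system the following should be enough, and udevadm monitor is handy for watching the events fire when you plug in the phone:

# Reload udev rules and systemd units after editing the files above.
udevadm control --reload
systemctl daemon-reload
# Optionally watch udev/systemd activity while plugging in the phone.
udevadm monitor --environment --udev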

Now for the juicy part, the script. I have /usr/local/sbin/android-backup with the following content.

#!/bin/bash

DIRBASE=/var/backups/android
export ANDROID_SERIAL="$1"

exec > >(logger) 2>&1

if ! test -d "$DIRBASE-$ANDROID_SERIAL"; then
    echo "could not find directory: $DIRBASE-$ANDROID_SERIAL"
    exit 1
fi

set -x

adb wait-for-device
adb root
adb wait-for-device
adb shell 'printf "address 127.0.0.1\nuid = root\ngid = root\n[root]\n\tpath = /\n" > /mnt/secure/rsyncd.conf'
adb shell rsync --daemon --no-detach --config=/mnt/secure/rsyncd.conf &
adb forward tcp:6010 tcp:873
sleep 2
rsync -av --delete --exclude /dev --exclude /acct --exclude /sys --exclude /proc rsync://localhost:6010/root/ "$DIRBASE-$ANDROID_SERIAL/"
: rc $?
adb forward --remove tcp:6010
adb shell rm -f /mnt/secure/rsyncd.conf

This script warrants a more detailed explanation. Backups are placed under, e.g., /var/backups/android-323048a5ae82918b/ for later off-site backup (you do back up your laptop, right?). You have to create this directory manually, as a safety catch against wildly rsyncing data into non-existing directories. The script logs everything using syslog, so run a tail -F /var/log/syslog& when setting this up. You may want to reduce the verbosity of rsync if you prefer (replace rsync -av with rsync -a). The script runs adb wait-for-device, which, as you rightly guessed, waits for the device to settle. Next adb root is invoked to get root on the device (reading all files from the system naturally requires root). It takes some time to switch, so another wait-for-device call is needed. Next a small rsyncd configuration file is created in /mnt/secure/rsyncd.conf on the phone. The file tells rsync to listen on localhost, run as root, and use / as the path. By default, rsyncd is read-only, so the host will not be able to upload any data over rsync, just read data out. Next rsync is started on the phone. The adb forward command forwards port 6010 on the laptop to port 873 on the phone (873 is the default rsyncd port). Unfortunately, setting up the TCP forward appears to take some time, and adb wait-for-device will not wait for that to complete, hence the ugly sleep 2 at this point. Next is the rsync invocation itself, which just pulls in everything from the phone to the laptop, excluding some usual suspects. The somewhat cryptic : rc $? merely logs the exit code of the rsync process into syslog. Finally we clean up the TCP forward and remove the rsyncd.conf file that was temporarily created.
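To set this up for a new device, create the backup directory first and run the script once by hand to verify that everything works before relying on the udev trigger. With my serial number as the example, that would be something like:

# Create the target directory for the device (use your own serial number).
mkdir /var/backups/android-323048a5ae82918b
# Follow the syslog output in one terminal...
tail -F /var/log/syslog &
# ...and run a first backup manually, passing the serial as the parameter.
/usr/local/sbin/android-backup 323048a5ae82918b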

This setup appears stable to me. I can plug in a phone and a backup will be taken. I can even plug in both my devices at the same time, and the two backups will run in parallel. If I unplug a device, the script or rsync will error out and systemd cleans up.

If anyone has ideas on how to avoid the ugly temporary rsyncd.conf file or the ugly sleep 2, I’m interested. It would also be nice to not have to do the ‘adb root’ dance, and instead have the phone start the rsync daemon somehow when connecting to my laptop. TCP forwarding might be troublesome on a multi-user system, but my laptops aren’t shared with anyone else. Killing rsync on the phone afterwards is probably a good idea too. If you have ideas on how to fix any of this, other feedback, or questions, please let me know!

Combining Dnsmasq and Unbound

For my home office network I have been using Dnsmasq for some time. Dnsmasq provides me with DNS, DHCP, DHCPv6, and IPv6 Router Advertisement. I run dnsmasq on a Debian Jessie server, but it works similarly on OpenWRT if you want to use a smaller device. My entire /etc/dnsmasq.d/local configuration used to look like this:

dhcp-authoritative
interface=eth1
read-ethers
dhcp-range=192.168.1.100,192.168.1.150,12h
dhcp-range=2001:9b0:104:42::100,2001:9b0:104:42::1500
dhcp-option=option6:dns-server,[::]
enable-ra

Here dhcp-authoritative tells dnsmasq that it is the only DHCP server on the network. interface=eth1 says to listen on eth1 only, which is my internal (IPv4 NAT) network. I try to keep track of the MAC addresses of all my devices in a /etc/ethers file, so I use read-ethers to have dnsmasq hand out stable IP addresses to them. The dhcp-range statements enable DHCP and DHCPv6 on my internal network. The dhcp-option=option6:dns-server,[::] statement is needed to inform the DHCP clients of the DNS resolver’s IPv6 address; otherwise they would only get the IPv4 DNS server address. The enable-ra parameter enables IPv6 router advertisement on the internal network, removing the need to also run radvd (useful since I prefer to use copyleft software).
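After changing /etc/dnsmasq.d/local, I check the syntax and restart dnsmasq. On a Debian system with systemd, something like the following should do:

# Check the configuration for syntax errors, then restart dnsmasq.
dnsmasq --test
systemctl restart dnsmasq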

Recently I had a desire to use DNSSEC, and enabled it in Dnsmasq using the following statements:

dnssec
trust-anchor=.,19036,8,2,49AAC11D7B6F6446702E54A1607371607A1A41855200FD2CE1CDDE32F24E8FB5
dnssec-check-unsigned

The dnssec keyword enables DNSSEC validation in dnsmasq, using the indicated trust-anchor (get the root anchors from IANA). The dnssec-check-unsigned option deserves some more discussion. The dnsmasq manpage describes it as follows:

As a default, dnsmasq does not check that unsigned DNS replies are legitimate: they are assumed to be valid and passed on (without the “authentic data” bit set, of course). This does not protect against an attacker forging unsigned replies for signed DNS zones, but it is fast. If this flag is set, dnsmasq will check the zones of unsigned replies, to ensure that unsigned replies are allowed in those zones. The cost of this is more upstream queries and slower performance.

In practice, this means that without this flag, dnsmasq’s DNSSEC functionality is not secure against active man-in-the-middle attacks between dnsmasq and the DNS server it is using. Even if example.org used DNSSEC properly, an attacker could pretend to dnsmasq that the zone was unsigned, and I would get potentially incorrect values in return. We all know that the Internet is not a secure place, and your threat model should include active attackers. I believe this mode should be the default in dnsmasq, and users should have to configure dnsmasq to not be in that mode if they really want to (with the obvious security warning).

Running with this enabled for a couple of days resulted in frustration about not being able to reach a couple of domains. The behaviour was that my clients would either hang indefinitely or get a SERVFAIL, both resulting in lack of service. You can enable query logging in dnsmasq with log-queries, and after doing so I noticed three distinct error patterns:

jow13gw dnsmasq 460 - -  forwarded www.fritidsresor.se to 213.80.101.3
jow13gw dnsmasq 460 - -  validation result is BOGUS
jow13gw dnsmasq 547 - -  reply cloudflare-dnssec.net is BOGUS DNSKEY
jow13gw dnsmasq 547 - -  validation result is BOGUS
jow13gw dnsmasq 547 - -  reply linux.conf.au is BOGUS DS
jow13gw dnsmasq 547 - -  validation result is BOGUS

The first only happened intermittently, the second did not cause any noticeable problem, and the final one was reproducible. To be fair, I only found the last example after starting to search for problem reports (see post confirming bug).

At this point, I had a confirmed bug in dnsmasq that affects my use-case. I want to use official Debian packages on this machine, so installing newer versions manually is not an option. So I started to look into alternatives for DNS resolving, and quickly found Unbound. Installing it was easy:

apt-get install unbound
unbound-control-setup 

I created a local configuration file in /etc/unbound/unbound.conf.d/local.conf as follows:

server:
	interface: 127.0.0.1
	interface: ::1
	interface: 192.168.1.2
	interface: 2001:9b0:104:42::2
	access-control: 127.0.0.1 allow
	access-control: ::1 allow
	access-control: 192.168.1.2/24 allow
	access-control: 2001:9b0:104:42::2/64 allow
	outgoing-interface: 155.4.17.2
	outgoing-interface: 2001:9b0:1:1a04::2
#	log-queries: yes
#	verbosity: 2

The interface keyword determines which IP addresses to listen on; here I used the loopback interface and the local addresses of the physical network interface for my internal network. The access-control statements allow recursive DNS resolving from those networks, and outgoing-interface specifies my external Internet-connected interface. log-queries and/or verbosity are useful for debugging.
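Before restarting, the configuration can be sanity-checked with unbound-checkconf:

# Verify the configuration, then restart unbound to pick it up.
unbound-checkconf
systemctl restart unbound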

To make things work, dnsmasq has to stop providing DNS services. This can be achieved with the port=0 keyword; however, that also stops dnsmasq from telling DHCP clients which DNS server to use, so that option has to be added back manually. I ended up adding the following two lines to /etc/dnsmasq.d/local:

port=0
dhcp-option=option:dns-server,192.168.1.2

Restarting unbound and dnsmasq now leads to working (and secure) internal DNSSEC-aware name resolution over both IPv4 and IPv6. I can verify that resolution works, and that Unbound verifies signatures and rejects bad domains properly, with the host command as below. You can also use an online DNSSEC resolver test page, although I’m not sure how much confidence to place in the result from such a page.

$ host linux.conf.au
linux.conf.au has address 192.55.98.190
linux.conf.au mail is handled by 1 linux.org.au.
$ host sigfail.verteiltesysteme.net
;; connection timed out; no servers could be reached
$ 
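Using dig directly makes the DNSSEC status more visible: a validated answer from Unbound carries the ad (authenticated data) flag in the header, while the deliberately broken test domain results in SERVFAIL. For example, against the resolver at 192.168.1.2:

# A validated answer should show "ad" among the flags in the header.
dig +dnssec linux.conf.au @192.168.1.2
# A domain with a broken signature should return status SERVFAIL.
dig +dnssec sigfail.verteiltesysteme.net @192.168.1.2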

I use Munin to monitor my services, and I was happy to find a nice Unbound Munin plugin. I installed the file in /usr/share/munin/plugins/ and created a Munin plugin configuration file /etc/munin/plugin-conf.d/unbound as follows:

[unbound*]
user root
env.statefile /var/lib/munin-node/plugin-state/root/unbound.state
env.unbound_conf /etc/unbound/unbound.conf
env.unbound_control /usr/sbin/unbound-control
env.spoof_warn 1000
env.spoof_crit 100000

I ran munin-node-configure --shell | sh to enable it. For the plugin to work, Unbound has to be configured as well, so I created /etc/unbound/unbound.conf.d/munin.conf as follows:

server:
	extended-statistics: yes
	statistics-cumulative: no
	statistics-interval: 0
remote-control:
	control-enable: yes

The graphs may be viewed at my munin instance.

Cosmos – A Simple Configuration Management System

Back in early 2012 I had been helping with system administration of a number of Debian/Ubuntu-based machines, and the odd Solaris machine, for a couple of years at $DAYJOB. We had a combination of hand-written scripts, documentation notes that we cut’n’paste’d from during installation, and some locally maintained Debian packages for pulling in dependencies and providing some configuration files. As the number of people and machines involved grew, I realized that I wasn’t happy with how these machines were being administered. If one of these machines were to go up in flames, it would take time (and more importantly, non-trivial manual labor) to get its services up and running again. I wanted a system that could automate the complete configuration of any Unix-like machine. It should require minimal human interaction. I wanted the configuration files to be version controlled. I wanted good security properties. I did not want to rely on a centralized server that would be a single point of failure. It had to be portable and easy to get working on new (and very old) platforms. It should be easy to modify a configuration file and get it deployed. I wanted it to be easy to start using on an existing server. I wanted it to allow for incremental adoption. Surely this must exist, I thought.

During January 2012 I evaluated the existing configuration management systems around, like CFEngine, Chef, and Puppet. I don’t recall my reasons for rejecting each individual project, but needless to say I did not find what I was looking for. The reasons for rejecting the projects I looked at ranged from centralization concerns (single-point-of-failure central servers), bad security (no OpenPGP signing integration), to the feeling that the projects were too complex and hence fragile. I’m sure there were other reasons too.

In February I went back to my original needs and tried to see if I could abstract something from the knowledge that was in all these notes, script snippets and local dpkg packages. I realized that the essence of what I wanted was one shell script per machine, OpenPGP signed, in a Git repository. I could check out that Git repository on every new machine that I wanted to configure, verify the OpenPGP signature of the shell script, and invoke the script. The script would do everything needed to get the machine back into an operational state, including package installation and configuration file changes. Since I would usually want to modify configuration files on a system even after its initial installation (hey, not everyone is perfect), it was natural to extend this idea to a cron job that did ‘git pull’, verified the OpenPGP signature, and ran the script. The script would then have to be a bit more clever and not redo everything every time.

Since we had many machines, it was obvious that there would be huge code duplication between scripts. It felt natural to split the shell script up into a directory with many smaller shell scripts, and invoke each of them in turn. Think of the /etc/init.d/ hierarchy and how it worked with System V init. This would allow re-use of useful snippets across several machines. The next realization was that large parts of the shell script would be about creating configuration files, such as /etc/network/interfaces. It would be easier to modify the content of those files if they were stored as files in a separate directory, an “overlay” stored in a sub-directory overlay/, and copied into the file system’s hierarchy with rsync. The final realization was that it made sense to run one set of scripts before rsync’ing in the configuration files (to be able to install packages or set things up for the configuration files to make sense), and one set of scripts after the rsync (to perform tasks that require some package to be installed and configured). These sets of scripts were called the “pre-tasks” and “post-tasks” respectively, and stored in sub-directories called pre-tasks.d/ and post-tasks.d/.
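To illustrate the structure, a machine model in such a Git repository might look something like the sketch below. This is a hypothetical layout based on the description above, not a copy of any real repository; the numbered script names are just a convention for ordering.

host.example.org/
    pre-tasks.d/
        10-install-packages
    overlay/
        etc/
            network/
                interfaces
    post-tasks.d/
        10-restart-services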

I started putting what would become Cosmos together during February 2012. Incidentally, I had been using etckeeper on our machines, and I had been reading its source code, which greatly inspired the internal design of Cosmos. The git history shows well how the ideas evolved (Cosmos was even initially called Eve, but in retrospect I didn’t like the religious connotations), and there were a couple of rewrites on the way, but on the 28th of February I pushed out version 1.0. It was in total 778 lines of code, with at least 200 of those lines being the license boilerplate at the top of each file. Version 1.0 had a debian/ directory, and I built the dpkg file and started to deploy it on some machines. There were a couple of small fixes in the next few days, but development stopped on March 5th 2012. We started to use Cosmos, and converted more and more machines to it, and I quickly converted all of my home servers too. And even my laptops. It took until September 2014 to discover the first bug (the fix is a one-liner). Since then there haven’t been any real changes to the source code. It is in daily use today.

The README that comes with Cosmos gives a more hands-on introduction to using it, which I hope will serve as a starting point if the above sparked some interest. I hope to cover more about how to use Cosmos in a later blog post. Since Cosmos does so little on its own, you want to see a Git repository with machine models to make sense of how to use it. If you want to see how the Git repository for my own machines looks, have a look at the sjd-cosmos repository. Don’t miss its README at the bottom. In particular, its global/ sub-directory contains some of the foundation, such as OpenPGP key trust handling.

SSH Host Certificates with YubiKey NEO

If you manage a bunch of server machines, you will undoubtedly have run into the following OpenSSH question:

The authenticity of host 'host.example.org (1.2.3.4)' can't be established.
RSA key fingerprint is 1b:9b:b8:5e:74:b1:31:19:35:48:48:ba:7d:d0:01:f5.
Are you sure you want to continue connecting (yes/no)?

If the server is a single-user machine, where you are the only person expected to login on it, answering “yes” once and then using the ~/.ssh/known_hosts file to record the key fingerprint will (sort-of) work and protect you against future man-in-the-middle attacks. I say sort-of, since if you want to access the server from multiple machines, you will need to sync the known_hosts file somehow. And once your organization grows larger, and you aren’t the only person that needs to login, having a policy that everyone just answers “yes” on first connection on all their machines is bad. The risk that someone is able to successfully MITM attack you grows every time someone types “yes” to these prompts.

Setting up one (or more) SSH Certificate Authority (CA) to create SSH Host Certificates, and having your users trust this CA, will allow you and your users to automatically trust the fingerprint of the host through the indirection of the SSH Host CA. I was surprised (but probably shouldn’t have been) to find that deploying this is straightforward. Even setting it up with hardware-backed keys, stored on a YubiKey NEO, is easy. Below I will explain how to set this up for a hypothetical organization where two persons (sysadmins) are responsible for installing and configuring machines.

I’m going to assume that you already have a couple of hosts up and running and that they run the OpenSSH daemon, so they have a /etc/ssh/ssh_host_rsa_key* public/private keypair, and that you have one YubiKey NEO with the PIV applet and that the NEO is in CCID mode. I don’t believe it matters, but I’m running a combination of Debian and Ubuntu machines. The Yubico PIV tool is used to configure the YubiKey NEO, and I will be using OpenSC’s PKCS#11 library to connect OpenSSH with the YubiKey NEO. Let’s install some tools:

apt-get install yubikey-personalization yubico-piv-tool opensc-pkcs11 pcscd

Every person responsible for signing SSH Host Certificates in your organization needs a YubiKey NEO. For my example, there will only be two persons, but the number could be larger. Each one of them will have to go through the following process.

The first step is to prepare the NEO: switch its mode to CCID using a device configuration tool such as yubikey-personalization.

ykpersonalize -m1

Then prepare the PIV applet in the YubiKey NEO. This is covered by the YubiKey NEO PIV Introduction, but I’ll reproduce the commands below. Do this on a disconnected machine, save all generated files on one or more secure media, and store those in a safe.

user=simon
key=`dd if=/dev/random bs=1 count=24 2>/dev/null | hexdump -v -e '/1 "%02X"'`
echo $key > ssh-$user-key.txt
pin=`dd if=/dev/random bs=1 count=6 2>/dev/null | hexdump -v -e '/1 "%u"'|cut -c1-6`
echo $pin > ssh-$user-pin.txt
puk=`dd if=/dev/random bs=1 count=6 2>/dev/null | hexdump -v -e '/1 "%u"'|cut -c1-8`
echo $puk > ssh-$user-puk.txt

yubico-piv-tool -a set-mgm-key -n $key
yubico-piv-tool -k $key -a change-pin -P 123456 -N $pin
yubico-piv-tool -k $key -a change-puk -P 12345678 -N $puk

Then generate an RSA private key for the SSH Host CA, and generate a dummy X.509 certificate for that key. The only use for the X.509 certificate is to make PIV/PKCS#11 happy; they want to be able to extract the public key from the smartcard, and they do that through the X.509 certificate.

openssl genrsa -out ssh-$user-ca-key.pem 2048
openssl req -new -x509 -batch -key ssh-$user-ca-key.pem -out ssh-$user-ca-crt.pem

You import the key and certificate to the PIV applet as follows:

yubico-piv-tool -k $key -a import-key -s 9c < ssh-$user-ca-key.pem
yubico-piv-tool -k $key -a import-certificate -s 9c < ssh-$user-ca-crt.pem

You now have an SSH Host CA ready to go! The first thing you want to do is to extract the public key of the CA, and you use OpenSSH's ssh-keygen for this, specifying OpenSC's PKCS#11 module.

ssh-keygen -D /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so -e > ssh-$user-ca-key.pub

If you happen to use YubiKey NEO with OpenPGP using gpg-agent/scdaemon, you may get the following error message:

no slots
cannot read public key from pkcs11

The reason is that scdaemon exclusively locks the smartcard, so no other application can access it. You need to kill scdaemon, which can be done as follows:

gpg-connect-agent "SCD KILLSCD" "SCD BYE" /bye

The output from ssh-keygen may look like this:

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCp+gbwBHova/OnWMj99A6HbeMAGE7eP3S9lKm4/fk86Qd9bzzNNz2TKHM7V1IMEj0GxeiagDC9FMVIcbg5OaSDkuT0wGzLAJWgY2Fn3AksgA6cjA3fYQCKw0Kq4/ySFX+Zb+A8zhJgCkMWT0ZB0ZEWi4zFbG4D/q6IvCAZBtdRKkj8nJtT5l3D3TGPXCWa2A2pptGVDgs+0FYbHX0ynD0KfB4PmtR4fVQyGJjJ0MbF7fXFzQVcWiBtui8WR/Np9tvYLUJHkAXY/FjLOZf9ye0jLgP1yE10+ihe7BCxkM79GU9BsyRgRt3oArawUuU6tLgkaMN8kZPKAdq0wxNauFtH

Now all the users in your organization need to add a line to their ~/.ssh/known_hosts as follows:

@cert-authority *.example.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCp+gbwBHova/OnWMj99A6HbeMAGE7eP3S9lKm4/fk86Qd9bzzNNz2TKHM7V1IMEj0GxeiagDC9FMVIcbg5OaSDkuT0wGzLAJWgY2Fn3AksgA6cjA3fYQCKw0Kq4/ySFX+Zb+A8zhJgCkMWT0ZB0ZEWi4zFbG4D/q6IvCAZBtdRKkj8nJtT5l3D3TGPXCWa2A2pptGVDgs+0FYbHX0ynD0KfB4PmtR4fVQyGJjJ0MbF7fXFzQVcWiBtui8WR/Np9tvYLUJHkAXY/FjLOZf9ye0jLgP1yE10+ihe7BCxkM79GU9BsyRgRt3oArawUuU6tLgkaMN8kZPKAdq0wxNauFtH

Each sysadmin needs to go through this process, and each user needs to add one line for each sysadmin. While you could put the same key/certificate on multiple YubiKey NEOs, to allow users to only have to put one line into their file, dealing with revocation becomes a bit more complicated if you do that. If you have multiple CA keys in use at the same time, you can roll over to new CA keys without disturbing production. Users may also have different policies for different machines, so that not all sysadmins have the power to create host keys for all machines in your organization.
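As a purely hypothetical illustration of such per-machine policies, a user could scope each sysadmin’s CA to a different host name pattern in ~/.ssh/known_hosts (key material shortened here):

@cert-authority *.example.com ssh-rsa AAAA...key-of-first-sysadmin...
@cert-authority *.lab.example.com ssh-rsa AAAA...key-of-second-sysadmin...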

The CA setup is now complete; however, it isn't doing anything on its own. We need to sign some host keys using the CA, and configure the hosts' sshd to use them. What you could do is something like this, for every host host.example.com that you want to create keys for:

h=host.example.com
scp root@$h:/etc/ssh/ssh_host_rsa_key.pub .
gpg-connect-agent "SCD KILLSCD" "SCD BYE" /bye
ssh-keygen -D /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so -s ssh-$user-ca-key.pub -I $h -h -n $h -V +52w ssh_host_rsa_key.pub
scp ssh_host_rsa_key-cert.pub root@$h:/etc/ssh/

The ssh-keygen command will use OpenSC's PKCS#11 library to talk to the PIV applet on the NEO, and it will prompt you for the PIN. Enter the PIN that you set above. The output of the command would be something like this:

Enter PIN for 'PIV_II (PIV Card Holder pin)': 
Signed host key ssh_host_rsa_key-cert.pub: id "host.example.com" serial 0 for host.example.com valid from 2015-06-16T13:39:00 to 2016-06-14T13:40:58

The host now has an SSH Host Certificate installed. To use it, you must make sure that /etc/ssh/sshd_config has the following line:

HostCertificate /etc/ssh/ssh_host_rsa_key-cert.pub
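To apply the configuration change, sshd needs to be restarted; on a Debian or Ubuntu machine with systemd that is typically:

systemctl restart ssh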

If you now try to connect to the host, you will likely still be using the known_hosts fingerprint approach, so remove the old fingerprint from your machine:

ssh-keygen -R $h

Now if you attempt to ssh to the host with the -v parameter, you will see the following:

debug1: Server host key: RSA-CERT 1b:9b:b8:5e:74:b1:31:19:35:48:48:ba:7d:d0:01:f5
debug1: Host 'host.example.com' is known and matches the RSA-CERT host certificate.

Success!

One aspect that may warrant further discussion is the host keys. Here I only created a host certificate for the hosts' RSA key. You could create host certificates for the DSA, ECDSA and Ed25519 keys as well. The reason I did not do that was that in this organization, we all used GnuPG's gpg-agent/scdaemon with the YubiKey NEO's OpenPGP Card Applet with RSA keys for user authentication. So only the host RSA key is relevant.

Revocation of a YubiKey NEO key is handled by asking users to drop the corresponding line for that sysadmin, and by regenerating the host certificates for the hosts that the sysadmin had signed. This is one reason your organization should have at least two CAs that users trust for signing host certificates, so that you can migrate away from one of them to the other without interrupting operations.

Scrypt in IETF

Colin Percival and I have worked on an internet-draft on scrypt for some time. I realize now that the -00 draft was published over two years ago, turning this effort today somewhat into archeology rather than rocket science. Still, having a published RFC that is easy to refer to from other Internet protocols will hopefully help to establish the point that PBKDF2 alone no longer provides state-of-the-art protection for password hashing.

I have written about password hashing before, where I give a quick introduction to the basic concepts in the context of the well-known PBKDF2 algorithm. The novelty in scrypt is that it is designed to combat brute-force and hardware-accelerated attacks on hashed password databases. Briefly, scrypt expands the password and salt (using PBKDF2 as a component) and then uses that to fill a large array (typically tens or hundreds of megabytes) with the Salsa20 core hash function, first writing it sequentially and then reading it back in a pseudo-random pattern. There are three parameters to the scrypt function: a CPU/memory cost parameter N (varies, typical values are 16384 or 1048576), a blocksize parameter r (typically 8), and a parallelization parameter p (typically a low number like 1 or 16). The process is described in the draft, and there are further discussions in Colin’s original scrypt paper.
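To get a feeling for the memory cost: the large array occupies on the order of 128 * N * r bytes, so the typical parameter choices above translate roughly as follows.

# Approximate size of scrypt's large array: 128 * N * r bytes.
echo $(( 128 * 16384 * 8 ))     # N=16384, r=8: 16777216 bytes, about 16 MiB
echo $(( 128 * 1048576 * 8 ))   # N=1048576, r=8: 1073741824 bytes, about 1 GiB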

The document has been stable for some time, and we are now asking for it to be published. Thus now is a good time to provide us with feedback on the document. The live document on gitlab is available if you want to send us a patch.