OpenPGP smartcard under GNOME on Debian 10 Buster

Debian Buster is almost released, and today I celebrate midsummer by installing a pre-release of it on my Lenovo X201 laptop. Everything went smoothly, except for the usual issues with smartcards under GNOME. I use an FST-01G running Gnuk, but the same issues apply to all OpenPGP cards, including YubiKeys. I wrote about this problem for earlier releases; read Smartcards on Debian 9 Stretch and Smartcards on Debian 8 Jessie. Some things have changed – now GnuPG's internal CCID support works, and dirmngr is installed by default when you install Debian with GNOME. I thought I'd write a new post for the new release.

After installing Debian and logging into GNOME, I start a terminal and attempt to use the smartcard as follows.

jas@latte:~$ gpg --card-status
gpg: error getting version from 'scdaemon': No SmartCard daemon
gpg: OpenPGP card not available: No SmartCard daemon
jas@latte:~$ 

The reason is that the scdaemon package is not installed. Install it as follows.

jas@latte:~$ sudo apt-get install scdaemon

After this, gpg --card-status works. It is now using GnuPG's internal CCID library, which appears to be working. The pcscd package is not required to get things working any more — however, installing it also works, and you might need pcscd if you use other applications that talk to the smartcard.

jas@latte:~$ gpg --card-status

Reader ...........: Free Software Initiative of Japan Gnuk (FSIJ-1.2.14-67252015) 00 00
Application ID ...: D276000124010200FFFE672520150000
Version ..........: 2.0
Manufacturer .....: unmanaged S/N range
Serial number ....: 67252015
Name of cardholder: Simon Josefsson
Language prefs ...: sv
Sex ..............: man
URL of public key : https://josefsson.org/key-20190320.txt
Login data .......: jas
Signature PIN ....: inte tvingad
Key attributes ...: ed25519 cv25519 ed25519
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 3 3
Signature counter : 710
KDF setting ......: off
Signature key ....: A3CC 9C87 0B9D 310A BAD4  CF2F 5172 2B08 FE47 45A2
      created ....: 2019-03-20 23:40:49
Encryption key....: A9EC 8F4D 7F1E 50ED 3DEF  49A9 0292 3D7E E76E BD60
      created ....: 2019-03-20 23:40:26
Authentication key: CA7E 3716 4342 DF31 33DF  3497 8026 0EE8 A9B9 2B2B
      created ....: 2019-03-20 23:40:37
General key info..: [none]
jas@latte:~$ 
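
As mentioned above, pcscd is optional with Buster's GnuPG, but if you want it for other smartcard applications, installing it does no harm. A sketch, assuming the Debian packages pcscd and pcsc-tools (the latter provides pcsc_scan for a quick reader test; press Ctrl-C to stop it):

jas@latte:~$ sudo apt-get install pcscd pcsc-tools
jas@latte:~$ pcsc_scan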

As before, using the key does not work right away:

jas@latte:~$ echo foo|gpg -a --sign
gpg: no default secret key: No public key
gpg: signing failed: No public key
jas@latte:~$ 

This is because GnuPG does not have the public key that corresponds to the private key inside the smartcard.

jas@latte:~$ gpg --list-keys
jas@latte:~$ gpg --list-secret-keys
jas@latte:~$ 

You may retrieve your public key from the clouds as follows. With Debian Buster, the dirmngr package is installed by default, so there is no need to install it. Alternatively, if you configured your smartcard with a public key URL that works, you may type "fetch" into the gpg --card-edit interactive interface (see the sketch after the transcript below). This could be considered slightly more reliable (at least from a self-hosting point of view), because it uses your configured URL for retrieving the public key rather than trusting the clouds.

jas@latte:~$ gpg --recv-keys "A3CC 9C87 0B9D 310A BAD4  CF2F 5172 2B08 FE47 45A2"
gpg: key D73CF638C53C06BE: public key "Simon Josefsson <simon@josefsson.org>" imported
gpg: marginals needed: 3  completes needed: 1  trust model: pgp
gpg: depth: 0  valid:   2  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 2u
gpg: next trustdb check due at 2019-10-22
gpg: Total number processed: 1
gpg:               imported: 1
jas@latte:~$ 
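
For the card-edit alternative mentioned above, a minimal sketch looks like this (interactive output omitted); the fetch command downloads the key from the "URL of public key" field shown by gpg --card-status:

jas@latte:~$ gpg --card-edit
gpg/card> fetch
gpg/card> quit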

Now signing with the smartcard works! Yay! Btw: compare the output size with the output size in the previous post to understand the size advantage of Ed25519 over RSA.

jas@latte:~$ echo foo|gpg -a --sign
-----BEGIN PGP MESSAGE-----

owGbwMvMwCEWWKTN8c/ddRHjaa4khlieP//S8vO5OkpZGMQ4GGTFFFkWn5nTzj3X
kGvXlfP6MLWsTCCFDFycAjARscUM/5MnXTF9aSG4ScVa3sDiB2//nPSVz13Mkpbo
nlzSezowRZrhn+Ky7/O6M7XljzzJvtJhfPvOyS+rpyqJlD+buumL+/eOPywA
=+WN7
-----END PGP MESSAGE-----

As before, encrypting to myself does not work smoothly because of the trust setting on the public key. Witness the problem here:

jas@latte:~$ echo foo|gpg -a --encrypt -r simon@josefsson.org
gpg: 02923D7EE76EBD60: There is no assurance this key belongs to the named user

sub  cv25519/02923D7EE76EBD60 2019-03-20 Simon Josefsson <simon@josefsson.org>
 Primary key fingerprint: B1D2 BD13 75BE CB78 4CF4  F8C4 D73C F638 C53C 06BE
      Subkey fingerprint: A9EC 8F4D 7F1E 50ED 3DEF  49A9 0292 3D7E E76E BD60

It is NOT certain that the key belongs to the person named
in the user ID.  If you *really* know what you are doing,
you may answer the next question with yes.

Use this key anyway? (y/N) 
gpg: signal Interrupt caught ... exiting

jas@latte:~$

You update the trust setting with the gpg --edit-key command. Take note that this is not the general way of getting rid of the “There is no assurance this key belongs to the named user” warning — using an ultimate trust setting is normally only relevant for your own keys, which is the case here.

jas@latte:~$ gpg --edit-key simon@josefsson.org
gpg (GnuPG) 2.2.12; Copyright (C) 2018 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Secret subkeys are available.

pub  ed25519/D73CF638C53C06BE
     created: 2019-03-20  expires: 2019-10-22  usage: SC  
     trust: unknown       validity: unknown
ssb  cv25519/02923D7EE76EBD60
     created: 2019-03-20  expires: 2019-10-22  usage: E   
     card-no: FFFE 67252015
ssb  ed25519/80260EE8A9B92B2B
     created: 2019-03-20  expires: 2019-10-22  usage: A   
     card-no: FFFE 67252015
ssb  ed25519/51722B08FE4745A2
     created: 2019-03-20  expires: 2019-10-22  usage: S   
     card-no: FFFE 67252015
[ unknown] (1). Simon Josefsson <simon@josefsson.org>

gpg> trust
pub  ed25519/D73CF638C53C06BE
     created: 2019-03-20  expires: 2019-10-22  usage: SC  
     trust: unknown       validity: unknown
ssb  cv25519/02923D7EE76EBD60
     created: 2019-03-20  expires: 2019-10-22  usage: E   
     card-no: FFFE 67252015
ssb  ed25519/80260EE8A9B92B2B
     created: 2019-03-20  expires: 2019-10-22  usage: A   
     card-no: FFFE 67252015
ssb  ed25519/51722B08FE4745A2
     created: 2019-03-20  expires: 2019-10-22  usage: S   
     card-no: FFFE 67252015
[ unknown] (1). Simon Josefsson <simon@josefsson.org>

Please decide how far you trust this user to correctly verify other users' keys
(by looking at passports, checking fingerprints from different sources, etc.)

  1 = I don't know or won't say
  2 = I do NOT trust
  3 = I trust marginally
  4 = I trust fully
  5 = I trust ultimately
  m = back to the main menu

Your decision? 5
Do you really want to set this key to ultimate trust? (y/N) y

pub  ed25519/D73CF638C53C06BE
     created: 2019-03-20  expires: 2019-10-22  usage: SC  
     trust: ultimate      validity: unknown
ssb  cv25519/02923D7EE76EBD60
     created: 2019-03-20  expires: 2019-10-22  usage: E   
     card-no: FFFE 67252015
ssb  ed25519/80260EE8A9B92B2B
     created: 2019-03-20  expires: 2019-10-22  usage: A   
     card-no: FFFE 67252015
ssb  ed25519/51722B08FE4745A2
     created: 2019-03-20  expires: 2019-10-22  usage: S   
     card-no: FFFE 67252015
[ unknown] (1). Simon Josefsson <simon@josefsson.org>
Please note that the shown key validity is not necessarily correct
unless you restart the program.

gpg> quit
jas@latte:~$

Confirm that gpg --list-keys indicates that the key is now trusted; encrypting to yourself should then work.

jas@latte:~$ gpg --list-keys
/home/jas/.gnupg/pubring.kbx
----------------------------
pub   ed25519 2019-03-20 [SC] [expires: 2019-10-22]
      B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE
uid           [ultimate] Simon Josefsson <simon@josefsson.org>
sub   ed25519 2019-03-20 [A] [expires: 2019-10-22]
sub   ed25519 2019-03-20 [S] [expires: 2019-10-22]
sub   cv25519 2019-03-20 [E] [expires: 2019-10-22]

jas@latte:~$ gpg --list-secret-keys
/home/jas/.gnupg/pubring.kbx
----------------------------
sec#  ed25519 2019-03-20 [SC] [expires: 2019-10-22]
      B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE
uid           [ultimate] Simon Josefsson <simon@josefsson.org>
ssb>  ed25519 2019-03-20 [A] [expires: 2019-10-22]
ssb>  ed25519 2019-03-20 [S] [expires: 2019-10-22]
ssb>  cv25519 2019-03-20 [E] [expires: 2019-10-22]

jas@latte:~$ echo foo|gpg -a --encrypt -r simon@josefsson.org
gpg: checking the trustdb
gpg: marginals needed: 3  completes needed: 1  trust model: pgp
gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
gpg: next trustdb check due at 2019-10-22
-----BEGIN PGP MESSAGE-----

hF4DApI9fuduvWASAQdA4FIwM27EFqNK1I5eZERaZVDAXJDmYLZQHjZD8TexT3gw
7SDaeTLm7s0QSyKtsRugRpex6eSVhfA3WG8fUOyzbNv4o7AC/TQdhZ2TDtXZGFtY
0j8BRYIjVDbYOIp1NM3kHnMGHWEJRsTbtLCitMWmLdp4C98DE/uVkwjw98xEJauR
/9ZNmmvzuWpaHuEJNiFjORA=
=tAXh
-----END PGP MESSAGE-----
jas@latte:~$ 

The issue with OpenSSH and GNOME Keyring still exists as in previous releases.

jas@latte:~$ ssh-add -L
The agent has no identities.
jas@latte:~$ echo $SSH_AUTH_SOCK 
/run/user/1000/keyring/ssh
jas@latte:~$ 

The trick we used last time still works, and as far as I can tell, it is still the only recommended method to disable the gnome-keyring ssh component. Notice how we also configure GnuPG's gpg-agent to enable SSH agent support.

jas@latte:~$ mkdir ~/.config/autostart
jas@latte:~$ cp /etc/xdg/autostart/gnome-keyring-ssh.desktop ~/.config/autostart/
jas@latte:~$ echo 'Hidden=true' >> ~/.config/autostart/gnome-keyring-ssh.desktop 
jas@latte:~$ echo enable-ssh-support >> ~/.gnupg/gpg-agent.conf 

Log out of GNOME and log in again. Now the environment variable points to gpg-agent’s socket, and SSH authentication using the smartcard works.

jas@latte:~$ echo $SSH_AUTH_SOCK 
/run/user/1000/gnupg/S.gpg-agent.ssh
jas@latte:~$ ssh-add -L
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILzCFcHHrKzVSPDDarZPYqn89H5TPaxwcORgRg+4DagE cardno:FFFE67252015
jas@latte:~$ 

Topics for further discussion and research this time around includes:

  1. Should scdaemon (and possibly pcscd) be pre-installed on Debian desktop systems?
  2. Could gpg --card-status attempt to import the public key and secret key stub automatically? Alternatively, some new command that automates the bootstrapping of a new smartcard.
  3. Should GNOME keyring support smartcards?
  4. Why is GNOME keyring used by default for SSH rather than gpg-agent?
  5. Should gpg-agent enable SSH agent support by default?
  6. What could be done to automatically infer the trust setting for a smartcard based private key?

Thanks for reading and happy smartcarding!

Offline Ed25519 OpenPGP key with subkeys on FST-01G running Gnuk

Below I describe how to generate an OpenPGP key and import it to an FST-01G device running Gnuk. See my earlier post on planning for my new OpenPGP key and the post on preparing the FST-01G to run Gnuk. For comparison with an RSA/YubiKey-based approach, you can read about my setup from 2014.

Most of the steps below are covered by the Gnuk manual. The primary complication for me is the use of an offline machine and keeping the GnuPG home directory on a USB memory device.

Offline machine

I use a laptop that is not connected to the Internet and boot it from a read-only USB memory stick. Finding a live CD that contains the necessary tools for using GnuPG with smartcards (gpg-agent, scdaemon, pcscd) is significantly harder than it should be. Using a rarely audited image also raises the question of whether you can trust it: a patched kernel or gpg that generates poor randomness would be an easy and hard-to-notice hack. I'm using the PGP/PKI Clean Room Live CD. Recommendations on more widely used and audited alternatives would be appreciated. Select “Advanced Options” and “Run Shell” to escape the menus. Insert a new USB memory device, and prepare it as follows:

pgp@pgplive:/home/pgp$ sudo wipefs -a /dev/sdX
pgp@pgplive:/home/pgp$ sudo fdisk /dev/sdX
# create a primary partition of Linux type
pgp@pgplive:/home/pgp$ sudo mkfs.ext4 /dev/sdX1
pgp@pgplive:/home/pgp$ sudo mount /dev/sdX1 /mnt
pgp@pgplive:/home/pgp$ sudo mkdir /mnt/gnupghome
pgp@pgplive:/home/pgp$ sudo chown pgp.pgp /mnt/gnupghome
pgp@pgplive:/home/pgp$ sudo chmod go-rwx /mnt/gnupghome

GnuPG configuration

Set your GnuPG home directory to point to the gnupghome directory on the USB memory device. You will need to do this in every terminal window you open in which you want to use GnuPG.

pgp@pgplive:/home/pgp$ export GNUPGHOME=/mnt/gnupghome
pgp@pgplive:/home/pgp$

At this point, you should be able to run gpg --card-status and get output from the smartcard.

Create master key

Create a master key and make a backup copy of the GnuPG home directory containing it, together with an exported ASCII-armored version.

pgp@pgplive:/home/pgp$ gpg --quick-gen-key "Simon Josefsson <simon@josefsson.org>" ed25519 sign 216d
gpg: keybox '/mnt/gnupghome/pubring.kbx' created
gpg: /mnt/gnupghome/trustdb.gpg: trustdb created
gpg: key D73CF638C53C06BE marked as ultimately trusted
gpg: directory '/mnt/gnupghome/openpgp-revocs.d' created
gpg: revocation certificate stored as '/mnt/gnupghome/openpgp-revocs.d/B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE.rev'
pub   ed25519 2019-03-20 [SC] [expires: 2019-10-22]
      B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE
uid                      Simon Josefsson <simon@josefsson.org>

pgp@pgplive:/home/pgp$ gpg -a --export-secret-keys B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE > $GNUPGHOME/masterkey.txt
pgp@pgplive:/home/pgp$ sudo cp -a $GNUPGHOME $GNUPGHOME-backup-masterkey
pgp@pgplive:/home/pgp$ 

Create subkeys

Create subkeys and make a backup of them too, as follows.

pgp@pgplive:/home/pgp$ gpg --quick-add-key B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE cv25519 encr 216d
pgp@pgplive:/home/pgp$ gpg --quick-add-key B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE ed25519 auth 216d
pgp@pgplive:/home/pgp$ gpg --quick-add-key B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE ed25519 sign 216d
pgp@pgplive:/home/pgp$ gpg -a --export-secret-keys B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE > $GNUPGHOME/mastersubkeys.txt
pgp@pgplive:/home/pgp$ gpg -a --export-secret-subkeys B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE > $GNUPGHOME/subkeys.txt
pgp@pgplive:/home/pgp$ sudo cp -a $GNUPGHOME $GNUPGHOME-backup-mastersubkeys
pgp@pgplive:/home/pgp$ 

Move keys to card

Prepare the card by setting the Admin PIN, PIN, your full name, sex, login account, and key URL as you prefer, following the Gnuk manual on card personalization.
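
I will not repeat the Gnuk manual here, but as a rough sketch, the personalization happens inside gpg --card-edit after switching to admin mode. The subcommands below exist in GnuPG 2.2 (the interactive prompts are omitted), and passwd is where you change the PIN and Admin PIN away from the OpenPGP card factory defaults (123456 and 12345678):

pgp@pgplive:/home/pgp$ gpg --card-edit
gpg/card> admin
gpg/card> passwd
gpg/card> name
gpg/card> lang
gpg/card> sex
gpg/card> login
gpg/card> url
gpg/card> quit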

Move the subkeys from your GnuPG keyring to the FST-01G using the keytocard command.
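
This also happens inside gpg --edit-key; a minimal sketch is below. The "key N" command toggles which subkey is selected, keytocard asks which card slot (signature, encryption or authentication) to store it in, and save writes the changes:

pgp@pgplive:/home/pgp$ gpg --edit-key B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE
gpg> key 1
gpg> keytocard
gpg> key 1
gpg> key 2
gpg> keytocard
gpg> key 2
gpg> key 3
gpg> keytocard
gpg> save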

Take a final backup — because moving the subkeys to the card modifies the local GnuPG keyring — and create an ASCII-armored version of the public key, to be transferred to your daily machine.

pgp@pgplive:/home/pgp$ gpg --list-secret-keys
/mnt/gnupghome/pubring.kbx
--------------------------
sec   ed25519 2019-03-20 [SC] [expires: 2019-10-22]
      B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE
uid           [ultimate] Simon Josefsson <simon@josefsson.org>
ssb>  cv25519 2019-03-20 [E] [expires: 2019-10-22]
ssb>  ed25519 2019-03-20 [A] [expires: 2019-10-22]
ssb>  ed25519 2019-03-20 [S] [expires: 2019-10-22]

pgp@pgplive:/home/pgp$ gpg -a --export-secret-keys B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE > $GNUPGHOME/masterstubs.txt
pgp@pgplive:/home/pgp$ gpg -a --export-secret-subkeys B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE > $GNUPGHOME/subkeysstubs.txt
pgp@pgplive:/home/pgp$ gpg -a --export B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE > $GNUPGHOME/publickey.txt
pgp@pgplive:/home/pgp$ cp -a $GNUPGHOME $GNUPGHOME-backup-masterstubs
pgp@pgplive:/home/pgp$ 

Transfer to daily machine

Copy publickey.txt to your day-to-day laptop, import it, and create the secret key stubs by running gpg --card-status.

jas@latte:~$ gpg --import < publickey.txt 
gpg: key D73CF638C53C06BE: public key "Simon Josefsson <simon@josefsson.org>" imported
gpg: Total number processed: 1
gpg:               imported: 1
jas@latte:~$ gpg --card-status

Reader ...........: Free Software Initiative of Japan Gnuk (FSIJ-1.2.14-67252015) 00 00
Application ID ...: D276000124010200FFFE672520150000
Version ..........: 2.0
Manufacturer .....: unmanaged S/N range
Serial number ....: 67252015
Name of cardholder: Simon Josefsson
Language prefs ...: sv
Sex ..............: male
URL of public key : https://josefsson.org/key-20190320.txt
Login data .......: jas
Signature PIN ....: not forced
Key attributes ...: ed25519 cv25519 ed25519
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 3 3
Signature counter : 0
Signature key ....: A3CC 9C87 0B9D 310A BAD4  CF2F 5172 2B08 FE47 45A2
      created ....: 2019-03-20 23:40:49
Encryption key....: A9EC 8F4D 7F1E 50ED 3DEF  49A9 0292 3D7E E76E BD60
      created ....: 2019-03-20 23:40:26
Authentication key: CA7E 3716 4342 DF31 33DF  3497 8026 0EE8 A9B9 2B2B
      created ....: 2019-03-20 23:40:37
General key info..: sub  ed25519/51722B08FE4745A2 2019-03-20 Simon Josefsson <simon@josefsson.org>
sec   ed25519/D73CF638C53C06BE  created: 2019-03-20  expires: 2019-10-22
ssb>  cv25519/02923D7EE76EBD60  created: 2019-03-20  expires: 2019-10-22
                                card-no: FFFE 67252015
ssb>  ed25519/80260EE8A9B92B2B  created: 2019-03-20  expires: 2019-10-22
                                card-no: FFFE 67252015
ssb>  ed25519/51722B08FE4745A2  created: 2019-03-20  expires: 2019-10-22
                                card-no: FFFE 67252015
jas@latte:~$ 

After the import, you must also update the trust database before the key can be used.
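
One way is the gpg --edit-key trust procedure (the same dance shown at length in the smartcard posts above); a minimal sketch:

jas@latte:~$ gpg --edit-key B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE
gpg> trust
Your decision? 5
Do you really want to set this key to ultimate trust? (y/N) y
gpg> quit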

Now you should have an offline master key with subkey stubs. Note in the output below that the master key is not available (sec#) and that the subkeys are stubs for smartcard keys (ssb>).

jas@latte:~$ gpg --list-secret-keys
sec#  ed25519 2019-03-20 [SC] [expires: 2019-10-22]
      B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE
uid           [ultimate] Simon Josefsson <simon@josefsson.org>
ssb>  cv25519 2019-03-20 [E] [expires: 2019-10-22]
ssb>  ed25519 2019-03-20 [A] [expires: 2019-10-22]
ssb>  ed25519 2019-03-20 [S] [expires: 2019-10-22]

jas@latte:~$

If your environment variables are set up correctly, SSH should find the authentication key automatically.

jas@latte:~$ ssh-add -L
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILzCFcHHrKzVSPDDarZPYqn89H5TPaxwcORgRg+4DagE cardno:FFFE67252015
jas@latte:~$ 

GnuPG and SSH are now ready to be used with the new key. Thanks for reading!

Vikings D16 server first impressions

I have bought a 1U server to use as a virtualization platform to host my personal online services (mail, web, DNS, Nextcloud, Icinga, Munin, etc). This is the first time I have used a high-end libre hardware device that has received the Respects Your Freedom certification from the Free Software Foundation. To inspire others to buy a similar machine, I have written about my experience with it.

The machine I bought has an ASUS KGPE-D16 mainboard with a modified (liberated) BIOS. I bought it from Vikings.net. Ordering the server was uneventful. I ordered it with two AMD 6278 processors (the Wikipedia AMD Opteron page contains useful CPU information), 128GB of ECC RAM, and a PIKE 2008/IMR RAID controller to improve SATA speed (to be verified). I intend to use it with two 1TB Samsung 850 SSDs and two 5TB Seagate ST5000 drives, configured in RAID1 mode. I was worried that the SATA controller(s) would not be able to handle >2TB devices, something I have had bad experiences with on older Dell RAID controllers. The manufacturer wasn't able to confirm that they would work, but I took the risk and went ahead with the order anyway.

One of the order configuration choices was which BIOS to use. I chose their recommended “Petitboot & Coreboot (de-blobbed)” option. The other choices were “Coreboot (de-blobbed)” and “Libreboot”. I am still learning about the BIOS alternatives, and my goal is to compare them and eventually compile my own preferred choice. The choice of BIOS still leaves me with a desire to understand more. Petitboot appears more advanced, and has a real embedded Linux kernel and a small rescue system on it (hence it requires a larger 16MB BIOS chip). Coreboot is a well-known project, but it appears it does not have a strict FOSS policy, so there is non-free code in it. Libreboot is a de-blobbed Coreboot and appears to fit the bill for me, but it does not appear to have a large community around it and might not be as up to date as Coreboot.

The PIKE 2008 controller card did not fit in the 1U case that Vikings.net had found for me, so someone on their side must have had a nice day of hardware hacking. The cooler for one chip had a dent in it, which could imply damage to the chip or mainboard. The chip is close to the RAID controller where they modified the 1U case, so I was worried that some physical force had been applied there.

First impressions of using the machine for a couple of days:

  • The graphical installation of Debian 9.x stretch does not start. There is an X11 stack backtrace when booting the ISO netinst image, and I don't know how to turn the installer into text mode from within the Petitboot boot menu.
  • Petitboot does not appear to detect a bootable system inside a RAID partition, which I have reported. I am now using a raw ext4 /boot partition on one of the SSDs to boot.
  • Debian 8.x jessie installs fine, since it uses text-mode. See my jessie installation report.
  • The graphical part of GRUB in Debian 8.x makes graphics stop working, so I can't see the GRUB screen or interact with the booted Debian installation.
  • Reboot time is around 2 minutes and 20 seconds between when rsyslogd shuts down and when it starts again.
  • On every other boot (it is fairly stable at 50%) I get the following kernel log message every other second. The 00:14.0 device is the SBx00 SMBus Controller according to lspci, but what this means is a mystery to me.
    AMD-Vi: Event logged [IO_PAGE_FAULT device=00:14.0 domain=0x000a address=0x000000fdf9103300 flags=0x0030]
    

That’s it for now! My goal is to get Debian 9.x stretch installed on the machine and perform some heavy-duty load testing before putting it into production. Expect an update if I discover something interesting!

OpenPGP smartcard under GNOME on Debian 9.0 Stretch

I installed Debian 9.0 “Stretch” on my Lenovo X201 laptop today. Installation went smoothly, as usual. GnuPG/SSH with an OpenPGP smartcard — I use a YubiKey NEO — does not work out of the box with GNOME though. I wrote about how to fix OpenPGP smartcards under GNOME with Debian 8.0 “Jessie” earlier, and I thought I'd do a similar blog post for Debian 9.0 “Stretch”. The situation is slightly different than before (e.g., GnuPG works better but SSH doesn't) so there is some progress. May I hope that Debian 10.0 “Buster” gets this right? Pointers to which package in Debian should have a bug report tracking this issue are welcome (or a pointer to an existing bug report).

After first login, I attempt to use gpg --card-status to check if GnuPG can talk to the smartcard.

jas@latte:~$ gpg --card-status
gpg: error getting version from 'scdaemon': No SmartCard daemon
gpg: OpenPGP card not available: No SmartCard daemon
jas@latte:~$ 

This fails because scdaemon is not installed. Isn't a smartcard common enough that this should be installed by default on a GNOME desktop Debian installation? Anyway, install it as follows.

root@latte:~# apt-get install scdaemon

Then try again.

jas@latte:~$ gpg --card-status
gpg: selecting openpgp failed: No such device
gpg: OpenPGP card not available: No such device
jas@latte:~$ 

I believe scdaemon here attempts to use its internal CCID implementation, and I do not know why it does not work. At this point I recall that I want pcscd installed anyway, since I work with smartcards in general.

root@latte:~# apt-get install pcscd

Now gpg --card-status works!

jas@latte:~$ gpg --card-status

Reader ...........: Yubico Yubikey NEO CCID 00 00
Application ID ...: D2760001240102000006017403230000
Version ..........: 2.0
Manufacturer .....: Yubico
Serial number ....: 01740323
Name of cardholder: Simon Josefsson
Language prefs ...: sv
Sex ..............: male
URL of public key : https://josefsson.org/54265e8c.txt
Login data .......: jas
Signature PIN ....: not forced
Key attributes ...: rsa2048 rsa2048 rsa2048
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 3 3
Signature counter : 8358
Signature key ....: 9941 5CE1 905D 0E55 A9F8  8026 860B 7FBB 32F8 119D
      created ....: 2014-06-22 19:19:04
Encryption key....: DC9F 9B7D 8831 692A A852  D95B 9535 162A 78EC D86B
      created ....: 2014-06-22 19:19:20
Authentication key: 2E08 856F 4B22 2148 A40A  3E45 AF66 08D7 36BA 8F9B
      created ....: 2014-06-22 19:19:41
General key info..: sub  rsa2048/860B7FBB32F8119D 2014-06-22 Simon Josefsson 
sec#  rsa3744/0664A76954265E8C  created: 2014-06-22  expires: 2017-09-04
ssb>  rsa2048/860B7FBB32F8119D  created: 2014-06-22  expires: 2017-09-04
                                card-no: 0006 01740323
ssb>  rsa2048/9535162A78ECD86B  created: 2014-06-22  expires: 2017-09-04
                                card-no: 0006 01740323
ssb>  rsa2048/AF6608D736BA8F9B  created: 2014-06-22  expires: 2017-09-04
                                card-no: 0006 01740323
jas@latte:~$ 

Using the key will not work though.

jas@latte:~$ echo foo|gpg -a --sign
gpg: no default secret key: No secret key
gpg: signing failed: No secret key
jas@latte:~$ 

This is because the public key and the secret key stub are not available.

jas@latte:~$ gpg --list-keys
jas@latte:~$ gpg --list-secret-keys
jas@latte:~$ 

You need to import the key for this to work. I have some vague memory that gpg --card-status was supposed to do this, but I may be wrong.

jas@latte:~$ gpg --recv-keys 9AA9BDB11BB1B99A21285A330664A76954265E8C
gpg: failed to start the dirmngr '/usr/bin/dirmngr': No such file or directory
gpg: connecting dirmngr at '/run/user/1000/gnupg/S.dirmngr' failed: No such file or directory
gpg: keyserver receive failed: No dirmngr
jas@latte:~$ 

Surprisingly, dirmngr is also not shipped by default so it has to be installed manually.

root@latte:~# apt-get install dirmngr

Below I proceed to trust the clouds to find my key.

jas@latte:~$ gpg --recv-keys 9AA9BDB11BB1B99A21285A330664A76954265E8C
gpg: key 0664A76954265E8C: public key "Simon Josefsson " imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 1
gpg:               imported: 1
jas@latte:~$ 

Now the public key and the secret key stub are available locally.

jas@latte:~$ gpg --list-keys
/home/jas/.gnupg/pubring.kbx
----------------------------
pub   rsa3744 2014-06-22 [SC] [expires: 2017-09-04]
      9AA9BDB11BB1B99A21285A330664A76954265E8C
uid           [ unknown] Simon Josefsson 
uid           [ unknown] Simon Josefsson 
sub   rsa2048 2014-06-22 [S] [expires: 2017-09-04]
sub   rsa2048 2014-06-22 [E] [expires: 2017-09-04]
sub   rsa2048 2014-06-22 [A] [expires: 2017-09-04]

jas@latte:~$ gpg --list-secret-keys
/home/jas/.gnupg/pubring.kbx
----------------------------
sec#  rsa3744 2014-06-22 [SC] [expires: 2017-09-04]
      9AA9BDB11BB1B99A21285A330664A76954265E8C
uid           [ unknown] Simon Josefsson 
uid           [ unknown] Simon Josefsson 
ssb>  rsa2048 2014-06-22 [S] [expires: 2017-09-04]
ssb>  rsa2048 2014-06-22 [E] [expires: 2017-09-04]
ssb>  rsa2048 2014-06-22 [A] [expires: 2017-09-04]

jas@latte:~$ 

I am now able to sign data with the smartcard, yay!

jas@latte:~$ echo foo|gpg -a --sign
-----BEGIN PGP MESSAGE-----

owGbwMvMwMHYxl2/2+iH4FzG01xJDJFu3+XT8vO5OhmNWRgYORhkxRRZZjrGPJwQ
yxe68keDGkwxKxNIJQMXpwBMRJGd/a98NMPJQt6jaoyO9yUVlmS7s7qm+Kjwr53G
uq9wQ+z+/kOdk9w4Q39+SMvc+mEV72kuH9WaW9bVqj80jN77hUbfTn5mffu2/aVL
h/IneTfaOQaukHij/P8A0//Phg/maWbONUjjySrl+a3tP8ll6/oeCd8g/aeTlH79
i0naanjW4bjv9wnvGuN+LPHLmhUc2zvZdyK3xttN/roHvsdX3f53yTAxeInvXZmd
x7W0/hVPX33Y4nT877T/ak4L057IBSavaPVcf4yhglVI8XuGgaTP666Wuslbliy4
5W5eLasbd33Xd/W0hTINznuz0kJ4r1bLHZW9fvjLduMPq5rS2co9tvW8nX9rhZ/D
zycu/QA=
=I8rt
-----END PGP MESSAGE-----
jas@latte:~$ 

Encrypting to myself will not work smoothly though.

jas@latte:~$ echo foo|gpg -a --encrypt -r simon@josefsson.org
gpg: 9535162A78ECD86B: There is no assurance this key belongs to the named user
sub  rsa2048/9535162A78ECD86B 2014-06-22 Simon Josefsson 
 Primary key fingerprint: 9AA9 BDB1 1BB1 B99A 2128  5A33 0664 A769 5426 5E8C
      Subkey fingerprint: DC9F 9B7D 8831 692A A852  D95B 9535 162A 78EC D86B

It is NOT certain that the key belongs to the person named
in the user ID.  If you *really* know what you are doing,
you may answer the next question with yes.

Use this key anyway? (y/N) 
gpg: signal Interrupt caught ... exiting

jas@latte:~$ 

The reason is that the newly imported key has unknown trust settings. I update the trust settings on my key to fix this, and encrypting now works without a prompt.

jas@latte:~$ gpg --edit-key 9AA9BDB11BB1B99A21285A330664A76954265E8C
gpg (GnuPG) 2.1.18; Copyright (C) 2017 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Secret key is available.

pub  rsa3744/0664A76954265E8C
     created: 2014-06-22  expires: 2017-09-04  usage: SC  
     trust: unknown       validity: unknown
ssb  rsa2048/860B7FBB32F8119D
     created: 2014-06-22  expires: 2017-09-04  usage: S   
     card-no: 0006 01740323
ssb  rsa2048/9535162A78ECD86B
     created: 2014-06-22  expires: 2017-09-04  usage: E   
     card-no: 0006 01740323
ssb  rsa2048/AF6608D736BA8F9B
     created: 2014-06-22  expires: 2017-09-04  usage: A   
     card-no: 0006 01740323
[ unknown] (1). Simon Josefsson 
[ unknown] (2)  Simon Josefsson 

gpg> trust
pub  rsa3744/0664A76954265E8C
     created: 2014-06-22  expires: 2017-09-04  usage: SC  
     trust: unknown       validity: unknown
ssb  rsa2048/860B7FBB32F8119D
     created: 2014-06-22  expires: 2017-09-04  usage: S   
     card-no: 0006 01740323
ssb  rsa2048/9535162A78ECD86B
     created: 2014-06-22  expires: 2017-09-04  usage: E   
     card-no: 0006 01740323
ssb  rsa2048/AF6608D736BA8F9B
     created: 2014-06-22  expires: 2017-09-04  usage: A   
     card-no: 0006 01740323
[ unknown] (1). Simon Josefsson 
[ unknown] (2)  Simon Josefsson 

Please decide how far you trust this user to correctly verify other users' keys
(by looking at passports, checking fingerprints from different sources, etc.)

  1 = I don't know or won't say
  2 = I do NOT trust
  3 = I trust marginally
  4 = I trust fully
  5 = I trust ultimately
  m = back to the main menu

Your decision? 5
Do you really want to set this key to ultimate trust? (y/N) y

pub  rsa3744/0664A76954265E8C
     created: 2014-06-22  expires: 2017-09-04  usage: SC  
     trust: ultimate      validity: unknown
ssb  rsa2048/860B7FBB32F8119D
     created: 2014-06-22  expires: 2017-09-04  usage: S   
     card-no: 0006 01740323
ssb  rsa2048/9535162A78ECD86B
     created: 2014-06-22  expires: 2017-09-04  usage: E   
     card-no: 0006 01740323
ssb  rsa2048/AF6608D736BA8F9B
     created: 2014-06-22  expires: 2017-09-04  usage: A   
     card-no: 0006 01740323
[ unknown] (1). Simon Josefsson 
[ unknown] (2)  Simon Josefsson 
Please note that the shown key validity is not necessarily correct
unless you restart the program.

gpg> quit
jas@latte:~$ echo foo|gpg -a --encrypt -r simon@josefsson.org
-----BEGIN PGP MESSAGE-----

hQEMA5U1Fip47NhrAQgArTvAykj/YRhWVuXb6nzeEigtlvKFSmGHmbNkJgF5+r1/
/hWENR72wsb1L0ROaLIjM3iIwNmyBURMiG+xV8ZE03VNbJdORW+S0fO6Ck4FaIj8
iL2/CXyp1obq1xCeYjdPf2nrz/P2Evu69s1K2/0i9y2KOK+0+u9fEGdAge8Gup6y
PWFDFkNj2YiVa383BqJ+kV51tfquw+T4y5MfVWBoHlhm46GgwjIxXiI+uBa655IM
EgwrONcZTbAWSV4/ShhR9ug9AzGIJgpu9x8k2i+yKcBsgAh/+d8v7joUaPRZlGIr
kim217hpA3/VLIFxTTkkm/BO1KWBlblxvVaL3RZDDNI5AVp0SASswqBqT3W5ew+K
nKdQ6UTMhEFe8xddsLjkI9+AzHfiuDCDxnxNgI1haI6obp9eeouGXUKG
=s6kt
-----END PGP MESSAGE-----
jas@latte:~$ 

So everything is fine, isn’t it? Alas, not quite.

jas@latte:~$ ssh-add -L
The agent has no identities.
jas@latte:~$ 

Tracking this down, I now realize that GNOME’s keyring is used for SSH but GnuPG’s gpg-agent is used for GnuPG. GnuPG uses the environment variable GPG_AGENT_INFO to connect to an agent, and SSH uses the SSH_AUTH_SOCK environment variable to find its agent. The filenames used below leak the knowledge that gpg-agent is used for GnuPG but GNOME keyring is used for SSH.

jas@latte:~$ echo $GPG_AGENT_INFO 
/run/user/1000/gnupg/S.gpg-agent:0:1
jas@latte:~$ echo $SSH_AUTH_SOCK 
/run/user/1000/keyring/ssh
jas@latte:~$ 

Here the same recipe as in my previous blog post works. This time GNOME keyring only has to be disabled for SSH. Disabling GNOME keyring is not sufficient; you also need gpg-agent to start with enable-ssh-support. The simplest way to achieve that is to add a line to ~/.gnupg/gpg-agent.conf as follows. When you log in, the script /etc/X11/Xsession.d/90gpg-agent will set the environment variables GPG_AGENT_INFO and SSH_AUTH_SOCK. The latter variable is only set if enable-ssh-support is mentioned in the gpg-agent configuration.

jas@latte:~$ mkdir ~/.config/autostart
jas@latte:~$ cp /etc/xdg/autostart/gnome-keyring-ssh.desktop ~/.config/autostart/
jas@latte:~$ echo 'Hidden=true' >> ~/.config/autostart/gnome-keyring-ssh.desktop 
jas@latte:~$ echo enable-ssh-support >> ~/.gnupg/gpg-agent.conf 
jas@latte:~$ 

Log out from GNOME and log in again. Now you should see ssh-add -L working.

jas@latte:~$ ssh-add -L
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDFP+UOTZJ+OXydpmbKmdGOVoJJz8se7lMs139T+TNLryk3EEWF+GqbB4VgzxzrGjwAMSjeQkAMb7Sbn+VpbJf1JDPFBHoYJQmg6CX4kFRaGZT6DHbYjgia59WkdkEYTtB7KPkbFWleo/RZT2u3f8eTedrP7dhSX0azN0lDuu/wBrwedzSV+AiPr10rQaCTp1V8sKbhz5ryOXHQW0Gcps6JraRzMW+ooKFX3lPq0pZa7qL9F6sE4sDFvtOdbRJoZS1b88aZrENGx8KSrcMzARq9UBn1plsEG4/3BRv/BgHHaF+d97by52R0VVyIXpLlkdp1Uk4D9cQptgaH4UAyI1vr cardno:000601740323
jas@latte:~$ 

Topics for further discussion or research include:

  1. Should scdaemon, dirmngr and/or pcscd be pre-installed on Debian desktop systems?
  2. Should gpg --card-status attempt to import the public key and secret key stub automatically?
  3. Why is GNOME keyring used by default for SSH rather than gpg-agent?
  4. Should GNOME keyring support smartcards, or is it better to always use gpg-agent for GnuPG/SSH?
  5. Could or should something be done to automatically infer the trust setting for a secret key?

Enjoy!

Let’s Encrypt Clients

Like many others, I have been following the launch of Let’s Encrypt. Let’s Encrypt is a new zero-cost X.509 Certificate Authority that supports the Automated Certificate Management Environment (ACME) protocol. ACME allows you to automate creation and retrieval of HTTPS server certificates. As anyone who has maintained a number of HTTPS servers can attest, this process has unfortunately been manual, error-prone, and different between CAs.

On some of my personal domains, such as this blog (blog.josefsson.org), I have been using the CACert authority to sign the HTTPS server certificate. The problem with CACert is that the CACert trust anchors aren't shipped with sufficiently many operating systems and web browsers. The user experience is similar to reaching a self-signed server certificate. For organization-internal servers that you don't want to trust external parties for, I continue to believe that running your own CA and distributing it to your users is better than using a public CA (compare my XMPP server certificate setup). But for public servers, availability without prior configuration is more important. Therefore I decided that my public HTTPS servers should use a CA/Browser Forum-approved CA with support for ACME, and as long as Let’s Encrypt is trustworthy and zero-cost, they are a good choice.

I was in need of a free software ACME client, and set out to research what’s out there. Unfortunately, I did not find any web pages that listed the available options and compared them. The Let’s Encrypt CA points to the “official” Let’s Encrypt client, written by Jakub Warmuz, James Kasten, Peter Eckersley and several others. The manual contains pointers to two other clients in a seemingly unrelated section. Those clients are letsencrypt-nosudo by Daniel Roesler et al, and simp_le by (again!) Jakub Warmuz. From letsencrypt.org’s client-dev mailing list I also found letsencrypt.sh by Gerhard Heift and LetsEncryptShell by Jan Mojžíš. Is anyone aware of other ACME clients?

By comparing these clients, I learned what I did not like in them. I wanted something small so that I can audit it. I want something that doesn’t require root access. Preferably, it should be able to run on my laptop, since I wasn’t ready to run something on the servers. Generally, it has to be secure, which implies something about how it approaches private key handling. The official letsencrypt client can do everything, and has plugins for various server software to automate the ACME negotiation. All the cryptographic operations appear to be hidden inside the client, which usually means it is not flexible. I really did not like how it was designed; it looks like your typical monolithic proof-of-concept design. The simp_le client looked much cleaner, and gave me a good feeling. The letsencrypt.sh client is simple and written in /bin/sh shell script, but it appeared a bit too simplistic. LetsEncryptShell looked decent, but I wanted something more automated.

What none of these other clients had, and what the letsencrypt-nosudo client did have, was the ability to let me do the private-key operations myself. All the operations are done interactively on the command line using OpenSSL. This would allow me to put the ACME user private key, and the HTTPS private key, on a YubiKey, using its PIV applet and techniques similar to what I used to create my SSH host CA. While the HTTPS private key has to be available on the HTTPS server (it is used to set up TLS connections), I wouldn’t want the ACME user private key to be available there. Similarly, I wouldn’t want to have the ACME or the HTTPS private key on my laptop. The letsencrypt-nosudo tool is otherwise rougher around the edges than the cleaner simp_le client. However, the private-key handling aspect was the deciding factor for me.

After fixing some hard-coded limitations on RSA key sizes, getting the cert was as simple as following the letsencrypt-nosudo instructions. I’ll follow up with a later post describing how to put the ACME user private key and the HTTPS server certificate private key on a YubiKey and how to use that with letsencrypt-nosudo.
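
For reference, the key material itself is prepared with plain OpenSSL before letsencrypt-nosudo gets involved; a rough sketch assuming RSA keys (the file names are illustrative, and the domain is this blog's):

$ openssl genrsa 4096 > account.key
$ openssl rsa -in account.key -pubout > account.pub
$ openssl genrsa 4096 > domain.key
$ openssl req -new -sha256 -key domain.key -subj "/CN=blog.josefsson.org" > domain.csr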

So you can now enjoy browsing my blog over HTTPS! Thank you Let’s Encrypt!

Automatic Replicant Backup over USB using rsync

I have been using Replicant on the Samsung SIII I9300 for over two years. I have written before about taking a backup of the phone using rsync, but recently I automated my setup as described below. This work was prompted by a screen accident with my phone that caused it to die, and I noticed that I hadn’t taken regular backups. I did not lose any data this time, since typically all content I create on the device is immediately synchronized to my clouds. Photos are uploaded by the ownCloud app, SMS Backup+ saves SMS and call logs to my IMAP server, and I use DAVDroid for synchronizing contacts, calendar and task lists with my instance of ownCloud. Still, I strongly believe in regular backups of everything, so it was time to automate this.

For my use-case, taking backups of the phone whenever I connect it to one of my laptops is sufficient. I typically connect it to my laptops for charging at least every other day. My laptops are all running Debian, but this should be applicable to most modern GNU/Linux systems. This is not Replicant-specific, although you need a rooted phone. I thought that automating this would be simple, but I got to learn the ins and outs of systemd and udev in the process, and this ended up taking the better part of an evening.

I started out adding a udev rule and a small script, thinking I could invoke the backup process from the udev rule. However, rsync would magically die after running for a few seconds. After an embarrassingly long debugging session, I finally found someone with a similar problem, which led me to a nice writeup on the topic of running long-running services on udev events. I created a file /etc/udev/rules.d/99-android-backup.rules with the following content:

ACTION=="add", SUBSYSTEMS=="usb", ENV{ID_SERIAL_SHORT}=="323048a5ae82918b", TAG+="systemd", ENV{SYSTEMD_WANTS}+="android-backup@$env{ID_SERIAL_SHORT}.service"
ACTION=="add", SUBSYSTEMS=="usb", ENV{ID_SERIAL_SHORT}=="4df9e09c25e75f63", TAG+="systemd", ENV{SYSTEMD_WANTS}+="android-backup@$env{ID_SERIAL_SHORT}.service"

The serial numbers correspond to the device serial numbers of the two devices I wish to back up. The adb devices command will print them for you, and you need to replace my values with the values from your phones. Next I created a systemd unit describing a oneshot service. The file /etc/systemd/system/android-backup@.service has the following content:

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/android-backup %I

The at-sign (“@”) in the service filename signals that this is a service that takes a parameter. I’m not enough of a udev/systemd person to explain these two files using the proper terminology, but at least you can pattern-match and follow the basic idea of them: the udev rule matches the devices that I’m interested in (I don’t want this to happen for all random Android devices I attach, hence matching against known serial numbers), and it causes a systemd service with a parameter to be started. The systemd service file describes the script to run, and passes on the parameter.
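
After creating or changing these two files, udev and systemd have to pick them up; a reboot works, but something like the following should too:

sudo udevadm control --reload-rules
sudo systemctl daemon-reload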

Now for the juicy part, the script. I have /usr/local/sbin/android-backup with the following content.

#!/bin/bash

DIRBASE=/var/backups/android
export ANDROID_SERIAL="$1"

# send the script's stdout and stderr to syslog
exec > >(logger -t android-backup) 2>&1

if ! test -d "$DIRBASE-$ANDROID_SERIAL"; then
    echo "could not find directory: $DIRBASE-$ANDROID_SERIAL"
    exit 1
fi

set -x

adb wait-for-device
adb root
adb wait-for-device
adb shell printf "address 127.0.0.1\nuid = root\ngid = root\n[root]\n\tpath = /\n" \> /mnt/secure/rsyncd.conf
adb shell rsync --daemon --no-detach --config=/mnt/secure/rsyncd.conf &
adb forward tcp:6010 tcp:873
sleep 2
rsync -av --delete --exclude /dev --exclude /acct --exclude /sys --exclude /proc rsync://localhost:6010/root/ $DIRBASE-$ANDROID_SERIAL/
: rc $?
adb forward --remove tcp:6010
adb shell rm -f /mnt/secure/rsyncd.conf

This script warrants more detailed explanation. Backups are placed under, e.g., /var/backups/android-323048a5ae82918b/ for later off-site backup (you do back up your laptop, right?). You have to manually create this directory, as a safety catch to not wildly rsync data into non-existing directories. The script logs everything using syslog, so run a tail -F /var/log/syslog& when setting this up. You may want to reduce the verbosity of rsync if you prefer (replace rsync -av with rsync -a). The script runs adb wait-for-device which, as you rightly guessed, will wait for the device to settle. Next adb root is invoked to get root on the device (reading all files from the system naturally requires root). It takes some time to switch, so another wait-for-device call is needed. Next a small rsyncd configuration file is created in /mnt/secure/rsyncd.conf on the phone. The file tells rsync to listen on localhost, run as root, and use / as the path. By default, rsyncd is read-only, so the host will not be able to upload any data over rsync, just read data out. Next rsync is started on the phone. The adb forward command forwards port 6010 on the laptop to port 873 on the phone (873 is the default rsyncd port). Unfortunately, setting up the TCP forward appears to take some time, and adb wait-for-device will not wait for that to complete, hence an ugly sleep 2 at this point. Next is the rsync invocation itself, which just pulls in everything from the phone to the laptop, excluding some usual suspects. The somewhat cryptic : rc $? merely logs the exit code of the rsync process into syslog. Finally we clean up the TCP forward and remove the rsyncd.conf file that was temporarily created.
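
Remember to make the script executable. To test the whole chain without replugging the phone, the templated unit can also be started by hand; a sketch using the serial number from the udev rule above:

sudo chmod +x /usr/local/sbin/android-backup
sudo systemctl start android-backup@323048a5ae82918b.service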

This setup appears stable to me. I can plug in a phone and a backup will be taken. I can even plug in both my devices at the same time, and they will run at the same time. If I unplug a device, the script or rsync will error out and systemd cleans up.

If anyone has ideas on how to avoid the ugly temporary rsyncd.conf file or the ugly sleep 2, I’m interested. It would also be nice to not have to do the ‘adb root’ dance, and instead have the phone start the rsync daemon when connecting to my laptop somehow. TCP forwarding might be troublesome on a multi-user system, but my laptops aren’t. Killing rsync on the phone is probably a good idea too. If you have ideas on how to fix any of this, other feedback, or questions, please let me know!
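
Regarding the sleep 2: one idea is to poll the forwarded port until it answers instead of waiting a fixed time. A sketch, assuming netcat (nc) is available on the laptop:

# replace "sleep 2" with a loop that waits for the forwarded rsyncd port
until nc -z localhost 6010; do sleep 0.2; done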

Combining Dnsmasq and Unbound

For my home office network I have been using Dnsmasq for some time. Dnsmasq provides me with DNS, DHCP, DHCPv6, and IPv6 Router Advertisement. I run dnsmasq on a Debian Jessie server, but it works similarly with OpenWRT if you want to use a smaller device. My entire /etc/dnsmasq.d/local configuration used to look like this:

dhcp-authoritative
interface=eth1
read-ethers
dhcp-range=192.168.1.100,192.168.1.150,12h
dhcp-range=2001:9b0:104:42::100,2001:9b0:104:42::1500
dhcp-option=option6:dns-server,[::]
enable-ra

Here dhcp-authoritative enables authoritative DHCP mode. interface=eth1 says to listen on eth1 only, which is my internal (IPv4 NAT) network. I try to keep track of the MAC addresses of all my devices in a /etc/ethers file, so I use read-ethers to have dnsmasq give stable IP addresses to them. The dhcp-range statements enable DHCP and DHCPv6 on my internal network. The dhcp-option=option6:dns-server,[::] statement is needed to inform the DHCP clients of the DNS resolver’s IPv6 address; otherwise they would only get the IPv4 DNS server address. The enable-ra parameter enables IPv6 router advertisement on the internal network, thereby removing the need to run radvd too — useful since I prefer to use copyleft software.

Recently I had a desire to use DNSSEC, and enabled it in Dnsmasq using the following statements:

dnssec
trust-anchor=.,19036,8,2,49AAC11D7B6F6446702E54A1607371607A1A41855200FD2CE1CDDE32F24E8FB5
dnssec-check-unsigned

The dnssec keyword enables DNSSEC validation in dnsmasq, using the indicated trust-anchor (get the root anchors from IANA). The dnssec-check-unsigned option deserves some more discussion. The dnsmasq manpage describes it as follows:

As a default, dnsmasq does not check that unsigned DNS replies are legitimate: they are assumed to be valid and passed on (without the “authentic data” bit set, of course). This does not protect against an attacker forging unsigned replies for signed DNS zones, but it is fast. If this flag is set, dnsmasq will check the zones of unsigned replies, to ensure that unsigned replies are allowed in those zones. The cost of this is more upstream queries and slower performance.

In other words, without dnssec-check-unsigned, dnsmasq’s DNSSEC functionality is not secure against active man-in-the-middle attacks between dnsmasq and the DNS server it is using. Even if example.org used DNSSEC properly, an attacker could fake to dnsmasq that it was unsigned, and I would get potentially incorrect values in return. We all know that the Internet is not a secure place, and your threat model should include active attackers. I believe checking unsigned replies should be the default in dnsmasq, and users should have to configure dnsmasq to disable it if they really want to (with the obvious security warning).

Running with this enabled for a couple of days resulted in frustration about not being able to reach a couple of domains. The behaviour was that my clients would hang indefinitely or get a SERVFAIL, both resulting in lack of service. You can enable query logging in dnsmasq with log-queries, and doing so I noticed three distinct error patterns:

jow13gw dnsmasq 460 - -  forwarded www.fritidsresor.se to 213.80.101.3
jow13gw dnsmasq 460 - -  validation result is BOGUS
jow13gw dnsmasq 547 - -  reply cloudflare-dnssec.net is BOGUS DNSKEY
jow13gw dnsmasq 547 - -  validation result is BOGUS
jow13gw dnsmasq 547 - -  reply linux.conf.au is BOGUS DS
jow13gw dnsmasq 547 - -  validation result is BOGUS

The first only happened intermittently, the second did not cause any noticeable problem, and the final one was reproducible. To be fair, I only found the last example after starting to search for problem reports (see post confirming bug).

At this point, I had a confirmed bug in dnsmasq that affects my use-case. I want to use official packages from Debian on this machine, so installing newer versions manually is not an option. So I started to look into alternatives for DNS resolving, and quickly found Unbound. Installing it was easy:

apt-get install unbound
unbound-control-setup 

I created a local configuration file in /etc/unbound/unbound.conf.d/local.conf as follows:

server:
	interface: 127.0.0.1
	interface: ::1
	interface: 192.168.1.2
	interface: 2001:9b0:104:42::2
	access-control: 127.0.0.1 allow
	access-control: ::1 allow
	access-control: 192.168.1.2/24 allow
	access-control: 2001:9b0:104:42::2/64 allow
	outgoing-interface: 155.4.17.2
	outgoing-interface: 2001:9b0:1:1a04::2
#	log-queries: yes
#	verbosity: 2

The interface keyword determines which IP addresses to listen on; here I used the loopback interface and the local addresses of the physical network interface for my internal network. The access-control statements allow recursive DNS resolving from those networks. And outgoing-interface specifies my external Internet-connected interface. log-queries and/or verbosity are useful for debugging.

To make things work, dnsmasq has to stop providing DNS services. This can be achieved with the port=0 keyword; however, that will also disable informing DHCP clients about the DNS server to use, so this has to be added manually. I ended up adding the two following lines to /etc/dnsmasq.d/local:

port=0
dhcp-option=option:dns-server,192.168.1.2

Restarting unbound and dnsmasq now leads to working (and secure) internal DNSSEC-aware name resolution over both IPv4 and IPv6. I can verify that resolution works, and that Unbound verifies signatures and rejects bad domains properly, with host as below, or use an online DNSSEC resolver test page, although I’m not sure how confident you can be in the result from that page.

$ host linux.conf.au
linux.conf.au has address 192.55.98.190
linux.conf.au mail is handled by 1 linux.org.au.
$ host sigfail.verteiltesysteme.net
;; connection timed out; no servers could be reached
$ 
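
To see the validation status explicitly, dig can show whether the ad (authenticated data) flag is set in the reply when querying the Unbound instance directly; a validated domain should have ad in the flags line, while the deliberately broken test domain should return SERVFAIL. A sketch:

$ dig +dnssec linux.conf.au @192.168.1.2
$ dig +dnssec sigfail.verteiltesysteme.net @192.168.1.2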

I use Munin to monitor my services, and I was happy to find a nice Unbound Munin plugin. I installed the file in /usr/share/munin/plugins/ and created a Munin plugin configuration file /etc/munin/plugin-conf.d/unbound as follows:

[unbound*]
user root
env.statefile /var/lib/munin-node/plugin-state/root/unbound.state
env.unbound_conf /etc/unbound/unbound.conf
env.unbound_control /usr/sbin/unbound-control
env.spoof_warn 1000
env.spoof_crit 100000

I run munin-node-configure --shell|sh to enable it. For this to work, unbound has to be configured as well, so I create /etc/unbound/unbound.conf.d/munin.conf as follows.

server:
	extended-statistics: yes
	statistics-cumulative: no
	statistics-interval: 0
remote-control:
	control-enable: yes

The graphs may be viewed at my munin instance.

Laptop decision fatigue

I admit defeat. I have put some effort into researching recent laptop models (see my first and second post). Last week I asked myself what the biggest problem with my current 4+ year old X201 is. I couldn’t articulate any significant concern. So I have bought another second-hand X201 for semi-permanent use at my second office. At ~225 USD/EUR, including another docking station, it is amazing value. I considered the X220-X240, but they have a different docking station and were roughly twice the price — the money saved paid for a Samsung 850 PRO SSD instead. Thanks everyone for your advice, anyway!

Laptop indecision

I wrote last month about buying a new laptop and I still haven’t made a decision. One reason for this is that Dell doesn’t seem to be shipping the E7250. Some online shops claim to be able to deliver it, but aren’t clear on what configuration it has – and I really don’t want to end up with Dell Wifi.

Another issue has been the graphics issues with the Broadwell GPU (see the comment section of my last post). It seems unlikely that this will be fixed in time for Debian Jessie. I really want a stable OS on this machine, as it will be a work-horse and not a toy machine. I haven’t made up my mind whether the graphics issue is a deal-breaker for me.

Meanwhile, a couple more sub-1.5kg (sub-3.3lbs) Broadwell i7’s have hit the market. Some of these models were suggested in comments to my last post. I have decided that the 5500U CPU would also be acceptable to me, because some newer laptops don’t come with the 5600U. The difference is that the 5500U is a bit slower (say 5-10%) and lacks vPro, which I have no need for and mostly consider a security risk. I’m not aware of any other feature differences.

Since the last round, I have tightened my weight requirement to sub-1.4kg (sub-3lbs), which excludes some recently introduced models, and actually excludes most of the models I looked at before (X250, X1 Carbon, HP 1040/810). Since I’m leaning towards the E7250, with the X250 as a “reliable” fallback option, I wanted to cut down on the number of further models to consider. Weight is a simple distinguisher. The 1.4-1.5kg (3-3.3lbs) models I am aware of that are excluded are the Asus Zenbook UX303LN, the HP Spectre X360, and the Acer TravelMate P645.

The Acer Aspire S7-393 (1.3kg) and Toshiba Kira-107 (1.26kg) would have been options if they had RJ45 ports. They may be interesting to consider for others.

The new models I am aware of are below. I’m including the E7250 and X250 for comparison, since they are my preferred choices from the first round. A column for maximum RAM is added too, since this may be a deciding factor for me. The higher weights are for the variants with touch screens.

Model                  Weight       Max RAM  Screen  Resolution
Toshiba Z30-B          1.2-1.34kg   16GB     13.3″   1920×1080
Fujitsu Lifebook S935  1.24-1.36kg  12GB     13.3″   1920×1080
HP EliteBook 820 G2    1.34-1.52kg  16GB     12.5″   1920×1080
Dell Latitude E7250    1.25kg       8/16GB?  12.5″   1366×768
Lenovo X250            1.42kg       8GB      12.5″   1366×768

It appears unclear whether the E7250 is memory-upgradeable; some sites say max 8GB, some say max 16GB. The X250 and 820 have DisplayPort, the S935 and Z30-B have HDMI, and the E7250 has both DisplayPort and HDMI. The E7250 does not have VGA, which the rest have. All of them have 3 USB 3.0 ports except for the X250, which only has 2 ports. The E7250 and 820 claim NFC support, but Debian support is not a given. Interestingly, all of them have a smartcard reader. All support SDXC memory cards.

The S935 has an interesting modular bay which can fit a CD reader or an additional battery. There is a detailed QuickSpec PDF for the HP 820 G2; I haven’t found similarly detailed information for the other models. It mentions support for Ubuntu, which is nice.

Comparing these laptops is really just academic until I have decided what to think about the Broadwell GPU issues. It may be that I’ll go back to a fourth-gen i7 laptop, and then I’ll probably pick a cheap reliable machine such as the X240.

Laptop Buying Advice?

My current Lenovo X201 laptop has been with me for over four years. I’ve been looking at new laptop models over the years, thinking that I should upgrade. Every time, after checking performance numbers, I’ve always reached the conclusion that it is not worth it. The most performant Intel Broadwell processor is the Core i7 5600U, and it is only about 1.5 times the performance of my current Intel Core i7 620M. Meanwhile disk performance has increased more rapidly, but changing the disk in a laptop is usually simple. Two years ago I upgraded to the Samsung 840 Pro 256GB disk, and this year I swapped that for the Samsung 850 Pro 1TB, and both have been good investments.

Recently my laptop usage patterns have changed slightly, and instead of carrying one laptop around, I have decided to aim for multiple semi-permanent laptops at different locations, coupled with a mobile device that right now is just my phone. The X201 will remain one of my normal work machines.

What remains is to decide on a new laptop, and there begins the fun. My requirements are relatively easy to summarize. The laptop will run a GNU/Linux distribution like Debian, so it has to work well with it. I’ve decided that my preferred CPU is the Intel Core i7 5600U. The screen size, keyboard and mouse are mostly irrelevant, as I never work for longer periods of time directly on the laptop. Even though the laptop will be semi-permanent, I know there will be times when I take it with me. Thus it has to be as lightweight as possible. If there were significant advantages in going with a heavier laptop, I might reconsider this, but as far as I can see the only advantages of a heavier machine are a bigger/better screen and keyboard (both of which I find irrelevant) and maximum memory capacity (which I would find useful, but not enough of an argument for me). The sub-1.5kg laptops with the 5600U CPU on the market that I have found are:

Model                        Weight  Screen  Resolution
Lenovo X250                  1.42kg  12.5″   1366×768
Lenovo X1 Carbon (3rd gen)   1.34kg  14″     2560×1440
Dell Latitude E7250          1.25kg  12.5″   1366×768
Dell XPS 13                  1.26kg  13.3″   3200×1800
HP EliteBook Folio 1040 G2   1.49kg  14″     1920×1080
HP EliteBook Revolve 810 G3  1.4kg   11.6″   1366×768

I find it interesting that Lenovo, Dell and HP each have two models that meet my 5600U/sub-1.5kg criteria. Regarding screens, there may exist variants with other resolutions. The XPS 13, HP 810 and X1 models I looked at had touch screens, the others did not. As the screen is not important to me, I didn’t evaluate this further.

I think all of them would suffice, and there are only subtle differences. All except the XPS 13 can be connected to peripherals using one cable, which I find convenient to avoid a cable mess. All of them have DisplayPort, but HP uses full-size DisplayPort and the rest use miniDP. The E7250 and X1 have HDMI output. The X250 boasts a 15-pin VGA connector, none of the others have one — I’m not sure if that is an advantage or a disadvantage these days. All of them have 2 USB 3.0 ports except the E7250, which has 3 ports. The HP 1040, XPS 13 and X1 Carbon do not have RJ45 Ethernet connectors, which is a significant disadvantage to me. Ironically, only the smallest one of these, the HP 810, can be memory-upgraded to 12GB, with the others being stuck at 8GB. The HP models and the E7250 support NFC, although Debian support is not certain. The E7250 and X250 have a smartcard reader, and again, Debian support is not certain. The X1, X250 and 810 have a 3G/4G card.

Right now, I’m leaning towards rejecting the XPS 13, X1 and HP 1040 because of the lack of an RJ45 Ethernet port. That leaves me with the E7250, X250 and the 810. Of these, the E7250 seems like the winner: lightest, 1 extra USB port, HDMI, NFC, smartcard reader. However, it has no 3G/4G card and no memory upgrade options. Looking for compatibility problems, it seems you have to be careful not to end up with the “Dell Wireless” card, and the E7250 appears to come in a docking and a non-docking variant, but I’m not sure what that means.

Are there other models I should consider? Other thoughts?