OpenPGP smartcard under GNOME on Debian 9.0 Stretch

I installed Debian 9.0 “Stretch” on my Lenovo X201 laptop today. Installation went smoothly, as usual. GnuPG/SSH with an OpenPGP smartcard — I use a YubiKey NEO — does not work out of the box with GNOME though. I wrote about how to fix OpenPGP smartcards under GNOME with Debian 8.0 “Jessie” earlier, and I thought I’d do a similar blog post for Debian 9.0 “Stretch”. The situation is slightly different than before (e.g., GnuPG works better but SSH doesn’t) so there is some progress. May I hope that Debian 10.0 “Buster” gets this right? Pointers to which package in Debian should have a bug report tracking this issue are welcome (or a pointer to an existing bug report).

After first login, I attempt to use gpg --card-status to check if GnuPG can talk to the smartcard.

jas@latte:~$ gpg --card-status
gpg: error getting version from 'scdaemon': No SmartCard daemon
gpg: OpenPGP card not available: No SmartCard daemon
jas@latte:~$ 

This fails because scdaemon is not installed. Aren’t smartcards common enough that it should be installed by default on a GNOME desktop Debian installation? Anyway, install it as follows.

root@latte:~# apt-get install scdaemon

Then try again.

jas@latte:~$ gpg --card-status
gpg: selecting openpgp failed: No such device
gpg: OpenPGP card not available: No such device
jas@latte:~$ 

I believe scdaemon here attempts to use its internal CCID implementation, and I do not know why it does not work. At this point I often recall that I want pcscd installed, since I work with smartcards in general.

root@latte:~# apt-get install pcscd
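
As an aside, if the reader still is not detected at this point, the pcsc-tools package (purely a diagnostic aid, not required for anything below) can help verify that pcscd sees the YubiKey:

apt-get install pcsc-tools    # as root
pcsc_scan                     # should list the YubiKey NEO CCID reader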

Now gpg --card-status works!

jas@latte:~$ gpg --card-status

Reader ...........: Yubico Yubikey NEO CCID 00 00
Application ID ...: D2760001240102000006017403230000
Version ..........: 2.0
Manufacturer .....: Yubico
Serial number ....: 01740323
Name of cardholder: Simon Josefsson
Language prefs ...: sv
Sex ..............: male
URL of public key : https://josefsson.org/54265e8c.txt
Login data .......: jas
Signature PIN ....: not forced
Key attributes ...: rsa2048 rsa2048 rsa2048
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 3 3
Signature counter : 8358
Signature key ....: 9941 5CE1 905D 0E55 A9F8  8026 860B 7FBB 32F8 119D
      created ....: 2014-06-22 19:19:04
Encryption key....: DC9F 9B7D 8831 692A A852  D95B 9535 162A 78EC D86B
      created ....: 2014-06-22 19:19:20
Authentication key: 2E08 856F 4B22 2148 A40A  3E45 AF66 08D7 36BA 8F9B
      created ....: 2014-06-22 19:19:41
General key info..: sub  rsa2048/860B7FBB32F8119D 2014-06-22 Simon Josefsson 
sec#  rsa3744/0664A76954265E8C  created: 2014-06-22  expires: 2017-09-04
ssb>  rsa2048/860B7FBB32F8119D  created: 2014-06-22  expires: 2017-09-04
                                card-no: 0006 01740323
ssb>  rsa2048/9535162A78ECD86B  created: 2014-06-22  expires: 2017-09-04
                                card-no: 0006 01740323
ssb>  rsa2048/AF6608D736BA8F9B  created: 2014-06-22  expires: 2017-09-04
                                card-no: 0006 01740323
jas@latte:~$ 

Using the key will not work though.

jas@latte:~$ echo foo|gpg -a --sign
gpg: no default secret key: No secret key
gpg: signing failed: No secret key
jas@latte:~$ 

This is because the public key and the secret key stub are not available.

jas@latte:~$ gpg --list-keys
jas@latte:~$ gpg --list-secret-keys
jas@latte:~$ 

You need to import the key for this to work. I have some vague memory that gpg --card-status was supposed to do this, but I may be wrong.

jas@latte:~$ gpg --recv-keys 9AA9BDB11BB1B99A21285A330664A76954265E8C
gpg: failed to start the dirmngr '/usr/bin/dirmngr': No such file or directory
gpg: connecting dirmngr at '/run/user/1000/gnupg/S.dirmngr' failed: No such file or directory
gpg: keyserver receive failed: No dirmngr
jas@latte:~$ 

Surprisingly, dirmngr is also not shipped by default so it has to be installed manually.

root@latte:~# apt-get install dirmngr

Below I proceed to trust the clouds to find my key.

jas@latte:~$ gpg --recv-keys 9AA9BDB11BB1B99A21285A330664A76954265E8C
gpg: key 0664A76954265E8C: public key "Simon Josefsson " imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 1
gpg:               imported: 1
jas@latte:~$ 
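
An alternative, since the card already carries a pointer to the key in its “URL of public key” field, is to fetch it from within the card menu (this also goes through dirmngr, as GnuPG 2.1 routes all network access through it):

gpg --card-edit
# at the gpg/card> prompt:
#   fetch    (retrieves the key from the URL stored on the card)
#   quit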

Now the public key and the secret key stub are available locally.

jas@latte:~$ gpg --list-keys
/home/jas/.gnupg/pubring.kbx
----------------------------
pub   rsa3744 2014-06-22 [SC] [expires: 2017-09-04]
      9AA9BDB11BB1B99A21285A330664A76954265E8C
uid           [ unknown] Simon Josefsson 
uid           [ unknown] Simon Josefsson 
sub   rsa2048 2014-06-22 [S] [expires: 2017-09-04]
sub   rsa2048 2014-06-22 [E] [expires: 2017-09-04]
sub   rsa2048 2014-06-22 [A] [expires: 2017-09-04]

jas@latte:~$ gpg --list-secret-keys
/home/jas/.gnupg/pubring.kbx
----------------------------
sec#  rsa3744 2014-06-22 [SC] [expires: 2017-09-04]
      9AA9BDB11BB1B99A21285A330664A76954265E8C
uid           [ unknown] Simon Josefsson 
uid           [ unknown] Simon Josefsson 
ssb>  rsa2048 2014-06-22 [S] [expires: 2017-09-04]
ssb>  rsa2048 2014-06-22 [E] [expires: 2017-09-04]
ssb>  rsa2048 2014-06-22 [A] [expires: 2017-09-04]

jas@latte:~$ 

I am now able to sign data with the smartcard, yay!

jas@latte:~$ echo foo|gpg -a --sign
-----BEGIN PGP MESSAGE-----

owGbwMvMwMHYxl2/2+iH4FzG01xJDJFu3+XT8vO5OhmNWRgYORhkxRRZZjrGPJwQ
yxe68keDGkwxKxNIJQMXpwBMRJGd/a98NMPJQt6jaoyO9yUVlmS7s7qm+Kjwr53G
uq9wQ+z+/kOdk9w4Q39+SMvc+mEV72kuH9WaW9bVqj80jN77hUbfTn5mffu2/aVL
h/IneTfaOQaukHij/P8A0//Phg/maWbONUjjySrl+a3tP8ll6/oeCd8g/aeTlH79
i0naanjW4bjv9wnvGuN+LPHLmhUc2zvZdyK3xttN/roHvsdX3f53yTAxeInvXZmd
x7W0/hVPX33Y4nT877T/ak4L057IBSavaPVcf4yhglVI8XuGgaTP666Wuslbliy4
5W5eLasbd33Xd/W0hTINznuz0kJ4r1bLHZW9fvjLduMPq5rS2co9tvW8nX9rhZ/D
zycu/QA=
=I8rt
-----END PGP MESSAGE-----
jas@latte:~$ 

Encrypting to myself will not work smoothly though.

jas@latte:~$ echo foo|gpg -a --encrypt -r simon@josefsson.org
gpg: 9535162A78ECD86B: There is no assurance this key belongs to the named user
sub  rsa2048/9535162A78ECD86B 2014-06-22 Simon Josefsson 
 Primary key fingerprint: 9AA9 BDB1 1BB1 B99A 2128  5A33 0664 A769 5426 5E8C
      Subkey fingerprint: DC9F 9B7D 8831 692A A852  D95B 9535 162A 78EC D86B

It is NOT certain that the key belongs to the person named
in the user ID.  If you *really* know what you are doing,
you may answer the next question with yes.

Use this key anyway? (y/N) 
gpg: signal Interrupt caught ... exiting

jas@latte:~$ 

The reason is that the newly imported key has unknown trust settings. I update the trust settings on my key to fix this, and encrypting now works without a prompt.

jas@latte:~$ gpg --edit-key 9AA9BDB11BB1B99A21285A330664A76954265E8C
gpg (GnuPG) 2.1.18; Copyright (C) 2017 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Secret key is available.

pub  rsa3744/0664A76954265E8C
     created: 2014-06-22  expires: 2017-09-04  usage: SC  
     trust: unknown       validity: unknown
ssb  rsa2048/860B7FBB32F8119D
     created: 2014-06-22  expires: 2017-09-04  usage: S   
     card-no: 0006 01740323
ssb  rsa2048/9535162A78ECD86B
     created: 2014-06-22  expires: 2017-09-04  usage: E   
     card-no: 0006 01740323
ssb  rsa2048/AF6608D736BA8F9B
     created: 2014-06-22  expires: 2017-09-04  usage: A   
     card-no: 0006 01740323
[ unknown] (1). Simon Josefsson 
[ unknown] (2)  Simon Josefsson 

gpg> trust
pub  rsa3744/0664A76954265E8C
     created: 2014-06-22  expires: 2017-09-04  usage: SC  
     trust: unknown       validity: unknown
ssb  rsa2048/860B7FBB32F8119D
     created: 2014-06-22  expires: 2017-09-04  usage: S   
     card-no: 0006 01740323
ssb  rsa2048/9535162A78ECD86B
     created: 2014-06-22  expires: 2017-09-04  usage: E   
     card-no: 0006 01740323
ssb  rsa2048/AF6608D736BA8F9B
     created: 2014-06-22  expires: 2017-09-04  usage: A   
     card-no: 0006 01740323
[ unknown] (1). Simon Josefsson 
[ unknown] (2)  Simon Josefsson 

Please decide how far you trust this user to correctly verify other users' keys
(by looking at passports, checking fingerprints from different sources, etc.)

  1 = I don't know or won't say
  2 = I do NOT trust
  3 = I trust marginally
  4 = I trust fully
  5 = I trust ultimately
  m = back to the main menu

Your decision? 5
Do you really want to set this key to ultimate trust? (y/N) y

pub  rsa3744/0664A76954265E8C
     created: 2014-06-22  expires: 2017-09-04  usage: SC  
     trust: ultimate      validity: unknown
ssb  rsa2048/860B7FBB32F8119D
     created: 2014-06-22  expires: 2017-09-04  usage: S   
     card-no: 0006 01740323
ssb  rsa2048/9535162A78ECD86B
     created: 2014-06-22  expires: 2017-09-04  usage: E   
     card-no: 0006 01740323
ssb  rsa2048/AF6608D736BA8F9B
     created: 2014-06-22  expires: 2017-09-04  usage: A   
     card-no: 0006 01740323
[ unknown] (1). Simon Josefsson 
[ unknown] (2)  Simon Josefsson 
Please note that the shown key validity is not necessarily correct
unless you restart the program.

gpg> quit
jas@latte:~$ echo foo|gpg -a --encrypt -r simon@josefsson.org
-----BEGIN PGP MESSAGE-----

hQEMA5U1Fip47NhrAQgArTvAykj/YRhWVuXb6nzeEigtlvKFSmGHmbNkJgF5+r1/
/hWENR72wsb1L0ROaLIjM3iIwNmyBURMiG+xV8ZE03VNbJdORW+S0fO6Ck4FaIj8
iL2/CXyp1obq1xCeYjdPf2nrz/P2Evu69s1K2/0i9y2KOK+0+u9fEGdAge8Gup6y
PWFDFkNj2YiVa383BqJ+kV51tfquw+T4y5MfVWBoHlhm46GgwjIxXiI+uBa655IM
EgwrONcZTbAWSV4/ShhR9ug9AzGIJgpu9x8k2i+yKcBsgAh/+d8v7joUaPRZlGIr
kim217hpA3/VLIFxTTkkm/BO1KWBlblxvVaL3RZDDNI5AVp0SASswqBqT3W5ew+K
nKdQ6UTMhEFe8xddsLjkI9+AzHfiuDCDxnxNgI1haI6obp9eeouGXUKG
=s6kt
-----END PGP MESSAGE-----
jas@latte:~$ 
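
For scripted setups, a non-interactive way to accomplish the same thing is to feed the ownertrust value straight to GnuPG (the trailing "6" means ultimate trust in the ownertrust export format):

echo "9AA9BDB11BB1B99A21285A330664A76954265E8C:6:" | gpg --import-ownertrust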

So everything is fine, isn’t it? Alas, not quite.

jas@latte:~$ ssh-add -L
The agent has no identities.
jas@latte:~$ 

Tracking this down, I now realize that GNOME’s keyring is used for SSH but GnuPG’s gpg-agent is used for GnuPG. GnuPG uses the environment variable GPG_AGENT_INFO to connect to an agent, and SSH uses the SSH_AUTH_SOCK environment variable to find its agent. The filenames used below leak the knowledge that gpg-agent is used for GnuPG but GNOME keyring is used for SSH.

jas@latte:~$ echo $GPG_AGENT_INFO 
/run/user/1000/gnupg/S.gpg-agent:0:1
jas@latte:~$ echo $SSH_AUTH_SOCK 
/run/user/1000/keyring/ssh
jas@latte:~$ 

Here the same recipe as in my previous blog post works. This time GNOME keyring only has to be disabled for SSH. Disabling GNOME keyring is not sufficient; you also need gpg-agent to be started with enable-ssh-support. The simplest way to achieve that is to add a line to ~/.gnupg/gpg-agent.conf as follows. When you log in, the script /etc/X11/Xsession.d/90gpg-agent will set the environment variables GPG_AGENT_INFO and SSH_AUTH_SOCK. The latter variable is only set if enable-ssh-support is mentioned in the gpg-agent configuration.

jas@latte:~$ mkdir ~/.config/autostart
jas@latte:~$ cp /etc/xdg/autostart/gnome-keyring-ssh.desktop ~/.config/autostart/
jas@latte:~$ echo 'Hidden=true' >> ~/.config/autostart/gnome-keyring-ssh.desktop 
jas@latte:~$ echo enable-ssh-support >> ~/.gnupg/gpg-agent.conf 
jas@latte:~$ 

Log out from GNOME and log in again. Now you should see ssh-add -L working.

jas@latte:~$ ssh-add -L
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDFP+UOTZJ+OXydpmbKmdGOVoJJz8se7lMs139T+TNLryk3EEWF+GqbB4VgzxzrGjwAMSjeQkAMb7Sbn+VpbJf1JDPFBHoYJQmg6CX4kFRaGZT6DHbYjgia59WkdkEYTtB7KPkbFWleo/RZT2u3f8eTedrP7dhSX0azN0lDuu/wBrwedzSV+AiPr10rQaCTp1V8sKbhz5ryOXHQW0Gcps6JraRzMW+ooKFX3lPq0pZa7qL9F6sE4sDFvtOdbRJoZS1b88aZrENGx8KSrcMzARq9UBn1plsEG4/3BRv/BgHHaF+d97by52R0VVyIXpLlkdp1Uk4D9cQptgaH4UAyI1vr cardno:000601740323
jas@latte:~$ 
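
As a final check, SSH_AUTH_SOCK should now point at gpg-agent’s SSH socket rather than the GNOME keyring socket; the exact path below is what GnuPG 2.1 uses on my system and may differ elsewhere.

echo $SSH_AUTH_SOCK
gpgconf --list-dirs | grep agent-ssh-socket
# both should refer to something like /run/user/1000/gnupg/S.gpg-agent.ssh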

Topics for further discussion or research include 1) whether scdaemon, dirmngr and/or pcscd should be pre-installed on Debian desktop systems; 2) whether gpg --card-status should attempt to import the public key and secret key stub automatically; 3) why GNOME keyring is used by default for SSH rather than gpg-agent; 4) whether GNOME keyring should support smartcards, or whether it is better to always use gpg-agent for GnuPG/SSH; and 5) whether something could or should be done to automatically infer the trust setting for a secret key.

Enjoy!

GPS on Replicant 6

I use Replicant on my main Samsung S3 mobile phone. Replicant is a fully free Android distribution. One consequence of being “fully free” is that some functionality does not work properly, because the hardware requires non-free software. I am in the process of upgrading my main phone to the latest beta builds of Replicant 6. Getting GPS to work on Replicant/S3 is not that difficult. I have made the decision that I am willing to compromise on freedom a bit for my Geocaching hobby. I have written before about how to get GPS to work on Replicant 4.0 and GPS on Replicant 4.2. When I upgraded to Wolfgang’s Replicant 6 build back in September 2016, it took some time to figure out how to get GPS to work. I prepared notes on non-free firmware on Replicant 6 which included a section on getting GPS to work. Unfortunately, that method requires that you build your own image and have access to the build tree, which is not for everyone. This writeup explains how to get GPS to work on Replicant 6 without building your own image. Wolfgang already explained how to add all other non-free firmware to Replicant 6, but his writeup did not cover GPS. The reason is that GPS requires non-free software to run on your main CPU. You should understand the consequences of this before proceeding!

The first step is to download a Replicant 6.0 image; currently they are available from the Replicant 6.0 forum thread. Download the replicant-6.0-i9300.zip file and flash it to your phone as usual. Make sure everything (except GPS, of course) works after loading whatever other non-free firmware you may want (Wifi, Bluetooth, etc.) using "./firmwares.sh i9300 all". You can install the Geocaching client c:geo via F-Droid by adding fdroid.cgeo.org as a separate repository. Start the app and verify that GPS does not work. Keep the replicant-6.0-i9300.zip file around; you will need it later.

The tricky part about GPS is that the daemon is started through the init system of Android, specified by the file /init.target.rc. Replicant ships with the GPS part commented out. To modify this file, we need to bring out our little toolbox. Modifying the file on the device itself will not work: the root filesystem is extracted from a ramdisk file on every boot, so any changes made to the file will not be persistent. The file /init.target.rc is stored in the boot.img ramdisk, and that is what we need to modify to make a persistent change.

First we need the unpackbootimg and mkbootimg tools. If you are lucky, you might find them pre-built for your operating system. I am using Debian and I couldn’t find them easily. Building them from scratch is, however, not that difficult. Assuming you have a normal build environment (i.e., apt-get install build-essential), try the following to build the tools. I was inspired by a post on unpacking and editing boot.img for some of the following instructions.

git clone https://github.com/CyanogenMod/android_system_core.git
cd android_system_core/
git checkout cm-13.0 
cd mkbootimg/
gcc -o ./mkbootimg -I ../include ../libmincrypt/*.c ./mkbootimg.c
gcc -o ./unpackbootimg -I ../include ../libmincrypt/*.c ./unpackbootimg.c
sudo cp mkbootimg unpackbootimg /usr/local/bin/

You are now ready to unpack the boot.img file. You will need the replicant ZIP file in your home directory. Also download the small patch I made for the init.target.rc file: https://gitlab.com/snippets/1639447. Save the patch as replicant-6-gps-fix.diff in your home directory.

mkdir t
cd t
unzip ~/replicant-6.0-i9300.zip 
unpackbootimg -i ./boot.img
mkdir ./ramdisk
cd ./ramdisk/
gzip -dc ../boot.img-ramdisk.gz | cpio -imd
patch < ~/replicant-6-gps-fix.diff 

Assuming the patch applied correctly (you should see output like "patching file init.target.rc" at the end) you will now need to put the ramdisk back together.

find . ! -name . | LC_ALL=C sort | cpio -o -H newc -R root:root | gzip > ../new-boot.img-ramdisk.gz
cd ..
mkbootimg --kernel ./boot.img-zImage \
--ramdisk ./new-boot.img-ramdisk.gz \
--second ./boot.img-second \
--cmdline "$(cat ./boot.img-cmdline)" \
--base "$(cat ./boot.img-base)" \
--pagesize "$(cat ./boot.img-pagesize)" \
--dt ./boot.img-dt \
--ramdisk_offset "$(cat ./boot.img-ramdisk_offset)" \
--second_offset "$(cat ./boot.img-second_offset)" \
--tags_offset "$(cat ./boot.img-tags_offset)" \
--output ./new-boot.img

Reboot your phone to the bootloader:

adb reboot bootloader

Then flash the new boot image back to your phone:

heimdall flash --BOOT new-boot.img

The phone will restart. To finalize things, you need the non-free GPS software components glgps, gps.exynos4.so and gps.cer. Previously I used a complicated method involving sdat2img.py to extract these files from a CyanogenMod 13.x archive. Fortunately, Lineage OS is now offering downloads containing the relevant files too. You will need to download some files, extract them, and load them onto your phone.

wget https://mirrorbits.lineageos.org/full/i9300/20170125/lineage-14.1-20170125-experimental-i9300-signed.zip
mkdir lineage
cd lineage
unzip ../lineage-14.1-20170125-experimental-i9300-signed.zip
adb root
adb wait-for-device
adb remount
adb push system/bin/glgps /system/bin/
adb push system/lib/hw/gps.exynos4.vendor.so /system/lib/hw/gps.exynos4.so
adb push system/bin/gps.cer /system/bin/
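
Before rebooting, it does not hurt to verify that the three files ended up where they should and that glgps is executable (a sanity check of my own, not strictly required):

adb shell ls -l /system/bin/glgps /system/lib/hw/gps.exynos4.so /system/bin/gps.cer
adb shell chmod 755 /system/bin/glgps   # only if it is not already executable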

Now reboot your phone and start c:geo and it should find some satellites. Congratulations!

Why I don’t Use 2048 or 4096 RSA Key Sizes

I have used non-standard RSA key sizes for maybe 15 years; one example is my old OpenPGP key created in 2002. By non-standard key sizes, I mean an RSA key size that is not 2048 or 4096. I do this when I generate OpenPGP/SSH keys (using GnuPG with a smartcard like this) and PKIX certificates (using GnuTLS or OpenSSL, e.g. for XMPP or for HTTPS). People sometimes ask me why. I haven’t seen anyone talk about this, or provide a writeup, that is consistent with my views. So I wanted to write about my motivation, so that it is easy for me to refer to, and hopefully to inspire others to think similarly. Or to provoke discussion and disagreement — that’s fine, and hopefully I will learn something.

Before proceeding, here is some context: When building new things, it is usually better to use the elliptic-curve algorithm Ed25519 instead of RSA. There is also ECDSA — which has had a comparatively slow uptake, for a number of reasons — that is widely available and is a reasonable choice when Ed25519 is not available. There are also post-quantum algorithms, but they are newer and adopting them today requires a careful cost-benefit analysis.

First some background. RSA is an asymmetric public-key scheme that relies on private keys built from distinct prime numbers (typically two). The size of their product, called the modulus n, is usually expressed in bit length and forms the key size. Historically RSA key sizes used to be a couple of hundred bits, then 512 bits settled as a commonly used size. With better understanding of RSA security levels, the common key size evolved into 768, 1024, and later 2048. Today’s recommendations (see keylength.com) suggest that 2048 is on the weak side for long-term keys (5+ years), so there has been a trend to jump to 4096. The performance of RSA private-key operations starts to suffer at 4096, and the bandwidth requirements are causing issues in some protocols. Today 2048 and 4096 are the most common choices.

My preference for non-2048/4096 RSA key sizes is based on the simple and naïve observation that if I were to build an RSA key cracker, there is some likelihood that I would need to optimize the implementation for a particular key size in order to get good performance. Since 2048 and 4096 are dominant today, and 1024 was dominant some years ago, it may be feasible to build optimized versions for these three key sizes.

My choice is a conservative decision based on speculation, and speculation on several levels. First I assume that there is an attack on RSA that we don’t know about. Then I assume that this attack is not as efficient for some key sizes as for others, either on a theoretical level, at the implementation level (optimized libraries for certain characteristics), or at an economic/human level (the decision to focus on common key sizes). Then I assume that by avoiding the efficient key sizes I can increase the difficulty to a sufficient level.

Before analyzing whether those assumptions may even remotely make sense, it is useful to understand what is lost by selecting uncommon key sizes, i.e., the cost of the trade-off.

A significant burden would be if implementations didn’t allow selecting unusual key sizes. In my experience, enough common applications support uncommon key sizes, for example GnuPG, OpenSSL, OpenSSH, Firefox, and Chrome. Some applications limit the permitted choices; this appears to be rare, but I have encountered it once. Some environments also restrict the permitted choices: for example, I have experienced that LetsEncrypt has introduced a requirement for RSA key sizes to be a multiple of 8. I noticed this since I chose an RSA key size of 3925 for my blog and received a certificate from LetsEncrypt in December 2015; however, during renewal in 2016 it led to an error message about the RSA key size. Some commercial CAs that I have used before restrict the RSA key size to one of 1024, 2048 or 4096 only. Some smartcards also restrict the key sizes; sadly the YubiKey has this limitation. So it is not always possible, but possible often enough for me to be worthwhile.
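
To illustrate (a minimal sketch; the 3925-bit size is just the value I used for my blog certificate), generating keys with uncommon sizes is a one-liner in the tools mentioned above:

openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:3925 -out key.pem
certtool --generate-privkey --bits 3925 --outfile key.pem     # GnuTLS
ssh-keygen -t rsa -b 3925 -f ~/.ssh/id_rsa_3925               # OpenSSH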

Another cost is that RSA signature operations are slowed down. This is because, as I understand it, the underlying modular arithmetic tends to be fastest when the key size is a power of two (a bit pattern of a 1 followed by several 0’s, like 2048 or 4096), so other sizes can be somewhat slower to compute with. I have not done benchmarks, but I have not experienced that this is a practical problem for me. I don’t notice RSA operations in the flurry of all the other operations (network, IO) that are usually involved in my daily life. Deploying this on a large scale may have effects, of course, so benchmarks would be interesting.
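
For the curious, here is a rough and unscientific way to get a feeling for the difference, assuming OpenSSL is available (a sketch, not a proper benchmark):

openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:4096 -out k4096.pem
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:3925 -out k3925.pem
head -c 1M /dev/urandom > data
time for i in $(seq 100); do openssl dgst -sha256 -sign k4096.pem -out /dev/null data; done
time for i in $(seq 100); do openssl dgst -sha256 -sign k3925.pem -out /dev/null data; done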

Back to the speculation that leads me to this choice. The first assumption is that there is an attack on RSA that we don’t know about. In my mind, until there are proofs that the currently known attacks (GNFS-based attacks) are the best that can be found, or at least some heuristic argument that we can’t do better than the current attacks, the probability of an unknown RSA attack is therefore, as strange as it may sound, 100%.

The second assumption is that the unknown attack(s) are not as efficient for some key sizes as for others. That statement can also be expressed like this: the cost to mount the attack is higher for some key sizes than for others.

At the implementation level, it seems reasonable to assume that implementing an RSA cracker for arbitrary key sizes could be more difficult and costlier than focusing on particular key sizes. Focusing on some key sizes allows optimization and less complex code.

At the mathematical level, the assumption that the attack would be costlier for certain types of RSA key sizes appears dubious. It depends on the kind of algorithm the unknown attack is. For something similar to GNFS attacks, I believe the same algorithm applies equally for an RSA key size of 2048, 2730 and 4096, and that the running time depends mostly on the key size. Other algorithms that could crack RSA, such as some approximation algorithms, do not seem likely to be thwarted by using non-standard RSA key sizes either. I am not a mathematician though.

At the economic or human level, it seems reasonable to say that if you can crack 95% of all keys out there (sizes 1024, 2048, 4096) then that is good enough, and cracking the last 5% is just diminishing returns on the investment. Here I am making up the 95% number. Currently, I would guess that more than 95% of all RSA key sizes on the Internet are 1024, 2048 or 4096 though. So this aspect holds as long as people behave as they have done.

The final assumption is that by using non-standard key sizes I raise the bar sufficiently high to make an attack impossible. To be honest, this scenario appears unlikely. However, it might increase the cost somewhat, by a factor of two or five, which might make someone target lower-hanging fruit instead.

Putting my argument together, I have 1) identified some downsides of using non-standard RSA key sizes and discussed their costs and implications, and 2) mentioned some speculative upsides of using non-standard key sizes. I am not aware of any argument that my speculation has a 0% chance of being true; it appears there is some remote chance, higher than 0%, that it is. Therefore, my personal conservative approach is to hedge against this unlikely, but still possible, attack scenario by paying the moderate cost of using non-standard RSA key sizes. Of course, the QA engineer in me also likes to break things by not doing what everyone else does, so I end this with an ObXKCD.

Let’s Encrypt Clients

Like many others, I have been following the launch of Let’s Encrypt. Let’s Encrypt is a new zero-cost X.509 Certificate Authority that supports the Automated Certificate Management Environment (ACME) protocol. ACME allows you to automate the creation and retrieval of HTTPS server certificates. As anyone who has maintained a number of HTTPS servers can attest, this process has unfortunately been manual, error-prone, and different between CAs.

On some of my personal domains, such as this blog.josefsson.org, I have been using the CAcert authority to sign the HTTPS server certificate. The problem with CAcert is that its trust anchors aren’t shipped with sufficiently many operating systems and web browsers. The user experience is similar to reaching a self-signed server certificate. For organization-internal servers that you don’t want to trust external parties for, I continue to believe that running your own CA and distributing it to your users is better than using a public CA (compare my XMPP server certificate setup). But for public servers, availability without prior configuration is more important. Therefore I decided that my public HTTPS servers should use a CA/Browser Forum-approved CA with support for ACME, and as long as Let’s Encrypt is trustworthy and zero-cost, they are a good choice.

I was in need of a free software ACME client, and set out to research what’s out there. Unfortunately, I did not find any web pages that listed the available options and compared them. The Let’s Encrypt CA points to the “official” Let’s Encrypt client, written by Jakub Warmuz, James Kasten, Peter Eckersley and several others. The manual contains pointers to two other clients in a seemingly unrelated section. Those clients are letsencrypt-nosudo by Daniel Roesler et al, and simp_le by (again!) Jakub Warmuz. From the letsencrypt.org client-dev mailing list I also found letsencrypt.sh by Gerhard Heift and LetsEncryptShell by Jan Mojžíš. Is anyone aware of other ACME clients?

By comparing these clients, I learned what I did not like in them. I wanted something small so that I can audit it. I wanted something that doesn’t require root access. Preferably, it should be able to run on my laptop, since I wasn’t ready to run something on the servers. Generally, it has to be secure, which implies something about how it approaches private key handling. The official letsencrypt client can do everything, and has plugins for various server software to automate the ACME negotiation. All the cryptographic operations appear to be hidden inside the client, which usually means it is not flexible. I really did not like how it was designed; it looks like your typical monolithic proof-of-concept design. The simp_le client looked much cleaner, and gave me a good feeling. The letsencrypt.sh client is simple and written as a /bin/sh shell script, but it appeared a bit too simplistic. LetsEncryptShell looked decent, but I wanted something more automated.

What all of these clients did not have, and what the letsencrypt-nosudo client had, was the ability to let me do the private-key operations myself. All the operations are done interactively on the command line using OpenSSL. This would allow me to put the ACME user private key, and the HTTPS private key, on a YubiKey, using its PIV applet and techniques similar to what I used to create my SSH host CA. While the HTTPS private key has to be available on the HTTPS server (it is used to set up TLS connections), I wouldn’t want the ACME user private key to be available there. Similarly, I wouldn’t want to have the ACME or the HTTPS private key on my laptop. The letsencrypt-nosudo tool is otherwise rougher around the edges than the cleaner simp_le client. However, the private-key handling aspect was the deciding matter for me.

After fixing some hard-coded limitations on RSA key sizes, getting the cert was as simple as following the letsencrypt-nosudo instructions. I’ll follow up with a later post describing how to put the ACME user private key and the HTTPS server certificate private key on a YubiKey and how to use that with letsencrypt-nosudo.

So you can now enjoy browsing my blog over HTTPS! Thank you Let’s Encrypt!

Automatic Replicant Backup over USB using rsync

I have been using Replicant on the Samsung SIII I9300 for over two years. I have written before on taking a backup of the phone using rsync but recently I automated my setup as described below. This work was prompted by a screen accident with my phone that caused it to die, and I noticed that I hadn’t taken regular backups. I did not lose any data this time, since typically all content I create on the device is immediately synchronized to my clouds. Photos are uploaded by the ownCloud app, SMS Backup+ saves SMS and call logs to my IMAP server, and I use DAVDroid for synchronizing contacts, calendar and task lists with my instance of ownCloud. Still, I strongly believe in regular backups of everything, so it was time to automate this.

For my use-case, taking backups of the phone whenever I connect it to one of my laptops is sufficient. I typically connect it to my laptops for charging at least every other day. My laptops are all running Debian, but this should be applicable to most modern GNU/Linux systems. This is not Replicant-specific, although you need a rooted phone. I thought that automating this would be simple, but I got to learn the ins and outs of systemd and udev in the process, and this ended up taking the better part of an evening.

I started out adding an udev rule and a small script, thinking I could invoke the backup process from the udev rule. However, rsync would magically die after running for a few seconds. After an embarrassingly long debugging session, I finally found someone with a similar problem, which led me to a nice writeup on the topic of running long-running services on udev events. I created a file /etc/udev/rules.d/99-android-backup.rules with the following content:

ACTION=="add", SUBSYSTEMS=="usb", ENV{ID_SERIAL_SHORT}=="323048a5ae82918b", TAG+="systemd", ENV{SYSTEMD_WANTS}+="android-backup@$env{ID_SERIAL_SHORT}.service"
ACTION=="add", SUBSYSTEMS=="usb", ENV{ID_SERIAL_SHORT}=="4df9e09c25e75f63", TAG+="systemd", ENV{SYSTEMD_WANTS}+="android-backup@$env{ID_SERIAL_SHORT}.service"

The serial numbers correspond to the device serial numbers of the two devices I wish to back up. The adb devices command will print them for you, and you need to replace my values with the values from your phones. Next I created a systemd unit describing a oneshot service. The file /etc/systemd/system/android-backup@.service has the following content:

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/android-backup %I

The at-sign (“@”) in the service filename signals that this is a service that takes a parameter. I’m not enough of an udev/systemd person to explain these two files using the proper terminology, but at least you can pattern-match and follow the basic idea of them: the udev rule matches the devices that I’m interested in (I don’t want this to happen to all random Android devices I attach, hence matching against known serial numbers), and it causes a systemd service with a parameter to be started. The systemd service file describes the script to run, and passes on the parameter.
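
After creating the two files, udev and systemd need to pick them up; something along these lines should do it (the monitor command is optional and just shows what happens when the phone is plugged in):

sudo udevadm control --reload-rules           # re-read /etc/udev/rules.d/
sudo systemctl daemon-reload                  # re-read the unit files
sudo udevadm monitor --environment --udev     # optional: watch events while plugging in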

Now for the juicy part, the script. I have /usr/local/sbin/android-backup with the following content.

#!/bin/bash

# Pull a full backup from an Android device over rsync.
# Invoked by systemd (android-backup@.service) with the device
# serial number as the only argument.

DIRBASE=/var/backups/android
export ANDROID_SERIAL="$1"

# Send all output to syslog so the run can be followed with
# "tail -F /var/log/syslog".
exec > >(logger -t android-backup) 2>&1

# Safety catch: only backup into a directory that was created manually.
if ! test -d "$DIRBASE-$ANDROID_SERIAL"; then
    echo "could not find directory: $DIRBASE-$ANDROID_SERIAL"
    exit 1
fi

set -x

# Wait for the device, switch adb to root, and wait for it to come back.
adb wait-for-device
adb root
adb wait-for-device
# Create a minimal read-only rsyncd configuration on the phone and start
# the rsync daemon there, listening on localhost only.
adb shell printf "address 127.0.0.1\nuid = root\ngid = root\n[root]\n\tpath = /\n" \> /mnt/secure/rsyncd.conf
adb shell rsync --daemon --no-detach --config=/mnt/secure/rsyncd.conf &
# Forward laptop port 6010 to the phone's rsyncd port 873, give the
# forward a moment to come up, and pull everything to the laptop.
adb forward tcp:6010 tcp:873
sleep 2
rsync -av --delete --exclude /dev --exclude /acct --exclude /sys --exclude /proc rsync://localhost:6010/root/ $DIRBASE-$ANDROID_SERIAL/
: rc $?
# Clean up the port forward and the temporary rsyncd configuration.
adb forward --remove tcp:6010
adb shell rm -f /mnt/secure/rsyncd.conf

This script warrants more detailed explanation. Backups are placed under, e.g., /var/backups/android-323048a5ae82918b/ for later off-site backup (you do back up your laptop, right?). You have to manually create this directory, as a safety catch to not wildly rsync data into non-existing directories. The script logs everything using syslog, so run a tail -F /var/log/syslog& when setting this up. You may want to reduce verbosity of rsync if you prefer (replace rsync -av with rsync -a). The script runs adb wait-for-device which you rightly guessed will wait for the device to settle. Next adb root is invoked to get root on the device (reading all files from the system naturally requires root). It takes some time to switch, so another wait-for-device call is needed. Next a small rsyncd configuration file is created in /mnt/secure/rsyncd.conf on the phone. The file tells rsync to listen on localhost, run as root, and use / as the path. By default, rsyncd is read-only, so the laptop will not be able to upload any data over rsync, just read data out. Next rsync is started on the phone. The adb forward command forwards port 6010 on the laptop to port 873 on the phone (873 is the default rsyncd port). Unfortunately, setting up the TCP forward appears to take some time, and adb wait-for-device will not wait for that to complete, hence an ugly sleep 2 at this point. Next is the rsync invocation itself, which just pulls in everything from the phone to the laptop, excluding some usual suspects. The somewhat cryptic : rc $? merely logs the exit code of the rsync process into syslog. Finally we clean up the TCP forward and remove the rsyncd.conf file that was temporarily created.
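
For reference, the printf in the script expands to the following /mnt/secure/rsyncd.conf on the phone:

address 127.0.0.1
uid = root
gid = root
[root]
        path = /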

This setup appears stable to me. I can plug in a phone and a backup will be taken. I can even plug in both my devices at the same time, and they will run at the same time. If I unplug a device, the script or rsync will error out and systemd cleans up.

If anyone has ideas on how to avoid the ugly temporary rsyncd.conf file or the ugly sleep 2, I’m interested. It would also be nice to not have to do the ‘adb root’ dance, and instead have the phone start the rsync daemon when connecting to my laptop somehow. TCP forwarding might be troublesome on a multi-user system, but my laptops aren’t. Killing rsync on the phone is probably a good idea too. If you have ideas on how to fix any of this, other feedback, or questions, please let me know!
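
For completeness, one untested idea for getting rid of the sleep 2 would be to poll the forwarded port with bash's /dev/tcp until it accepts a connection, instead of sleeping a fixed amount of time:

# untested sketch: wait up to ~10 seconds for the forwarded port to answer
for i in $(seq 1 20); do
    (exec 3<>/dev/tcp/localhost/6010) 2>/dev/null && break
    sleep 0.5
done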