GPS on Replicant 6

I use Replicant on my main Samsung S3 mobile phone. Replicant is a fully free Android distribution. One consequence of being “fully free” is that some functionality does not work, because the hardware requires non-free software. I am in the process of upgrading my main phone to the latest beta builds of Replicant 6. Getting GPS to work on Replicant/S3 is not that difficult, and I have decided that I am willing to compromise on freedom a bit for my Geocaching hobby. I have written before about how to get GPS to work on Replicant 4.0 and on Replicant 4.2. When I upgraded to Wolfgang’s Replicant 6 build back in September 2016, it took some time to figure out how to get GPS to work. I prepared notes on non-free firmware on Replicant 6 which included a section on getting GPS to work. Unfortunately, that method requires that you build your own image and have access to the build tree, which is not for everyone. This writeup explains how to get GPS to work on Replicant 6 without building your own image. Wolfgang already explained how to add all other non-free firmware to Replicant 6, but that guide did not cover GPS. The reason is that GPS requires non-free software running on your main CPU. You should understand the consequences of this before proceeding!

The first step is to download a Replicant 6.0 image; currently they are available from the Replicant 6.0 forum thread. Download the replicant-6.0-i9300.zip file and flash it to your phone as usual. Make sure everything (except GPS of course) works after loading whatever other non-free firmware you may want (WiFi, Bluetooth etc) using "./firmwares.sh i9300 all". You can install the Geocaching client c:geo via F-Droid by adding fdroid.cgeo.org as a separate repository. Start the app and verify that GPS does not work. Keep the replicant-6.0-i9300.zip file around, as you will need it later.

The tricky part about GPS is that the daemon is started through the init system of Android, specified by the file /init.target.rc. Replicant ships with the GPS part commented out. To modify this file, we need to bring out our little toolbox. Modifying the file on the device itself will not work, because the root filesystem is extracted from a ramdisk file on every boot, so any changes made to the file will not be persistent. The file /init.target.rc is stored in the boot.img ramdisk, and that is what we need to modify to make a persistent change.
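Out of curiosity you can look at the file on a running device first; the GPS-related lines are the ones shipped commented out. This is only a sanity check (and if grep is not available in your build, adb pull the file and inspect it locally instead):

adb shell grep -i gps /init.target.rc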

First we need the unpackbootimg and mkbootimg tools. If you are lucky, you might find them pre-built for your operating system; I am using Debian and could not find them easily. Building them from scratch is, however, not that difficult. Assuming you have a normal build environment (i.e., apt-get install build-essential), try the following to build the tools. I was inspired by a post on unpacking and editing boot.img for some of the following instructions.

git clone https://github.com/CyanogenMod/android_system_core.git
cd android_system_core/
git checkout cm-13.0 
cd mkbootimg/
gcc -o ./mkbootimg -I ../include ../libmincrypt/*.c ./mkbootimg.c
gcc -o ./unpackbootimg -I ../include ../libmincrypt/*.c ./unpackbootimg.c
sudo cp mkbootimg unpackbootimg /usr/local/bin/

You are now ready to unpack the boot.img file. You will need the replicant ZIP file in your home directory. Also download the small patch I made for the init.target.rc file: https://gitlab.com/snippets/1639447. Save the patch as replicant-6-gps-fix.diff in your home directory.

mkdir t
cd t
unzip ~/replicant-6.0-i9300.zip 
unpackbootimg -i ./boot.img
mkdir ./ramdisk
cd ./ramdisk/
gzip -dc ../boot.img-ramdisk.gz | cpio -imd
patch < ~/replicant-6-gps-fix.diff 

Assuming the patch applied correctly (you should see output like "patching file init.target.rc" at the end), you will now need to put the ramdisk back together.

find . ! -name . | LC_ALL=C sort | cpio -o -H newc -R root:root | gzip > ../new-boot.img-ramdisk.gz
cd ..
mkbootimg --kernel ./boot.img-zImage \
--ramdisk ./new-boot.img-ramdisk.gz \
--second ./boot.img-second \
--cmdline "$(cat ./boot.img-cmdline)" \
--base "$(cat ./boot.img-base)" \
--pagesize "$(cat ./boot.img-pagesize)" \
--dt ./boot.img-dt \
--ramdisk_offset "$(cat ./boot.img-ramdisk_offset)" \
--second_offset "$(cat ./boot.img-second_offset)" \
--tags_offset "$(cat ./boot.img-tags_offset)" \
--output ./new-boot.img

Reboot your phone to the bootloader:

adb reboot bootloader

Then flash the new boot image back to your phone:

heimdall flash --BOOT new-boot.img

The phone will restart. To finalize things, you need the non-free GPS software components glgps, gps.exynos4.so and gps.cer. Previously I used a complicated method involving sdat2img.py to extract these files from a CyanogenMod 13.x archive. Fortunately, Lineage OS is now offering downloads containing the relevant files too. You will need to download the archive, extract it, and push the files onto your phone.

wget https://mirrorbits.lineageos.org/full/i9300/20170125/lineage-14.1-20170125-experimental-i9300-signed.zip
mkdir lineage
cd lineage
unzip ../lineage-14.1-20170125-experimental-i9300-signed.zip
adb root
adb wait-for-device
adb remount
adb push system/bin/glgps /system/bin/
adb push system/lib/hw/gps.exynos4.vendor.so /system/lib/hw/gps.exynos4.so
adb push system/bin/gps.cer /system/bin/

Now reboot your phone, start c:geo, and it should find some satellites. Congratulations!
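If c:geo does not get a fix, one rough way to check whether the glgps daemon is doing anything at all is to watch the Android log while an app is requesting location. The exact messages vary between builds, so treat this only as a hint:

adb logcat | grep -i -e gps -e glgps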

Why I don’t Use 2048 or 4096 RSA Key Sizes

I have used non-standard RSA key sizes for maybe 15 years, for example in my old OpenPGP key created in 2002. By non-standard key sizes, I mean RSA key sizes that are not 2048 or 4096. I do this when I generate OpenPGP/SSH keys (using GnuPG with a smartcard like this) and PKIX certificates (using GnuTLS or OpenSSL, e.g. for XMPP or for HTTPS). People sometimes ask me why. I haven’t seen anyone talk about this, or provide a writeup, that is consistent with my views, so I wanted to write about my motivation, so that it is easy for me to refer to, and hopefully to inspire others to think similarly. Or to provoke discussion and disagreement — that’s fine, and hopefully I will learn something.

Before proceeding, here is some context: when building new things, it is usually better to use the elliptic-curve algorithm Ed25519 instead of RSA. There is also ECDSA — which has had a comparatively slow uptake, for a number of reasons — which is widely available and is a reasonable choice when Ed25519 is not available. There are also post-quantum algorithms, but they are newer, and adopting them today requires a careful cost-benefit analysis.

First some background. RSA is an asymmetric public-key scheme; its keys are built around a modulus that is the product of distinct prime numbers (typically two). The size of this modulus n, usually expressed as a bit length, forms the key size. Historically RSA key sizes used to be a couple of hundred bits, then 512 bits settled as a commonly used size. With better understanding of RSA security levels, the common key size evolved into 768, 1024, and later 2048. Today’s recommendations (see keylength.com) suggest that 2048 is on the weak side for long-term keys (5+ years), so there has been a trend to jump to 4096. The performance of RSA private-key operations starts to suffer at 4096, and the bandwidth requirements are causing issues in some protocols. Today 2048 and 4096 are the most common choices.

My preference for non-2048/4096 RSA key sizes is based on the simple and naïve observation that if I were to build an RSA key cracker, there is some likelihood that I would need to optimize the implementation for a particular key size in order to get good performance. Since 2048 and 4096 are dominant today, and 1024 was dominant some years ago, it may be feasible to build optimized versions for these three key sizes.

My observation is a conservative decision based on speculation, and speculation on several levels. First I assume that there is an attack on RSA that we don’t know about. Then I assume that this attack is not as efficient for some key sizes as for others, either on a theoretical level, at the implementation level (optimized libraries for certain characteristics), or at an economic/human level (a decision to focus on common key sizes). Then I assume that by avoiding the efficient key sizes I can increase the difficulty to a sufficient level.

Before analyzing whether those assumptions even remotely may make sense, it is useful to understand what is lost by selecting uncommon key sizes. This is to understand the cost of the trade-off.

A significant burden would be if implementations didn’t allow selecting unusual key sizes. In my experience, enough common applications support uncommon key sizes, for example GnuPG, OpenSSL, OpenSSH, Firefox, and Chrome. Some applications limit the permitted choices; this appears to be rare, but I have encountered it once. Some environments also restrict the permitted choices; for example, I have experienced that Let’s Encrypt has introduced a requirement for RSA key sizes to be a multiple of 8. I noticed this since I chose an RSA key size of 3925 for my blog and received a certificate from Let’s Encrypt in December 2015, however during renewal in 2016 it led to an error message about the RSA key size. Some commercial CAs that I have used before restrict the RSA key size to one of 1024, 2048 or 4096 only. Some smart-cards also restrict the key sizes; sadly the YubiKey has this limitation. So it is not always possible, but possible often enough for me to be worthwhile.
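Generating such a key is straightforward with common tools. As a minimal sketch with OpenSSL (the 3925-bit size just mirrors the example above; pick whatever size you like):

# generate an RSA key with a non-standard modulus size
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:3925 -out key.pem
# confirm the resulting key size
openssl pkey -in key.pem -noout -text | head -1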

Another cost is that RSA signature operations are slowed down. This is because the exponentiation function is faster than multiplication, and when the key size, written in binary, is a 1 followed by several 0’s (i.e. a power of two), it is quicker to compute. I have not done benchmarks, but I have not experienced that this is a practical problem for me. I don’t notice RSA operations in the flurry of all the other operations (network, IO) usually involved in my daily life. Deploying this on a large scale may have effects, of course, so benchmarks would be interesting.
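If you want a rough feel for the difference yourself, a crude sketch along these lines should do; it assumes the key.pem generated above and signs an arbitrary file repeatedly (openssl speed only supports a fixed set of sizes, so timing the signing directly is the simplest option):

# sign the same file a number of times and time it
time for i in $(seq 1 100); do
  openssl dgst -sha256 -sign key.pem /etc/hostname > /dev/null
done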

Back to the speculation that leads me to this choice. The first assumption is that there is an attack on RSA that we don’t know about. In my mind, until there is proof that the currently known attacks (GNFS-based attacks) are the best that can be found, or at least some heuristic argument that we can’t do better than the current attacks, the probability of an unknown RSA attack is therefore, as strange as it may sound, 100%.

The second assumption is that the unknown attack(s) are not as efficient for some key sizes as for others. That statement can also be expressed like this: the cost to mount the attack is higher for some key sizes than for others.

At the implementation level, it seems reasonable to assume that implementing an RSA cracker for arbitrary key sizes could be more difficult and costlier than focusing on particular key sizes. Focusing on a few key sizes allows optimization and less complex code.

At the mathematical level, the assumption that the attack would be costlier for certain RSA key sizes appears dubious. It depends on what kind of algorithm the unknown attack is. For something similar to GNFS attacks, I believe the same algorithm applies equally to RSA key sizes of 2048, 2730 and 4096, and that the running time depends mostly on the key size. Other algorithms that could crack RSA, such as some approximation algorithms, do not seem likely to be thwarted by non-standard RSA key sizes either. I am not a mathematician though.

At the economic or human level, it seems reasonable to say that if you can crack 95% of all keys out there (sizes 1024, 2048, 4096) then that is good enough, and cracking the last 5% offers only diminishing returns on the investment. Here I am making up the 95% number, but I would guess that more than 95% of all RSA keys on the Internet currently have sizes of 1024, 2048 or 4096. So this aspect holds as long as people keep behaving as they have done.

The final assumption is that by using non-standard key sizes I raise the bar sufficiently high to make an attack impossible. To be honest, this scenario appears unlikely. However it might increase the cost somewhat, by a factor of two or five, which might make someone target lower-hanging fruit instead.

Putting my argument together, I have 1) identified some downsides of using non-standard RSA key sizes and discussed their costs and implications, and 2) mentioned some speculative upsides of using non-standard key sizes. I am not aware of any argument that the odds of my speculation being true are exactly 0%; it appears there is some remote chance, higher than 0%, that it is true. Therefore, my personal conservative approach is to hedge against this unlikely, but still possible, attack scenario by paying the moderate cost of using non-standard RSA key sizes. Of course, the QA engineer in me also likes to break things by not doing what everyone else does, so I end this with an ObXKCD.

Let’s Encrypt Clients

As many others, I have been following the launch of Let’s Encrypt. Let’s Encrypt is a new zero-cost X.509 Certificate Authority that supports the Automated Certificate Management Environment (ACME) protocol. ACME allows you to automate the creation and retrieval of HTTPS server certificates. As anyone who has maintained a number of HTTPS servers can attest, this process has unfortunately been manual, error-prone, and different for each CA.

On some of my personal domains, such as this blog.josefsson.org, I have been using the CAcert authority to sign the HTTPS server certificate. The problem with CAcert is that its trust anchors aren’t shipped with sufficiently many operating systems and web browsers, so the user experience is similar to reaching a site with a self-signed server certificate. For organization-internal servers that you don’t want to trust external parties for, I continue to believe that running your own CA and distributing it to your users is better than using a public CA (compare my XMPP server certificate setup). But for public servers, availability without prior configuration is more important. Therefore I decided that my public HTTPS servers should use a CA/Browser Forum-approved CA with support for ACME, and as long as Let’s Encrypt is trustworthy and zero-cost, it is a good choice.

I was in need of a free software ACME client, and set out to research what’s out there. Unfortunately, I did not find any web pages that listed the available options and compared them. The Let’s Encrypt CA points to the “official” Let’s Encrypt client, written by Jakub Warmuz, James Kasten, Peter Eckersley and several others. The manual contains pointers to two other clients in a seemingly unrelated section. Those clients are letsencrypt-nosudo by Daniel Roesler et al, and simp_le by (again!) Jakub Warmuz. From the letsencrypt.org client-dev mailing list I also found letsencrypt.sh by Gerhard Heift and LetsEncryptShell by Jan Mojžíš. Is anyone aware of other ACME clients?

By comparing these clients, I learned what I did not like in them. I wanted something small, so that I can audit it. I wanted something that doesn’t require root access. Preferably, it should be able to run on my laptop, since I wasn’t ready to run something on the servers. Generally, it has to be secure, which implies something about how it approaches private key handling. The official letsencrypt client can do everything, and has plugins for various server software to automate the ACME negotiation. All the cryptographic operations appear to be hidden inside the client, which usually means it is not flexible, and I really did not like how it was designed; it looks like your typical monolithic proof-of-concept design. The simp_le client looked much cleaner and gave me a good feeling. The letsencrypt.sh client is simple and written as a /bin/sh shell script, but it appeared a bit too simplistic. LetsEncryptShell looked decent, but I wanted something more automated.

What all of these clients lacked, and what the letsencrypt-nosudo client had, was the ability to let me do the private-key operations myself. All the operations are done interactively on the command line using OpenSSL. This would allow me to put the ACME user private key, and the HTTPS private key, on a YubiKey, using its PIV applet and techniques similar to what I used to create my SSH host CA. While the HTTPS private key has to be available on the HTTPS server (it is used to set up TLS connections), I wouldn’t want the ACME user private key to be available there. Similarly, I wouldn’t want to have the ACME or the HTTPS private key on my laptop. The letsencrypt-nosudo tool is otherwise rougher around the edges than the cleaner simp_le client, but the private key handling aspect was the deciding matter for me.
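To give a flavour of what doing the private-key operations yourself means in practice, the manual steps are plain OpenSSL invocations roughly along these lines; the file names here are illustrative, and the precise commands to run are shown by letsencrypt-nosudo as you go:

# generate an ACME account key (and export its public part) plus an HTTPS server key
openssl genrsa 4096 > user.key
openssl rsa -in user.key -pubout > user.pub
openssl genrsa 4096 > domain.key
# create a certificate signing request for the domain
openssl req -new -sha256 -key domain.key -subj "/CN=blog.josefsson.org" > domain.csr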

After fixing some hard-coded limitations on RSA key sizes, getting the cert was as simple as following the letsencrypt-nosudo instructions. I’ll follow up with a later post describing how to put the ACME user private key and the HTTPS server certificate private key on a YubiKey and how to use that with letsencrypt-nosudo.

So you can now enjoy browsing my blog over HTTPS! Thank you Let’s Encrypt!

Automatic Replicant Backup over USB using rsync

I have been using Replicant on the Samsung SIII I9300 for over two years. I have written before about taking a backup of the phone using rsync, but recently I automated my setup as described below. This work was prompted by a screen accident with my phone that caused it to die, which made me notice that I hadn’t been taking regular backups. I did not lose any data this time, since typically all content I create on the device is immediately synchronized to my clouds. Photos are uploaded by the ownCloud app, SMS Backup+ saves SMS and call logs to my IMAP server, and I use DAVDroid for synchronizing contacts, calendar and task lists with my instance of ownCloud. Still, I strongly believe in regular backups of everything, so it was time to automate this.

For my use-case, taking backups of the phone whenever I connect it to one of my laptops is sufficient. I typically connect it to my laptops for charging at least every other day. My laptops are all running Debian, but this should be applicable to most modern GNU/Linux systems. This is not Replicant-specific, although you need a rooted phone. I thought that automating this would be simple, but I got to learn the ins and outs of systemd and udev in the process, and it ended up taking the better part of an evening.

I started out adding a udev rule and a small script, thinking I could invoke the backup process from the udev rule. However, rsync would magically die after running for a few seconds. After an embarrassingly long debugging session, I finally found someone with a similar problem, which led me to a nice writeup on the topic of running long-running services from udev events. I created a file /etc/udev/rules.d/99-android-backup.rules with the following content:

ACTION=="add", SUBSYSTEMS=="usb", ENV{ID_SERIAL_SHORT}=="323048a5ae82918b", TAG+="systemd", ENV{SYSTEMD_WANTS}+="android-backup@$env{ID_SERIAL_SHORT}.service"
ACTION=="add", SUBSYSTEMS=="usb", ENV{ID_SERIAL_SHORT}=="4df9e09c25e75f63", TAG+="systemd", ENV{SYSTEMD_WANTS}+="android-backup@$env{ID_SERIAL_SHORT}.service"

The serial numbers correspond to the device serial numbers of the two devices I wish to back up. The adb devices command will print them for you, and you need to replace my values with the values from your phones. Next I created a systemd unit describing a oneshot service. The file /etc/systemd/system/android-backup@.service has the following content:

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/android-backup %I

The at-sign (“@”) in the service filename signals that this is a service that takes a parameter. I’m not enough of a udev/systemd person to explain these two files using the proper terminology, but at least you can pattern-match and follow the basic idea of them: the udev rule matches the devices that I’m interested in (I don’t want this to happen for all random Android devices I attach, hence matching against known serial numbers), and it causes a systemd service with a parameter to be started. The systemd service file describes the script to run, and passes on the parameter.
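If you want to double-check the serial number as udev sees it (rather than as adb reports it), one way is to watch the udev events while plugging the phone in; something along these lines should print the ID_SERIAL_SHORT property:

udevadm monitor --udev --property --subsystem-match=usb | grep ID_SERIAL_SHORT

After editing the rules file, running udevadm control --reload-rules (or simply replugging the device) should make udev pick up the change.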

Now for the juicy part, the script. I have /usr/local/sbin/android-backup with the following content.

#!/bin/bash

DIRBASE=/var/backups/android
export ANDROID_SERIAL="$1"

# send both stdout and stderr to syslog via logger
exec > >(logger -t android-backup) 2>&1

if ! test -d "$DIRBASE-$ANDROID_SERIAL"; then
    echo "could not find directory: $DIRBASE-$ANDROID_SERIAL"
    exit 1
fi

set -x

adb wait-for-device
adb root
adb wait-for-device
adb shell printf "address 127.0.0.1\nuid = root\ngid = root\n[root]\n\tpath = /\n" \> /mnt/secure/rsyncd.conf
adb shell rsync --daemon --no-detach --config=/mnt/secure/rsyncd.conf &
adb forward tcp:6010 tcp:873
sleep 2
rsync -av --delete --exclude /dev --exclude /acct --exclude /sys --exclude /proc rsync://localhost:6010/root/ $DIRBASE-$ANDROID_SERIAL/
: rc $?
adb forward --remove tcp:6010
adb shell rm -f /mnt/secure/rsyncd.conf

This script warrants a more detailed explanation. Backups are placed under, e.g., /var/backups/android-323048a5ae82918b/ for later off-site backup (you do backup your laptop, right?). You have to create this directory manually, as a safety catch against wildly rsyncing data into non-existing directories. The script logs everything using syslog, so run a tail -F /var/log/syslog& when setting this up. You may want to reduce the verbosity of rsync if you prefer (replace rsync -av with rsync -a). The script runs adb wait-for-device which, as you rightly guessed, waits for the device to settle. Next adb root is invoked to get root on the device (reading all files from the system naturally requires root). It takes some time to switch, so another wait-for-device call is needed. Next a small rsyncd configuration file is created in /mnt/secure/rsyncd.conf on the phone. The file tells rsync to listen on localhost, run as root, and use / as the path. By default, rsyncd modules are read-only, so the host will not be able to upload any data over rsync, just read data out. Next rsync is started on the phone. The adb forward command forwards port 6010 on the laptop to port 873 on the phone (873 is the default rsyncd port). Unfortunately, setting up the TCP forward appears to take some time, and adb wait-for-device will not wait for that to complete, hence an ugly sleep 2 at this point. Next is the rsync invocation itself, which just pulls in everything from the phone to the laptop, excluding some usual suspects. The somewhat cryptic : rc $? merely logs the exit code of the rsync process into syslog. Finally we clean up the TCP forward and remove the rsyncd.conf file that was temporarily created.
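To try the setup without replugging the phone, you can create the target directory and invoke the script by hand, using one of the serial numbers from the udev rules (here my first device, as an example):

# create the destination directory (the script refuses to run without it)
sudo mkdir -p /var/backups/android-323048a5ae82918b
# run the backup by hand; watch /var/log/syslog in another terminal for progress
sudo /usr/local/sbin/android-backup 323048a5ae82918b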

This setup appears stable to me. I can plug in a phone and a backup will be taken. I can even plug in both my devices at the same time, and they will run at the same time. If I unplug a device, the script or rsync will error out and systemd cleans up.

If anyone has ideas on how to avoid the ugly temporary rsyncd.conf file or the ugly sleep 2, I’m interested. It would also be nice to not have to do the ‘adb root’ dance, and instead have the phone start the rsync daemon when connecting to my laptop somehow. TCP forwarding might be troublesome on a multi-user system, but my laptops aren’t. Killing rsync on the phone is probably a good idea too. If you have ideas on how to fix any of this, other feedback, or questions, please let me know!

Combining Dnsmasq and Unbound

For my home office network I have been using Dnsmasq for some time. Dnsmasq provides me with DNS, DHCP, DHCPv6, and IPv6 Router Advertisements. I run dnsmasq on a Debian Jessie server, but it works similarly on OpenWRT if you want to use a smaller device. My entire /etc/dnsmasq.d/local configuration used to look like this:

dhcp-authoritative
interface=eth1
read-ethers
dhcp-range=192.168.1.100,192.168.1.150,12h
dhcp-range=2001:9b0:104:42::100,2001:9b0:104:42::1500
dhcp-option=option6:dns-server,[::]
enable-ra

Here dhcp-authoritative tells dnsmasq that it is the only DHCP server on the network. interface=eth1 says to listen on eth1 only, which is my internal (IPv4 NAT) network. I try to keep track of the MAC addresses of all my devices in a /etc/ethers file, so I use read-ethers to have dnsmasq give stable IP addresses for them. The dhcp-range statements enable DHCP and DHCPv6 on my internal network. The dhcp-option=option6:dns-server,[::] statement is needed to inform the DHCP clients of the DNS resolver’s IPv6 address, otherwise they would only get the IPv4 DNS server address. The enable-ra parameter enables IPv6 router advertisements on the internal network, thereby removing the need to run radvd too — useful since I prefer to use copyleft software.

Recently I had a desire to use DNSSEC, and enabled it in Dnsmasq using the following statements:

dnssec
trust-anchor=.,19036,8,2,49AAC11D7B6F6446702E54A1607371607A1A41855200FD2CE1CDDE32F24E8FB5
dnssec-check-unsigned

The dnssec keyword enables DNSSEC validation in dnsmasq, using the indicated trust-anchor (get the root anchors from IANA). The dnssec-check-unsigned option deserves some more discussion. The dnsmasq manpage describes it as follows:

As a default, dnsmasq does not check that unsigned DNS replies are legitimate: they are assumed to be valid and passed on (without the “authentic data” bit set, of course). This does not protect against an attacker forging unsigned replies for signed DNS zones, but it is fast. If this flag is set, dnsmasq will check the zones of unsigned replies, to ensure that unsigned replies are allowed in those zones. The cost of this is more upstream queries and slower performance.

In other words, without this flag dnsmasq’s DNSSEC functionality is not secure against an active man-in-the-middle between dnsmasq and the DNS server it is using: even if example.org used DNSSEC properly, an attacker could present its replies to dnsmasq as unsigned, and I would get potentially incorrect values in return. We all know that the Internet is not a secure place, and your threat model should include active attackers. I believe this mode should be the default in dnsmasq, and users should have to configure dnsmasq to not be in that mode if they really want to (with the obvious security warning).

Running with this enabled for a couple of days resulted in frustration about not being able to reach a couple of domains. The behaviour was that my clients would hang indefinitely or get a SERVFAIL, both resulting in lack of service. You can enable query logging in dnsmasq with log-queries, and doing so I noticed three distinct error patterns:

jow13gw dnsmasq 460 - -  forwarded www.fritidsresor.se to 213.80.101.3
jow13gw dnsmasq 460 - -  validation result is BOGUS
jow13gw dnsmasq 547 - -  reply cloudflare-dnssec.net is BOGUS DNSKEY
jow13gw dnsmasq 547 - -  validation result is BOGUS
jow13gw dnsmasq 547 - -  reply linux.conf.au is BOGUS DS
jow13gw dnsmasq 547 - -  validation result is BOGUS

The first only happened intermittently, the second did not cause any noticeable problem, and the final one was reproducible. To be fair, I only found the last example after starting to search for problem reports (see post confirming bug).

At this point, I had a confirmed bug in dnsmasq that affects my use-case. I want to use official Debian packages on this machine, so installing newer versions manually is not an option. So I started to look into alternatives for DNS resolving, and quickly found Unbound. Installing it was easy:

apt-get install unbound
unbound-control-setup 

I created a local configuration file in /etc/unbound/unbound.conf.d/local.conf as follows:

server:
	interface: 127.0.0.1
	interface: ::1
	interface: 192.168.1.2
	interface: 2001:9b0:104:42::2
	access-control: 127.0.0.1 allow
	access-control: ::1 allow
	access-control: 192.168.1.2/24 allow
	access-control: 2001:9b0:104:42::2/64 allow
	outgoing-interface: 155.4.17.2
	outgoing-interface: 2001:9b0:1:1a04::2
#	log-queries: yes
#	verbosity: 2

The interface keyword determines which IP addresses to listen on; here I used the loopback interface and the local addresses of the physical network interface for my internal network. The access-control statements allow recursive DNS resolving from those networks, and outgoing-interface specifies my external Internet-connected interface. log-queries and/or verbosity are useful for debugging.
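Before restarting Unbound it is worth letting it check the configuration syntax itself; a quick sanity check looks like this:

unbound-checkconf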

To make things work, dnsmasq has to stop providing DNS services. This can be achieved with the port=0 keyword, however that also stops dnsmasq from informing DHCP clients about which DNS server to use, so this has to be added back manually. I ended up adding the following two lines to /etc/dnsmasq.d/local:

port=0
dhcp-option=option:dns-server,192.168.1.2

Restarting unbound and dnsmasq now leads to working (and secure) internal DNSSEC-aware name resolution over both IPv4 and IPv6. I can verify that resolution works, and that Unbound verifies signatures and rejects bad domains properly, with host as below, or use an online DNSSEC resolver test page, although I’m not sure how confident you can be in the result from that page.

$ host linux.conf.au
linux.conf.au has address 192.55.98.190
linux.conf.au mail is handled by 1 linux.org.au.
$ host sigfail.verteiltesysteme.net
;; connection timed out; no servers could be reached
$ 
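For a more direct check that Unbound is really validating, you can ask dig for the DNSSEC status and look for the "ad" (authenticated data) flag in the reply; since Unbound listens on 127.0.0.1 per the configuration above, query it directly:

dig +dnssec linux.conf.au @127.0.0.1 | grep flags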

I use Munin to monitor my services, and I was happy to find a nice Unbound Munin plugin. I installed the file in /usr/share/munin/plugins/ and created a Munin plugin configuration file /etc/munin/plugin-conf.d/unbound as follows:

[unbound*]
user root
env.statefile /var/lib/munin-node/plugin-state/root/unbound.state
env.unbound_conf /etc/unbound/unbound.conf
env.unbound_control /usr/sbin/unbound-control
env.spoof_warn 1000
env.spoof_crit 100000

I ran munin-node-configure --shell | sh to enable it. For this to work, Unbound has to be configured as well, so I created /etc/unbound/unbound.conf.d/munin.conf as follows.

server:
	extended-statistics: yes
	statistics-cumulative: no
	statistics-interval: 0
remote-control:
	control-enable: yes

The graphs may be viewed at my munin instance.
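For completeness: after dropping in these files, restarting both daemons (roughly as below on Debian Jessie) should be enough for the new graphs to start appearing:

service unbound restart
service munin-node restart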