Static network config with Debian Cloud images

I self-host some services on virtual machines (VMs), and I’m currently using Debian 11.x as the host machine, relying on the libvirt infrastructure to manage QEMU/KVM guests. While everything has worked fine for years (including on Debian 10.x), there has always been one issue causing a one-minute delay every time I install a new VM: the default images run a DHCP client that never succeeds in my environment. I never found a way to disable DHCP in the image, and none of the documented cloud-init approaches that I tried worked. A couple of days ago, after reading the AlmaLinux wiki, I found a solution that works with Debian.

The following commands create a Debian VM with a static network configuration, without the annoying one-minute DHCP delay. The three essential cloud-init pieces are the NoCloud meta-data parameter dsmode: local and the static network-interfaces setting, combined with the user-data bootcmd keyword. I’m using a Raptor CS Talos II ppc64el machine, so replace the image link with a genericcloud amd64 image if you are using x86.

wget https://cloud.debian.org/images/cloud/bullseye/latest/debian-11-generic-ppc64el.qcow2
cp debian-11-generic-ppc64el.qcow2 foo.qcow2

cat>meta-data
dsmode: local
network-interfaces: |
 iface enp0s1 inet static
 address 192.168.98.14/24
 gateway 192.168.98.12
^D

cat>user-data
#cloud-config
fqdn: foo.mydomain
manage_etc_hosts: true
disable_root: false
ssh_pwauth: false
ssh_authorized_keys:
- ssh-ed25519 AAAA...
timezone: Europe/Stockholm
bootcmd:
- rm -f /run/network/interfaces.d/enp0s1
- ifup enp0s1
^D

virt-install --name foo --import --os-variant debian10 --disk foo.qcow2 --cloud-init meta-data=meta-data,user-data=user-data

Unfortunately virt-install from Debian 11 does not support the --cloud-init network-config parameter, so if you want to use a version 2 network configuration with cloud-init (to specify IPv6 addresses, for example) you need to replace the final virt-install command with the following.

cat>network_config_static.cfg
version: 2
ethernets:
  enp0s1:
    dhcp4: false
    addresses: [ 192.168.98.14/24, fc00::14/7 ]
    gateway4: 192.168.98.12
    gateway6: fc00::12
    nameservers:
      addresses: [ 192.168.98.12, fc00::12 ]
^D

cloud-localds -v -m local --network-config=network_config_static.cfg seed.iso user-data

virt-install --name foo --import --os-variant debian10 --disk foo.qcow2 --disk seed.iso,readonly=on --noreboot

virsh start foo
virsh detach-disk foo vdb --config
virsh console foo

There are still some warnings like the following, but they do not seem to cause any problems:

[FAILED] Failed to start Initial cloud-init job (pre-networking).

Finally, if you do not want the cloud-init tools installed in your VMs, I found the following set of additional user-data commands helpful. Cloud-init will be disabled from the next boot onwards, and a cron job will be added that purges some unwanted packages.

runcmd:
- touch /etc/cloud/cloud-init.disabled
- apt-get update && apt-get dist-upgrade -uy && apt-get autoremove --yes --purge && printf '#!/bin/sh\n{ rm /etc/cloud/cloud-init.disabled /etc/cloud/cloud.cfg.d/01_debian_cloud.cfg && apt-get purge --yes cloud-init cloud-guest-utils cloud-initramfs-growroot genisoimage isc-dhcp-client && apt-get autoremove --yes --purge && rm -f /etc/cron.hourly/cloud-cleanup && shutdown --reboot +1; } 2>&1 | logger -t cloud-cleanup\n' > /etc/cron.hourly/cloud-cleanup && chmod +x /etc/cron.hourly/cloud-cleanup && reboot &

The production script I’m using is a bit more complicated, but can be downloaded as vello-vm. Happy hacking!

Towards pluggable GSS-API modules

GSS-API is a standardized framework that applications use primarily to support Kerberos V5 authentication. GSS-API is standardized by the IETF and supported by protocols like SSH, SMTP, IMAP and HTTP, and implemented by software projects such as OpenSSH, Exim, Dovecot and Apache httpd (via mod_auth_gssapi). The implementations of Kerberos V5 and GSS-API that are packaged for common GNU/Linux distributions, such as Debian, include MIT Kerberos, Heimdal and (less popular) GNU Shishi/GSS.

When an application or library is packaged for a GNU/Linux distribution, a choice is made about which GSS-API library to link with. I believe this leads to two problematic consequences: 1) it is difficult for end-users to choose between Kerberos implementations, and 2) dependency bloat for non-Kerberos users. Let’s discuss these separately.

  1. No system admin or end-user choice over the GSS-API/Kerberos implementation used

    There are differences between the bug/feature set of MIT Kerberos and that of Heimdal, and definitely that of GNU Shishi. This can lead to a situation where an application (say, Curl) is linked to MIT Kerberos, and someone discovers a Kerberos-related problem that would have worked had Heimdal been used, or vice versa. Sometimes it is possible to locally rebuild a package using another set of dependencies. However, doing so has a high maintenance cost to track security fixes in future releases. It is an unsatisfying solution for the distribution to flip-flop between which library to link to, depending on which users complain the most. To resolve this, a package could be built in two variants: one for MIT Kerberos and one for Heimdal. Both can be shipped. This can help solve the problem, but the question of which variant to install by default leads to similar concerns, and will also eventually lead to dependency conflicts. Consider an application linked to libraries (possibly in several steps) where one library only supports MIT Kerberos and one library only supports Heimdal.

    The fact remains that there will continue to be multiple Kerberos implementations. Distributions will continue to support them, and will be faced with the dilemma of which one to link to by default. Distributions and the people who package software will have little guidance on which implementation to choose from their upstream, since most upstreams support both implementations. The result is that system administrators and end-users are not given a simple way to retain flexibility over which implementation to use.
  2. Dependency bloat for non-Kerberos use-cases.

    Compared to the total number of GNU/Linux users out there, the number of Kerberos users on GNU/Linux systems is small. Here distributions face another dilemma. Should they enable GSS-API for all applications, to satisfy the Kerberos community, or should they be conservative with adding dependencies to reduce the attack surface for non-Kerberos users? This is a dilemma with no clear answer, and one approach has been to ship two versions of a package: one with Kerberos support and one without. Another option is for upstream to support loadable modules; for example, Dovecot implements this, and Debian ships a separate ‘dovecot-gssapi’ package that extends the core Dovecot seamlessly. Few projects except some larger ones appear to be willing to carry that maintenance cost upstream, so most only support build-time linking of the GSS-API library.

    There are a number of real-world situations to consider, but perhaps the easiest one to understand for most GNU/Linux users is OpenSSH. The SSH protocol supports Kerberos via GSS-API, OpenSSH implements this feature, and most GNU/Linux distributions ship an SSH client and SSH server linked to a GSS-API library. Someone made the choice of linking it to a GSS-API library, for the arguably smaller set of people interested in it, and also the choice of which library to link to. Rebuilding OpenSSH locally without Kerberos support comes with a high maintenance cost. Many people will not need or use the Kerberos features of the SSH client or SSH server, and having them enabled by default comes with a security cost. A vulnerability in OpenSSH is critical for many systems, and therefore its dependencies are a reasonable concern. Wouldn’t it be nice if OpenSSH was built in a way that didn’t force you to install MIT Kerberos or Heimdal? While still making it easy for Kerberos users to use it, of course.

Hopefully I have made the problem statement clear above, and have managed to convince you that the state of affairs needs improving. I learned of these problems from my personal experience with maintaining GNU SASL in Debian, and for many years I ignored them.

Let me introduce Libgssglue!

Matryoshka Dolls – photo CC-4.0-BY-NC by PngAll

Libgssglue is a library written by Kevin W. Coffman based on historical GSS-API code; the initial release was in 2004 (under the name libgssapi) and the last release was in 2012. Libgssglue provides a minimal GSS-API library and header file, so that any application can link to it instead of directly to MIT Kerberos or Heimdal (or GNU GSS). The administrator or end-user can select at run-time which GSS-API library to use, through a global /etc/gssapi_mech.conf file or even a local GSSAPI_MECH_CONF environment variable. Libgssglue is written in C, has no external dependencies, and is BSD-style licensed. It was developed for the CITI NFSv4 project but ended up not being used there.
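
As an illustration, here is a rough sketch of such a configuration. The file format is two columns: the shared library to load, and its initialization function. The library name and function below are what I would expect for MIT Kerberos on Debian, but treat them as assumptions and compare with the gssapi_mech.conf shipped by the libgssglue package.

# /etc/gssapi_mech.conf: pick the GSS-API library to load at run-time.
# <shared library>          <initialization function>
# (illustrative entry; verify against your distribution's shipped file)
libgssapi_krb5.so.2         mechglue_internal_krb5_init

# Per-process override, e.g. to test another implementation without
# touching the system-wide file:
# GSSAPI_MECH_CONF=$HOME/gssapi_mech_heimdal.conf some-gssapi-application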

I have added support for building GNU SASL with libgssglue — the only changes required were to ./configure.ac, since GSS-API is a standardized framework. I have written a fairly involved CI/CD check that builds GNU SASL with MIT Kerberos, Heimdal, libgssglue and GNU GSS, sets up a local Kerberos KDC, and verifies successful GSS-API and GS2-KRB5 authentications. The ‘gsasl’ command line tool connects to a local example SMTP server, also based on GNU SASL (linked to all variants of GSS-API libraries), and to a system-installed Dovecot IMAP server that uses the MIT Kerberos GSS-API library. This is on Debian, but I expect it to be easily adaptable to other GNU/Linux distributions. The check triggered some (expected) Shishi/GSS-related missing features, and one problem related to authorization identities that may be a bug in GNU SASL. However, testing shows that it is possible to link GNU SASL with libgssglue and have it be operational with any choice of GSS-API library that is shipped with Debian. See the GitLab CI/CD code and its CI/CD output.

This experiment worked so well that I contacted Kevin, only to learn that he didn’t have any future plans for the project. I have adopted libgssglue, put up a Libgssglue GitLab project page, and pushed out a libgssglue 0.5 release fixing only some minor build-related issues. There are still some newly introduced GSS-API interfaces that could be added, but I haven’t been able to find any critical issues with it. Amazing that an untouched 10-year-old project works so well!

My current next steps are:

  • Release GNU SASL with support for Libgssglue and encourage its use in documentation.
  • Make GNU SASL link to Libgssglue in Debian, to avoid a hard dependency on MIT Kerberos, but still allowing a default out-of-the-box Kerberos experience with GNU SASL.
  • Maintain libgssglue upstream and implement self-checks, CI/CD testing, new GSS-API interfaces that have been defined, and generally fix bugs and improve the project. Help appreciated!
  • Maintain the libgssglue package in Debian.
  • Look into if there are applications in Debian that link to a GSS-API library that could instead be linked to libgssglue to allow flexibility for the end-user and reduce dependency bloat.

What do you think? Happy Hacking!

What’s wrong with SCRAM?

Simple Authentication and Security Layer (SASL, RFC4422) is the framework that was abstracted from the IMAP and POP protocols. Among the most popular mechanisms are PLAIN (clear-text passwords, usually under TLS), CRAM-MD5 (RFC2195), and GSSAPI (for Kerberos V5). The DIGEST-MD5 mechanism was an attempt to improve upon CRAM-MD5, but it ended up introducing a lot of complexity without enough desirable features, and deployment was a mess — read RFC6331 for background on why it has been deprecated.

SCRAM!

The effort to develop SCRAM (RFC5802) came, as far as I can tell, from the experiences with DIGEST-MD5 and the desire to offer something better than CRAM-MD5. In protocol design discussions, SCRAM is often still considered “new” even though the specification was published in 2011, and even that had been in the making for several years. Developers who implement IMAP and SMTP still usually start out with supporting PLAIN and CRAM-MD5. The focus of this blog post is to delve into why this is, and to inspire the next step in this area. My opinions on this topic have existed for a couple of years already, formed while implementing SCRAM in GNU SASL, and my main triggers to write something about them now are 1) Martin Lambers’ two-post blog series that was first negative about SCRAM and then became positive, and 2) my desire to work on or support new efforts in this area.

Let’s take a step back and spend some time analyzing PLAIN and CRAM-MD5. What are the perceived advantages and disadvantages?

Advantages: PLAIN and CRAM-MD5 solve the use-case of password-based user authentication, and are easy to implement.
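
To illustrate just how easy, here is a sketch of both computations using nothing but a shell, base64 and OpenSSL; the username, password and challenge are made-up example values. PLAIN simply base64-encodes the authorization identity, username and password joined by NUL bytes, and CRAM-MD5 answers a server challenge with a keyed HMAC-MD5 digest.

$ # PLAIN initial response (empty authorization identity):
$ printf '\0alice\0secret' | base64
AGFsaWNlAHNlY3JldQ==
$ # CRAM-MD5 response digest, keyed with the plaintext password:
$ printf '%s' '<12345.67890@mail.example.org>' | openssl dgst -md5 -hmac 'secret'

Note that the server must be able to recompute the same HMAC to verify a CRAM-MD5 response, which is exactly why it needs the password in plaintext — the second disadvantage listed below.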

Main disadvantages with PLAIN and CRAM-MD5:

  • PLAIN transfers passwords in clear text to the server (sometimes this is considered an advantage, but from a security point of view, it isn’t).
  • CRAM-MD5 requires that the server stores the password in plaintext (impossible to use a hashed or encrypted format).
  • Non-ASCII support was not there from the start.

A number of (debatable) inconveniences with PLAIN and CRAM-MD5 exist:

  • CRAM-MD5 does not support the notion of authorization identities.
  • The authentication is not bound to a particular secure channel, opening up for tunneling attacks.
  • CRAM-MD5 is based on HMAC-MD5, which is cryptographically “old” (but has held up well) – the main problem today is that you usually do not want to implement MD5 at all, since there are diminishing other uses for it.
  • Servers can impersonate the client against other servers since they know the password.
  • Neither offer to authenticate the server to the client.

If you are familiar with SCRAM, you know that it solves these issues. So why hasn’t everyone jumped on it and CRAM-MD5 is now a thing of the past? In the first few years, my answer was that things take time and we’ll see improvements. Today we are ten years later; there are many SCRAM implementations out there, and the Internet has generally migrated away from protocols that have much larger legacy issues (e.g., SSL), but we are still doing CRAM-MD5. I think it is time to become critical of the effort and try to learn from the past. Here is my attempt at summarizing the concerns I’ve seen come up:

  • The mechanism family concept adds complexity, in several ways:
    • The specification is harder to understand.
    • New instances of the mechanism family (SCRAM-SHA-256) introduce even more complexity since they tweak some of the poor choices made in the base specification.
    • Introducing new hashes to the family (like the suggested SHA3 variants) adds deployment costs, since databases need new type:value pairs to hold more than one “SCRAM” hashed password — see the sketch after this list.
    • How to negotiate which variant to use is not well-defined. Consider if the server only has access to a SCRAM-SHA-1 hashed password for user X and a SCRAM-SHA-256 hashed password for user Y. What mechanisms should it offer to an unknown client? Offering both is likely to cause authentication failures, and the fall-back behaviour of SASL is poor.
  • The optional support for channel bindings and the way they are negotiated adds complexity.
  • The original default ‘tls-unique’ channel binding turned out to be insecure, and it cannot be supported in TLS 1.3.
  • Support for channel bindings requires interaction between TLS and SASL layers in an application.
  • The feature that servers cannot impersonate a client is dubious: the server only needs to participate in one authentication exchange with the client to gain this ability.
  • SCRAM does not offer any of the cryptographic properties of a Password-authenticated key agreement.
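
To make the deployment cost of new family members concrete, here is a hypothetical credential store; the columns follow RFC5802 terminology (iteration count, salt, StoredKey, ServerKey), but the syntax and placeholders are invented for this sketch and do not match any particular server:

alice:SCRAM-SHA-1:4096:<base64 salt>:<base64 StoredKey>:<base64 ServerKey>
alice:SCRAM-SHA-256:4096:<base64 salt>:<base64 StoredKey>:<base64 ServerKey>
# ...and yet another full row per user for every future SCRAM-SHA3-* variant.

A server that only has the first row for user X and only the second row for user Y faces exactly the negotiation problem described above.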

What other concerns are there? I’m likely forgetting some. Some of these are debatable and were intentional design choices.

Can we save SCRAM? I’m happy to see the effort to introduce a new channel binding and update the SCRAM specifications to use it for TLS 1.3+. I brought up a similar approach back in the days when some people were still insisting on ‘tls-unique’. A new channel binding solves some of the issues above.

It is hard to tell what the main reason for not implementing SCRAM more often is. A sense of urgency appears to be lacking. My gut feeling is that to an implementer SCRAM looks awfully similar to DIGEST-MD5. Most of the problems with DIGEST-MD5 could be fixed, but the fixes add more complexity.

How to proceed from here? I see a couple of options:

  • Let time go by to see increased adoption. Improving the channel binding situation will help.
  • Learn from the mistakes and introduce a new simple SCRAM, which could have the following properties:
    • No mechanism family, just one mechanism instance.
    • Hash is hard-coded, just like CRAM-MD5.
    • TLS and a channel binding is required and always used.
  • Review one of the PAKE alternatives and specify a SASL mechanism for it. Preferably without repeating the mistakes of CRAM-MD5, DIGEST-MD5 and SCRAM.
  • Give up on having “complex” authentication mechanisms inside SASL, and instead help some PAKE variant become implemented through a TLS library; SASL applications would then just use EXTERNAL to rely on TLS user authentication.

Thoughts?

I feel the following XKCD is appropriate here.

OpenPGP smartcard with GNOME on Debian 11 Bullseye

The Debian operating system is what I have been using on my main computer for probably around 20 years. I am now in the process of installing the hopefully soon-released Debian 11 “bullseye” on my Lenovo X201 laptop. Getting an OpenPGP smartcard to work has almost always required some additional effort, but it has been reliable enough to use exclusively for my daily GnuPG and SSH operations since 2006. In the early days, the issues with smartcards were not related to GNOME; see my smartcard notes for Debian 4 Etch for example. I believe with Debian 5 Lenny, Debian 6 Squeeze, and Debian 7 Wheezy things just worked without workarounds, even with GNOME. Those were the golden days! Back in 2015, with Debian 8 Jessie I noticed a regression and came up with a workaround. The problems in GNOME were not fixed, and I wrote about how to work around this for Debian 9 Stretch and the slightly different workaround needed for Debian 10 Buster. What will Bullseye be like?

The first impression of working with GnuPG and a smartcard is still the same. After inserting the GNUK that holds my private keys into my laptop, nothing happens by default, and attempting to access the smartcard results in the following.

jas@latte:~$ gpg --card-status
gpg: error getting version from 'scdaemon': No SmartCard daemon
gpg: OpenPGP card not available: No SmartCard daemon
jas@latte:~$ 

The solution is to install the scdaemon package. My opinion is that either something should offer to install it when the device is inserted (wasn’t there a framework for discovering hardware and installing the right packages?) or this package should always be installed for a desktop system. Anyway, the following solves the problem.

jas@latte:~$ sudo apt install scdaemon
...
jas@latte:~$ gpg --card-status
 Reader ...........: 234B:0000:FSIJ-1.2.14-67252015:0
 Application ID ...: D276000124010200FFFE672520150000
...
 URL of public key : https://josefsson.org/key-20190320.txt
...

Before the private key in the smartcard can be used, the public key must be imported into GnuPG. I now believe the best way to do this (see earlier posts for alternatives) is to configure the smartcard with a public key URL and retrieve it as follows.

jas@latte:~$ gpg --card-edit
 Reader ...........: 234B:0000:FSIJ-1.2.14-67252015:0
...
 gpg/card> fetch
 gpg: requesting key from 'https://josefsson.org/key-20190320.txt'
 gpg: key D73CF638C53C06BE: public key "Simon Josefsson <simon@josefsson.org>" imported
 gpg: Total number processed: 1
 gpg:               imported: 1
 gpg/card> quit
jas@latte:~$ gpg -K
 /home/jas/.gnupg/pubring.kbx
 sec#  ed25519 2019-03-20 [SC] [expires: 2021-08-21]
       B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE
 uid           [ unknown] Simon Josefsson <simon@josefsson.org>
 ssb>  ed25519 2019-03-20 [A] [expires: 2021-08-21]
 ssb>  ed25519 2019-03-20 [S] [expires: 2021-08-21]
 ssb>  cv25519 2019-03-20 [E] [expires: 2021-08-21]
jas@latte:~$ 

The next step is to mark your own key as ultimately trusted; use the fingerprint shown above together with gpg --import-ownertrust. Warning! This is not the general way to remove the warning about untrusted keys; this method should only be used for your own keys.

jas@latte:~$ echo "B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE:6:" | gpg --import-ownertrust
gpg: inserting ownertrust of 6
jas@latte:~$ gpg -K
gpg: checking the trustdb
gpg: marginals needed: 3  completes needed: 1  trust model: pgp
gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
gpg: next trustdb check due at 2021-08-21
 /home/jas/.gnupg/pubring.kbx
sec#  ed25519 2019-03-20 [SC] [expires: 2021-08-21]
       B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE
uid           [ultimate] Simon Josefsson <simon@josefsson.org>
ssb>  cv25519 2019-03-20 [E] [expires: 2021-08-21]
ssb>  ed25519 2019-03-20 [A] [expires: 2021-08-21]
ssb>  ed25519 2019-03-20 [S] [expires: 2021-08-21]
jas@latte:~$ 

Now GnuPG is able to sign, encrypt, and decrypt data:

jas@latte:~$ echo foo|gpg -a --sign|gpg --verify
 gpg: Signature made Sat May  1 16:02:49 2021 CEST
 gpg:                using EDDSA key A3CC9C870B9D310ABAD4CF2F51722B08FE4745A2
 gpg: Good signature from "Simon Josefsson <simon@josefsson.org>" [ultimate]
jas@latte:~$ echo foo|gpg -a --encrypt -r simon@josefsson.org|gpg --decrypt
 gpg: encrypted with 256-bit ECDH key, ID 02923D7EE76EBD60, created 2019-03-20
       "Simon Josefsson <simon@josefsson.org>"
 foo
jas@latte:~$ 

To make SSH work with the smartcard, the following GNOME-related workaround is still required. The problem is that the GNOME keyring enables its own incomplete SSH-agent implementation, which lacks the smartcard support that the GnuPG agent can provide; the environment will even point SSH_AUTH_SOCK at the GnuPG agent if the enable-ssh-support parameter is provided.

jas@latte:~$ ssh-add -L
 The agent has no identities.
jas@latte:~$ echo $SSH_AUTH_SOCK 
 /run/user/1000/keyring/ssh
jas@latte:~$ mkdir -p ~/.config/autostart
jas@latte:~$ cp /etc/xdg/autostart/gnome-keyring-ssh.desktop ~/.config/autostart/
jas@latte:~$ echo 'Hidden=true' >> .config/autostart/gnome-keyring-ssh.desktop 
jas@latte:~$ echo enable-ssh-support >> ~/.gnupg/gpg-agent.conf
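
As an aside, gpg-agent can be asked which SSH socket it will use, which is handy to compare against $SSH_AUTH_SOCK once the workaround takes effect; the path below is what I would expect given the standard per-user runtime directory.

jas@latte:~$ gpgconf --list-dirs agent-ssh-socket
 /run/user/1000/gnupg/S.gpg-agent.ssh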

For some reason, it does not seem sufficient to log out of GNOME and then log in again. Most likely some daemon is still running that has to be restarted. At this point, I reboot my laptop and then log into GNOME again. Finally it looks correct:

jas@latte:~$ echo $SSH_AUTH_SOCK 
 /run/user/1000/gnupg/S.gpg-agent.ssh
jas@latte:~$ ssh-add -L
 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILzCFcHHrKzVSPDDarZPYqn89H5TPaxwcORgRg+4DagE cardno:FFFE67252015
jas@latte:~$ 

Please discuss in small groups the following topics:

  • How should the scdaemon package be installed more automatically?
  • Should there be a simple command to retrieve the public key for a smartcard and set it as ultimately trusted? The two-step --card-edit and --import-ownertrust procedure is a bad user interface and is not intuitive, in my opinion.
  • Why is GNOME keyring used for SSH keys instead of ssh-agent/gpg-agent?
  • Should gpg-agent have enable-ssh-support on by default?

After these years, I would probably feel a bit of sadness if the problems were fixed, since then I wouldn’t be able to rant about them and celebrate installing Debian 12 Bookworm the same way I have done for the past several releases.

Thanks for reading and happy hacking!

Passive Icinga Checks: icinga-pusher

I use Icinga to monitor the availability of my Debian/OpenWRT/etc. machines. I have relied on server-side checks on the Icinga system that monitor the externally visible operation of the services that I care about. In theory, monitoring externally visible properties should be good enough. Recently, however, I had one strange incident that was due to a disk filling up on one system. This prompted me to revisit my thinking and start monitoring internal factors as well, which would allow me to detect problems before they happen, such as an out-of-disk-space condition.

Another reason I only had server-side checks was that I didn’t like the complexity of the Icinga agent, nor did I want to open up for incoming SSH connections from the Icinga server on my other servers. Complexity and machine-based authorization tend to lead to security problems, so I prefer to avoid them. The manual mentions agents that use the REST API, which was the start of my journey into something better.

What I would prefer is for the hosts to push their self-test results to the central Icinga server. Fortunately, there is an Icinga REST API in modern versions of Icinga (including version 2.10 that I use). The process-check-result API can be used to submit passive check results. Getting this up and running required a bit more research and creativity than I would have hoped for, so I thought it was material enough for a blog post. My main requirement was to keep complexity down, hence I ended up with a simple shell script that is run from cron. None of the existing API clients mentioned in the manual appealed to me.
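
At its core, submitting a passive result is a single authenticated POST against that API. The following minimal sketch (with placeholder host, service and credentials matching the setup described below) is essentially what the final script automates:

curl -k -s -u pusher:blahonga \
     -H 'Accept: application/json' -X POST \
     'https://icinga.yoursite.com:5665/v1/actions/process-check-result' \
     -d '{ "type": "Service", "filter": "host.name==\"foo\" && service.name==\"passive-apt\"", "exit_status": 0, "plugin_output": "APT OK" }'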

Prepare the Icinga server with some configuration changes to support the push clients (replace blahonga with a fresh long random password).

icinga# cat > /etc/icinga2/conf.d/api-users.conf
object ApiUser "pusher" {
  password = "blahonga"
  permissions = [ "actions/process-check-result" ]
}
^D
icinga# icinga2 feature enable api && systemctl reload icinga2

Then add some Service definitions and assign them to some hosts, in /etc/icinga2/conf.d/services.conf:

apply Service "passive-disk" {
  import "generic-service"
  check_command = "passive"
  check_interval = 2h
  assign where host.vars.os == "Debian"
}
apply Service "passive-apt" {
  import "generic-service"
  check_command = "passive"
  check_interval = 2h
  assign where host.vars.os == "Debian"
}

I’m using a relaxed check interval of 2 hours because I will submit results from a cron job that is run every hour. The next step is to set up the machines to submit the results. Create a /etc/cron.d/icinga-pusher with the content below. Note that % characters need to be escaped in crontab files. I’m running this as the munin user, which is a non-privileged account that exists on all of my machines, but you may want to modify this. The check_disk command comes from the monitoring-plugins-basic Debian package, which includes other useful plugins like check_apt that I recommend.

30 * * * * munin /usr/local/bin/icinga-pusher `hostname -f` passive-apt /usr/lib/nagios/plugins/check_apt

40 * * * * munin /usr/local/bin/icinga-pusher `hostname -f` passive-disk "/usr/lib/nagios/plugins/check_disk -w 20\% -c 5\% -X tmpfs -X devtmpfs"

My icinga-pusher script requires a configuration file with some information about the Icinga setup. Put the following content in /etc/default/icinga-pusher (again replacing blahonga with your password):

ICINGA_PUSHER_CREDS="-u pusher:blahonga"
ICINGA_PUSHER_URL="https://icinga.yoursite.com:5665"
ICINGA_PUSHER_CA="-k"

The parameters above are used by the icinga-pusher script. ICINGA_PUSHER_CREDS contains the API user credentials, either a simple "-u user:password" combination or something like "--cert /etc/ssl/yourclient.crt --key /etc/ssl/yourclient.key". ICINGA_PUSHER_URL is the base URL of your Icinga setup, including the API port which is usually 5665. ICINGA_PUSHER_CA is "--cacert /etc/ssl/icingaca.crt", or "-k" to skip CA verification (not recommended!).

Below is the script icinga-pusher itself. Some error handling has been removed for brevity — I have put the script in a separate “icinga-pusher” git repository which will be where I make any updates to this project in the future.

#!/bin/sh

# Copyright (C) 2019 Simon Josefsson.
# Released under the GPLv3+ license.

. /etc/default/icinga-pusher

HOST="$1"
SERVICE="$2"
CMD="$3"

# Run the check command and capture its exit code
# (0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN in the Nagios plugin convention).
OUT=$($CMD)
RC=$?

# Nagios-style plugins print "human-readable output | performance data",
# so split the output on the '|' separator.
oIFS="$IFS"
IFS='|'
set -- $OUT
IFS="$oIFS"

OUTPUT="$1"
PERFORMANCE="$2"

# Construct the process-check-result JSON payload for this host and service.
data='{ "type": "Service", "filter": "host.name==\"'$HOST'\" && service.name==\"'$SERVICE'\"", "exit_status": '$RC', "plugin_output": "'$OUTPUT'", "performance_data": "'$PERFORMANCE'" }'

curl $ICINGA_PUSHER_CA $ICINGA_PUSHER_CREDS \
     -s -H 'Accept: application/json' -X POST \
     "$ICINGA_PUSHER_URL/v1/actions/process-check-result" \
     -d "$data"

exit 0

What do you think? Is there a simpler way of achieving what I want? Thanks for reading.