Streamlined NTRU Prime sntrup761 goes to IETF

The OpenSSH project added support for a hybrid Streamlined NTRU Prime post-quantum key encapsulation method, sntrup761, to strengthen its X25519-based default in version 8.5, released on 2021-03-03. While there has been a lot of talk about post-quantum cryptography generally, my impression is that implementation and deployment have slowed down over the past two years. Why is that? Regardless of the answer, we can try to collaboratively change things, and one effort that appears strangely missing is IETF documents for these algorithms.

Building on some earlier work that added X25519/X448 to SSH, writing a similar document was relatively straightforward once I had spent a day reading OpenSSH and TinySSH source code to understand how it worked. While I am not perfectly happy with how the final key is derived from the sntrup761/X25519 secrets (it is a single SHA-512 call on the concatenated secrets), I think the construct deserves to be better documented, to pave the way for increased confidence or better designs. Also, reusing the RFC5656§4 structs makes for a worse specification (one unnecessary normative reference), but probably a simpler implementation. I have published draft-josefsson-ntruprime-ssh-00 here. Credit here goes to Jan Mojžíš of TinySSH, who designed the earlier sntrup4591761x25519-sha512@tinyssh.org in 2018, Markus Friedl, who added it to OpenSSH in 2019, and Damien Miller, who changed it to sntrup761 in 2020. Does anyone have more to add to the history of this work?
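
To illustrate the construct, here is a minimal Python sketch of that key derivation (the two shared-secret values would come from the sntrup761 decapsulation and the X25519 computation; the concatenation order follows my reading of the OpenSSH code, so consult the draft for the normative details):

    import hashlib

    def derive_hybrid_key(sntrup761_secret: bytes, x25519_secret: bytes) -> bytes:
        # The hybrid shared secret is simply SHA-512 over the two
        # concatenated KEM/key-agreement outputs, sntrup761 first.
        return hashlib.sha512(sntrup761_secret + x25519_secret).digest()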

Once I had sharpened my xml2rfc skills, preparing a document describing the hybrid construct between the sntrup761 key-encapsulation mechanism and the X25519 key agreement method in a non-SSH fashion was easy. I do not know if this work is useful, but it may serve as a reference for further study. I published draft-josefsson-ntruprime-hybrid-00 here.

Finally, how about an IETF document on the base Streamlined NTRU Prime? Explaining all the details, and especially the math behind it, would be a significant effort. I started doing that, but realized it is a subjective call when to stop explaining things. If we can’t assume that the reader knows about lattice math, is a document like this the best place to teach it? I settled for the most minimal approach instead: merely giving an introduction to the algorithm, and including SageMath and C reference implementations together with test vectors. The IETF audience rarely understands math, so I think it is better to focus on the bits on the wire and the algorithm interfaces. Everything here was created by the Streamlined NTRU Prime team; I merely modified it a bit, hoping I didn’t break too much. I have now published draft-josefsson-ntruprime-streamlined-00 here.

I maintain the IETF documents on my ietf-ntruprime GitLab page, feel free to open merge requests or raise issues to help improve them.

To gain confidence that the code was working properly, I ended up preparing a branch with sntrup761 for the GNU project Nettle and have submitted it upstream for review. I had the misfortune of having to understand and implement NIST’s CTR_DRBG to compute the sntrup761 known-answer tests, and what a mess it is. Why does a deterministic random generator support re-seeding? Why does it support non-full-entropy derivation? What’s with the key size vs block size confusion? What’s with the optional parameters? What’s with having multiple algorithm descriptions? Luckily I was able to extract a minimal but working implementation that is easy to read. I can’t locate CTR_DRBG test vectors – anyone? Does anyone have sntrup761 test vectors that don’t use CTR_DRBG? One final reflection on publishing known-answer tests for an algorithm that uses random data: are the test vectors stable over different ways to implement the algorithm? Just consider if some optimization moved one randomness-extraction call before another – wouldn’t the output be different? Are there other ways to verify correctness of implementations?
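
For the curious, here is a minimal Python sketch of the AES-256 CTR_DRBG variant used by the NIST known-answer-test harness, as I understand its reference rng.c: no derivation function, no reseeding, no additional input. It assumes the pycryptodome package for AES:

    from Crypto.Cipher import AES  # pycryptodome

    class CtrDrbg:
        """Minimal AES-256 CTR_DRBG: instantiate from a 48-byte seed,
        generate output by encrypting an incrementing counter."""

        def __init__(self, seed: bytes):
            assert len(seed) == 48  # entropy input, as in the KAT files
            self.key, self.v = bytes(32), bytes(16)
            self._update(seed)

        def _inc(self):
            self.v = ((int.from_bytes(self.v, "big") + 1) % 2**128).to_bytes(16, "big")

        def _update(self, data: bytes):
            cipher = AES.new(self.key, AES.MODE_ECB)
            temp = b""
            for _ in range(3):  # 3 AES blocks = 48 bytes of fresh state
                self._inc()
                temp += cipher.encrypt(self.v)
            temp = bytes(a ^ b for a, b in zip(temp, data))
            self.key, self.v = temp[:32], temp[32:]

        def generate(self, n: int) -> bytes:
            cipher = AES.new(self.key, AES.MODE_ECB)
            out = b""
            while len(out) < n:
                self._inc()
                out += cipher.encrypt(self.v)
            self._update(bytes(48))  # post-generate update, zero additional input
            return out[:n]

Instantiating it with the 48-byte seed from a KAT file and reading out randomness should reproduce the test vectors, if I have understood the harness correctly.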

As always, happy hacking!

Towards pluggable GSS-API modules

GSS-API is a standardized framework that is used by applications primarily to support Kerberos V5 authentication. GSS-API is standardized by the IETF and supported by protocols like SSH, SMTP, IMAP and HTTP, and implemented by software projects such as OpenSSH, Exim, Dovecot and Apache httpd (via mod_auth_gssapi). The implementations of Kerberos V5 and GSS-API that are packaged for common GNU/Linux distributions, such as Debian, include MIT Kerberos, Heimdal and (less popular) GNU Shishi/GSS.

When an application or library is packaged for a GNU/Linux distribution, a choice is made about which GSS-API library to link with. I believe this leads to two problematic consequences: 1) it is difficult for end-users to choose between Kerberos implementations, and 2) dependency bloat for non-Kerberos users. Let’s discuss these separately.

  1. No system admin or end-user choice over the GSS-API/Kerberos implementation used

    There are differences in the bug/feature set of MIT Kerberos, of Heimdal, and definitely of GNU Shishi. This can lead to a situation where an application (say, Curl) is linked to MIT Kerberos, and someone discovers a Kerberos-related problem that would not have occurred if Heimdal had been used, or vice versa. Sometimes it is possible to locally rebuild a package using another set of dependencies. However, doing so has a high maintenance cost to track security fixes in future releases. It is an unsatisfying solution for the distribution to flip-flop between which library to link to, depending on which users complain the most. To resolve this, a package could be built in two variants: one for MIT Kerberos and one for Heimdal. Both can be shipped. This can help solve the problem, but the question of which variant to install by default leads to similar concerns, and will also eventually lead to dependency conflicts. Consider an application linked to libraries (possibly in several steps) where one library only supports MIT Kerberos and one library only supports Heimdal.

    The fact remains that there will continue to be multiple Kerberos implementations. Distributions will continue to support them, and will be faced with the dilemma of which one to link to by default. Distributions and the people who package software will have little guidance on which implementation to choose from their upstreams, since most upstreams support both implementations. The result is that system administrators and end-users are not given a simple way to retain flexibility about which implementation to use.
  2. Dependency bloat for non-Kerberos use-cases.

    Compared to the number of users of GNU/Linux systems out there, the number of Kerberos users on GNU/Linux systems is small. Here distributions face another dilemma. Should they enable GSS-API for all applications, to satisfy the Kerberos community, or should they be conservative with adding dependencies, to reduce the attack surface for non-Kerberos users? This is a dilemma with no clear answer, and one approach has been to ship two versions of a package: one with Kerberos support and one without. Another option here is for upstream to support loadable modules; for example, Dovecot implements this, and Debian ships a separate ‘dovecot-gssapi’ package that extends the core Dovecot seamlessly. Few except some larger projects appear to be willing to carry that maintenance cost upstream, so most only support build-time linking of the GSS-API library.

    There are a number of real-world situations to consider, but perhaps the easiest one to understand for most GNU/Linux users is OpenSSH. The SSH protocol supports Kerberos via GSS-API, OpenSSH implements this feature, and most GNU/Linux distributions ship an SSH client and SSH server linked to a GSS-API library. Someone made the choice of linking it to a GSS-API library, for the arguably smaller set of people interested in it, and also the choice of which library to link to. Rebuilding OpenSSH locally without Kerberos support comes with a high maintenance cost. Many people will not need or use the Kerberos features of the SSH client or SSH server, and having them enabled by default comes with a security cost. A vulnerability in OpenSSH is critical for many systems, and therefore its dependencies are a reasonable concern. Wouldn’t it be nice if OpenSSH was built in a way that didn’t force you to install MIT Kerberos or Heimdal? While still making it easy for Kerberos users to use it, of course.

Hopefully I have made the problem statement clear above, and managed to convince you that the state of affairs is in need of improvement. I learned of these problems from my personal experience maintaining GNU SASL in Debian, and for many years I ignored them.

Let me introduce Libgssglue!

Matryoshka Dolls – photo CC-4.0-BY-NC by PngAll

Libgssglue is a library written by Kevin W. Coffman based on historical GSS-API code; the initial release was in 2004 (using the name libgssapi) and the last release was in 2012. Libgssglue provides a minimal GSS-API library and header file, so that any application can link to it instead of directly to MIT Kerberos or Heimdal (or GNU GSS). The administrator or end-user can select at run-time which GSS-API library to use, through a global /etc/gssapi_mech.conf file or even a local GSSAPI_MECH_CONF environment variable. Libgssglue is written in C, has no external dependencies, and is BSD-style licensed. It was developed for the CITI NFSv4 project, but ended up not being used there.
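
As an illustration, /etc/gssapi_mech.conf maps a shared library to the function libgssglue calls to initialize it. The entry below is only an example; the exact library path and symbol name depend on your system and GSS-API implementation, so treat it as a sketch rather than a recipe:

    # /etc/gssapi_mech.conf
    # Format: <shared library> <initialization function>
    # Illustrative MIT Kerberos entry on a Debian amd64 system:
    /usr/lib/x86_64-linux-gnu/libgssapi_krb5.so.2   mechglue_internal_krb5_init

Pointing the GSSAPI_MECH_CONF environment variable at a different file lets an unprivileged user override the system-wide choice without rebuilding anything.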

I have added support to build GNU SASL with libgssglue – the changes required were only configure.ac-related, since GSS-API is a standardized framework. I have written a fairly involved CI/CD check that builds GNU SASL with MIT Kerberos, Heimdal, libgssglue and GNU GSS, sets up a local Kerberos KDC and verifies successful GSS-API and GS2-KRB5 authentications. The ‘gsasl’ command line tool connects to a local example SMTP server, also based on GNU SASL (linked to all variants of GSS-API libraries), and to a system-installed Dovecot IMAP server that uses the MIT Kerberos GSS-API library. This is on Debian, but I expect it to be easily adaptable to other GNU/Linux distributions. The check triggered some (expected) Shishi/GSS-related missing features, and triggered one problem related to authorization identities that may be a bug in GNU SASL. However, testing shows that it is possible to link GNU SASL with libgssglue and have it be operational with any choice of GSS-API library that is shipped with Debian. See the GitLab CI/CD code and its CI/CD output.

This experiment worked so well that I contacted Kevin, only to learn that he didn’t have any future plans for the project. I have adopted libgssglue, put up a Libgssglue GitLab project page, and pushed out a libgssglue 0.5 release fixing only some minor build-related issues. Some newly introduced GSS-API interfaces are still missing and could be added, but I haven’t been able to find any critical issues with it. Amazing that an untouched ten-year-old project works so well!

My current next steps are:

  • Release GNU SASL with support for Libgssglue and encourage its use in documentation.
  • Make GNU SASL link to Libgssglue in Debian, to avoid a hard dependency on MIT Kerberos, but still allowing a default out-of-the-box Kerberos experience with GNU SASL.
  • Maintain libgssglue upstream and implement self-checks, CI/CD testing, new GSS-API interfaces that have been defined, and generally fix bugs and improve the project. Help appreciated!
  • Maintain the libgssglue package in Debian.
  • Look into whether there are applications in Debian that link to a GSS-API library that could instead be linked to libgssglue, to allow flexibility for the end-user and reduce dependency bloat.

What do you think? Happy Hacking!

What’s wrong with SCRAM?

Simple Authentication and Security Layer (SASL, RFC4422) is the framework that was abstracted from the IMAP and POP protocols. Among the most popular mechanisms are PLAIN (clear-text passwords, usually under TLS), CRAM-MD5 (RFC2195), and GSSAPI (for Kerberos V5). The DIGEST-MD5 mechanism was an attempt to improve upon CRAM-MD5, but it ended up introducing a lot of complexity without enough desirable features, and deployment was a mess – read RFC6331 for background on why it has been deprecated.

SCRAM!

The effort to develop SCRAM (RFC5802) came, as far as I can tell, from the experiences with DIGEST-MD5 and the desire to offer something better than CRAM-MD5. In protocol design discussions, SCRAM is often still considered “new”, even though the specification was published in 2011 and even that had been in the making for several years. Developers that implement IMAP and SMTP still usually start out by supporting PLAIN and CRAM-MD5. The focus of this blog post is to delve into why this is and to inspire the next step in this area. My opinions on this topic have existed for a couple of years already, formed while implementing SCRAM in GNU SASL, and my main triggers to write something about them now are 1) Martin Lambers’ two-post blog series that was first negative about SCRAM and then became positive, and 2) my desire to work on or support new efforts in this area.

Let’s take a step back and spend some time analyzing PLAIN and CRAM-MD5. What are the perceived advantages and disadvantages?

Advantages: PLAIN and CRAM-MD5 solve the use-case of password-based user authentication, and are easy to implement.

Main disadvantages with PLAIN and CRAM-MD5:

  • PLAIN transfers passwords in clear text to the server (sometimes this is considered an advantage, but from a security point of view, it isn’t).
  • CRAM-MD5 requires that the server store the password in plaintext, making it impossible to use a hashed or encrypted format (see the sketch after this list).
  • Non-ASCII support was not there from the start.
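
To see why, here is roughly what a CRAM-MD5 exchange boils down to in Python (a sketch only; the function name is mine): the server can only verify the response by recomputing the same HMAC, which requires the plaintext password:

    import hmac

    def cram_md5_response(username: str, password: str, challenge: bytes) -> str:
        # The client proves knowledge of the password by returning
        # HMAC-MD5(password, challenge) as a hex digest.
        digest = hmac.new(password.encode(), challenge, "md5").hexdigest()
        return f"{username} {digest}"

    # To verify, the server recomputes the digest from the stored password,
    # so a salted one-way hash on the server side will not work.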

A number of (debatable) inconveniences with PLAIN and CRAM-MD5 exist:

  • CRAM-MD5 does not support the notion of authorization identities.
  • The authentication is not bound to a particular secure channel, opening the door to tunneling attacks.
  • CRAM-MD5 is based on HMAC-MD5, which is cryptographically “old” (but has held up well) – the main problem today is that MD5 is usually not something you want to implement, since there are diminishing other uses for it.
  • Servers can impersonate the client against other servers since they know the password.
  • Neither offers authentication of the server to the client.

If you are familiar with SCRAM, you know that it solves these issues. So why hasn’t everyone jumped on it, making CRAM-MD5 a thing of the past? In the first few years, my answer was that things take time and we’ll see improvements. Today we are ten years later; there are many SCRAM implementations out there, and the Internet has generally migrated away from protocols that have much larger legacy issues (e.g., SSL), but we are still doing CRAM-MD5. I think it is time to become critical of the effort and try to learn from the past. Here is my attempt at summarizing the concerns I’ve seen come up:

  • The mechanism family concept adds complexity, in several ways:
    • The specification is harder to understand.
    • New instances of the mechanism family (SCRAM-SHA-256) introduce even more complexity since they tweak some of the poor choices made in the base specification.
    • Introducing new hashes to the family (like the suggested SHA3 variants) adds deployment costs, since databases need new type:value pairs to hold more than one “SCRAM” hashed password.
    • How to negotiate which variant to use is not well-defined. Consider if the server only has access to a SCRAM-SHA-1 hashed password for user X and a SCRAM-SHA-256 hashed password for user Y. What mechanisms should it offer to an unknown client? Offering both is likely to cause authentication failures, and the fall-back behaviour of SASL is poor.
  • The optional support for channel bindings and the way they are negotiated adds complexity.
  • The original default ‘tls-unique’ channel binding turned out to be insecure, and it cannot be supported in TLS 1.3.
  • Support for channel bindings requires interaction between TLS and SASL layers in an application.
  • The feature that servers cannot impersonate a client is dubious: the server only needs to participate in one authentication exchange with the client to gain this ability (see the sketch after this list).
  • SCRAM does not offer any of the cryptographic properties of a password-authenticated key agreement (PAKE).
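
To make the impersonation point concrete, here is a sketch in Python of the SCRAM client-side computation, following RFC5802 (the SHA-1 instance; variable names as in the RFC). The server stores only StoredKey and ServerKey, but since ClientProof = ClientKey XOR ClientSignature, and the server can compute ClientSignature itself, it recovers ClientKey the first time a client authenticates:

    import hashlib, hmac

    def scram_client_proof(password: bytes, salt: bytes, i: int,
                           auth_message: bytes) -> bytes:
        salted = hashlib.pbkdf2_hmac("sha1", password, salt, i)          # SaltedPassword
        client_key = hmac.new(salted, b"Client Key", "sha1").digest()    # ClientKey
        stored_key = hashlib.sha1(client_key).digest()                   # StoredKey (server-side)
        signature = hmac.new(stored_key, auth_message, "sha1").digest()  # ClientSignature
        # The server verifies the proof by XOR-ing it with ClientSignature to
        # recover ClientKey and checking H(ClientKey) == StoredKey; at that
        # point it holds ClientKey and could impersonate the client elsewhere.
        return bytes(a ^ b for a, b in zip(client_key, signature))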

What other concerns are there? I’m likely forgetting some. Some of these are debatable and were intentional design choices.

Can we save SCRAM? I’m happy to see the effort to introduce a new channel binding and update the SCRAM specifications to use it for TLS 1.3+. I brought up a similar approach back in the day when some people were still insisting on ‘tls-unique’. A new channel binding solves some of the issues above.

It is hard to tell what the main reason for not implementing SCRAM more often is. A sense of urgency appears to be lacking. My gut feeling is that, to an implementer, SCRAM looks awfully similar to DIGEST-MD5. Most of the problems with DIGEST-MD5 could be fixed, but the fixes add more complexity.

How to proceed from here? I see a couple of options:

  • Let time go by to see increased adoption. Improving the channel binding situation will help.
  • Learn from the mistakes and introduce a new simple SCRAM, which could have the following properties:
    • No mechanism family, just one mechanism instance.
    • Hash is hard-coded, just like CRAM-MD5.
    • TLS and a channel binding is required and always used.
  • Review one of the PAKE alternatives and specify a SASL mechanism for it. Preferably without repeating the mistakes of CRAM-MD5, DIGEST-MD5 and SCRAM.
  • Give up on having “complex” authentication mechanisms inside SASL, and help some PAKE variant become implemented through a TLS library, and SASL applications should just use EXTERNAL to use TLS user authentication.

Thoughts?

I feel the following XKCD is appropriate here.

Let’s Encrypt Clients

Like many others, I have been following the launch of Let’s Encrypt. Let’s Encrypt is a new zero-cost X.509 Certificate Authority that supports the Automated Certificate Management Environment (ACME) protocol. ACME allows you to automate creation and retrieval of HTTPS server certificates. As anyone who has maintained a number of HTTPS servers can attest, this process has unfortunately been manual, error-prone, and different for each CA.

On some of my personal domains, such as this blog.josefsson.org, I have been using the CACert authority to sign the HTTPS server certificate. The problem with CACert is that the CACert trust anchors aren’t shipped with sufficiently many operating systems and web browsers. The user experience is similar to reaching a self-signed server certificate. For organization-internal servers that you don’t want to trust external parties for, I continue to believe that running your own CA and distributing it to your users is better than using a public CA (compare my XMPP server certificate setup). But for public servers, availability without prior configuration is more important. Therefore I decided that my public HTTPS servers should use a CA/Browser Forum-approved CA with support for ACME, and as long as Let’s Encrypt is trustworthy and zero-cost, they are a good choice.

I was in need of a free software ACME client, and set out to research what’s out there. Unfortunately, I did not find any web pages that listed the available options and compared them. The Let’s Encrypt CA points to the “official” Let’s Encrypt client, written by Jakub Warmuz, James Kasten, Peter Eckersley and several others. The manual contains pointers to two other clients in a seemingly unrelated section. Those clients are letsencrypt-nosudo by Daniel Roesler et al, and simp_le by (again!) Jakub Warmuz. From the letsencrypt.org client-dev mailing list I also found letsencrypt.sh by Gerhard Heift and LetsEncryptShell by Jan Mojžíš. Is anyone aware of other ACME clients?

By comparing these clients, I learned what I did not like in them. I wanted something small, so that I can audit it. I want something that doesn’t require root access. Preferably, it should be able to run on my laptop, since I wasn’t ready to run something on the servers. Generally, it has to be secure, which implies something about how it approaches private key handling. The official letsencrypt client can do everything, and has plugins for various server software to automate the ACME negotiation. All the cryptographic operations appear to be hidden inside the client, which usually means it is not flexible. I really did not like how it was designed; it looks like your typical monolithic proof-of-concept. The simp_le client looked much cleaner, and gave me a good feeling. The letsencrypt.sh client is simple and written in /bin/sh shell script, but it appeared a bit too simplistic. LetsEncryptShell looked decent, but I wanted something more automated.

What all of these clients lacked, and what the letsencrypt-nosudo client had, was the ability to let me do the private-key operations myself. All the operations are done interactively on the command-line using OpenSSL. This would allow me to put the ACME user private key, and the HTTPS private key, on a YubiKey, using its PIV applet and techniques similar to what I used to create my SSH host CA. While the HTTPS private key has to be available on the HTTPS server (it is used to set up TLS connections), I wouldn’t want the ACME user private key to be available there. Similarly, I wouldn’t want to have the ACME or the HTTPS private key on my laptop. The letsencrypt-nosudo tool is otherwise rougher around the edges than the cleaner simp_le client. However, the private key handling aspect was the deciding matter for me.

After I fixed some hard-coded limitations on RSA key sizes, getting the cert was as simple as following the letsencrypt-nosudo instructions. I’ll follow up with a later post describing how to put the ACME user private key and the HTTPS server certificate private key on a YubiKey and how to use that with letsencrypt-nosudo.

So you can now enjoy browsing my blog over HTTPS! Thank you Let’s Encrypt!

Scrypt in IETF

Colin Percival and I have worked on an Internet-Draft on scrypt for some time. I realize now that the -00 draft was published over two years ago, turning this effort today somewhat into archeology rather than rocket science. Still, having a published RFC that is easy to refer to from other Internet protocols will hopefully help establish the point that PBKDF2 alone no longer provides state-of-the-art protection for password hashing.

I have written about password hashing before, where I give a quick introduction to the basic concepts in the context of the well-known PBKDF2 algorithm. The novelty in scrypt is that it is designed to combat brute-force and hardware-accelerated attacks on hashed password databases. Briefly, scrypt expands the password and salt (using PBKDF2 as a component) and then uses that to create a large array (typically tens or hundreds of megabytes) using the Salsa20 core hash function, and then dereferences that large array in a password-dependent, pseudo-random pattern. There are three parameters to the scrypt function: a CPU/memory cost parameter N (varies; typical values are 16384 or 1048576), a block size parameter r (typically 8), and a parallelization parameter p (typically a low number like 1 or 16). The process is described in the draft, and there are further discussions in Colin’s original scrypt paper.
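
If you want to play with the parameters, Python’s standard library exposes scrypt directly (hashlib.scrypt, available when Python is built against a recent OpenSSL). The inputs below are from one of the test vectors in Colin’s paper; note that N=16384 with r=8 needs roughly 16 MiB of memory, so the memory limit has to be raised a bit:

    import hashlib

    # scrypt("pleaseletmein", "SodiumChloride", N=16384, r=8, p=1, dkLen=64)
    key = hashlib.scrypt(b"pleaseletmein", salt=b"SodiumChloride",
                         n=16384, r=8, p=1,
                         maxmem=64 * 1024 * 1024,  # allow ~16 MiB plus overhead
                         dklen=64)
    print(key.hex())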

The document has been stable for some time, and we are now asking for it to be published. Thus now is a good time to provide us with feedback on the document. The live document on GitLab is available if you want to send us a patch.