Why I don’t Use 2048 or 4096 RSA Key Sizes

I have used non-standard RSA key sizes for maybe 15 years, for example in my old OpenPGP key created in 2002. By non-standard key sizes, I mean an RSA key size that is not 2048 or 4096. I do this when I generate OpenPGP/SSH keys (using GnuPG with a smartcard like this) and PKIX certificates (using GnuTLS or OpenSSL, e.g. for XMPP or for HTTPS). People sometimes ask me why. I haven’t seen anyone talk about this, or provide a writeup, that is consistent with my views. So I wanted to write about my motivation, so that it is easy for me to refer to, and hopefully to inspire others to think similarly. Or to provoke discussion and disagreement — that’s fine, and hopefully I will learn something.

Before proceeding, here is some context: when building new things, it is usually better to use the elliptic-curve algorithm Ed25519 instead of RSA. There is also ECDSA — which has had a comparatively slow uptake, for a number of reasons — that is widely available and is a reasonable choice when Ed25519 is not available. There are also post-quantum algorithms, but they are newer and adopting them today requires a careful cost-benefit analysis.

First some background. RSA is an asymmetric public-key scheme that relies on a modulus which is the product of distinct prime numbers (typically two); knowledge of those primes constitutes the private key. The size of the resulting product, called the modulus n, is usually expressed in bit length and forms the key size. Historically, RSA keys were a couple of hundred bits, then 512 bits settled as a commonly used size. With better understanding of RSA security levels, the common key size evolved into 768, 1024, and later 2048. Today’s recommendations (see keylength.com) suggest that 2048 is on the weak side for long-term keys (5+ years), so there has been a trend to jump to 4096. The performance of RSA private-key operations starts to suffer at 4096, and the bandwidth requirements are causing issues in some protocols. Today 2048 and 4096 are the most common choices.
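
In symbols, for the two-prime case with distinct primes p and q:

$$ n = p \cdot q, \qquad \text{key size} = \lceil \log_2 n \rceil $$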

My preference for non-2048/4096 RSA key sizes is based on the simple and naïve observation that if I were to build an RSA key cracker, there is some likelihood that I would need to optimize the implementation for a particular key size in order to get good performance. Since 2048 and 4096 are dominant today, and 1024 was dominant some years ago, it may be feasible to build optimized versions for these three key sizes.

My observation is a conservative decision based on speculation, and speculation on several levels. First I assume that there is an attack on RSA that we don’t know about. Then I assume that this attack is not as efficient for some key sizes as for others, whether at a theoretical level, at the implementation level (optimized libraries for certain characteristics), or at an economic/human level (a decision to focus on common key sizes). Then I assume that by avoiding the efficient key sizes I can increase the difficulty to a sufficient level.

Before analyzing whether those assumptions even remotely make sense, it is useful to understand what is lost by selecting uncommon key sizes. This is to understand the cost of the trade-off.

A significant burden would be if implementations didn’t allow selecting unusual key sizes. In my experience, enough common applications support uncommon key sizes: for example GnuPG, OpenSSL, OpenSSH, Firefox, and Chrome. Some applications limit the permitted choices; this appears to be rare, but I have encountered it once. Some environments also restrict the permitted choices: for example, I have experienced that LetsEncrypt introduced a requirement for RSA key sizes to be a multiple of 8. I noticed this because I chose an RSA key size of 3925 for my blog and received a certificate from LetsEncrypt in December 2015; during renewal in 2016, however, it led to an error message about the RSA key size. Some commercial CAs that I have used before restrict the RSA key size to one of 1024, 2048 or 4096 only. Some smart-cards also restrict the key sizes; sadly, the YubiKey has this limitation. So it is not always possible, but it is possible often enough to be worthwhile for me.
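
As a concrete sketch, generating and inspecting an RSA key of an uncommon size with the OpenSSL command line looks roughly like this (2736 is just an example size, and the exact textual output differs between OpenSSL versions):

$ openssl genrsa -out key.pem 2736
$ openssl rsa -in key.pem -noout -text | head -n 1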

Another cost is that RSA signature operations are slowed down. Partly this is inherent: the cost of the private-key operation grows with the modulus size, so a 2736-bit key is slower than a 2048-bit one. Partly it is an implementation matter: libraries often include optimized code paths (for example, faster modular reduction) for moduli whose sizes are powers of two or multiples of the machine word size, and an unusual size may miss those fast paths. I have not done benchmarks, but I have not experienced this as a practical problem. I don’t notice RSA operations in the flurry of all the other operations (network, IO) that are usually involved in my daily life. Deploying this on a large scale may have effects, of course, so benchmarks would be interesting.
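
A crude benchmark sketch using nothing but the OpenSSL command line and a bash-like shell (the loop count of 100 and the small random input are arbitrary choices of mine):

$ openssl genrsa -out rsa2048.pem 2048
$ openssl genrsa -out rsa2736.pem 2736
$ head -c 1000 /dev/urandom > data.bin
$ time for i in $(seq 100); do openssl dgst -sha256 -sign rsa2048.pem -out /dev/null data.bin; done
$ time for i in $(seq 100); do openssl dgst -sha256 -sign rsa2736.pem -out /dev/null data.bin; done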

Back to the speculation that leads me to this choice. The first assumption is that there is an attack on RSA that we don’t know about. In my mind, until there is a proof that the currently known attacks (GNFS-based attacks) are the best that can be found, or at least some heuristic argument that we can’t do better than the current attacks, the probability of an unknown RSA attack is therefore, as strange as it may sound, 100%.

The second assumption is that the unknown attack(s) are not as efficient for some key sizes as for others. That statement can also be expressed like this: the cost to mount the attack is higher for some key sizes than for others.

At the implementation level, it seems reasonable to assume that implementing an RSA cracker for arbitrary key sizes could be more difficult and costlier than focusing on particular key sizes. Focusing on a few key sizes allows optimization and less complex code.

At the mathematical level, the assumption that the attack would be costlier for certain RSA key sizes appears dubious. It depends on the kind of algorithm the unknown attack is. For something similar to GNFS attacks, I believe the same algorithm applies equally to RSA key sizes of 2048, 2730 and 4096, and that the running time depends mostly on the key size. Other algorithms that could crack RSA, such as some approximation algorithms, do not seem likely to be thwarted by non-standard RSA key sizes either. I am not a mathematician, though.
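
For reference, the heuristic running time of GNFS for factoring a modulus n is commonly quoted (restated here for context) as

$$ \exp\!\left( \left( (64/9)^{1/3} + o(1) \right) (\ln n)^{1/3} (\ln \ln n)^{2/3} \right) $$

which grows with the magnitude of n but does not single out any particular bit pattern of the key size; the constant (64/9)^{1/3} ≈ 1.923 is the same one that appears in the NIST strength formula discussed in the comments below.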

At the economic or human level, it seems reasonable to say that if you can crack 95% of all keys out there (sizes 1024, 2048, 4096), then that is good enough, and cracking the last 5% is just diminishing returns on the investment. I am making up the 95% number here, but I would guess that more than 95% of all RSA keys on the Internet today are 1024, 2048 or 4096 bits. So this aspect holds as long as people keep behaving as they have.

The final assumption is that by using non-standard key sizes I raise the bar sufficiently high to make an attack impossible. To be honest, that scenario appears unlikely. However, it might increase the cost somewhat, by a factor of two or five. Which might make someone target lower-hanging fruit instead.

Putting my argument together: I have 1) identified some downsides of using non-standard RSA key sizes and discussed their costs and implications, and 2) mentioned some speculative upsides of using non-standard key sizes. I am not aware of any argument that my speculation has a 0% chance of being true; it appears there is some remote chance, higher than 0%, that it holds. Therefore, my personal conservative approach is to hedge against this unlikely, but still possible, attack scenario by paying the moderate cost of using non-standard RSA key sizes. Of course, the QA engineer in me also likes to break things by not doing what everyone else does, so I end this with an ObXKCD.

26 Replies to “Why I don’t Use 2048 or 4096 RSA Key Sizes”

  1. How many valid RSA public keys are there that are exactly N bits in length (that is, bit N-1 is 1 and all bits >= N are 0)?

    How many valid RSA public keys are there that are less than N bits in length?

    As an approximation, consider how many non-negative integers there are that meet these size constraints. There are exactly as many N-bit non-negative integers as there are < N-bit integers.

    With 4-bit integers: there are 8 four-bit non-negative integers (8 through 15) and 8 non-negative integers with fewer than 4 bits (0 through 7).

    So by avoiding values with the high bit set, at best you've doubled the brute-forcer's work.

    The attacks to be worried about are not strictly brute-force attacks, of course, and valid RSA public keys are not evenly distributed across all non-negative integers. But it's not clear to me that this is much of a win.

  2. What if using a non-standard key size singles your keys out for special attention? 🙂

    • That’s why I need to get you all doing the same 🙂

      /Simon

      • And then those sizes become semi-standard and the premise of using “non-standard” sizes no longer applies. You can’t have it both ways.

  3. If your threat model includes an organisation which can afford the resources required to crack a ~4000-bit RSA key, then you are fighting the wrong battle.

    Such an organisation – state-level actor, e.g. NSA – has already infected you via zero days in the software you run (Dirty COW, etc), persisted those infections (via modifications to motherboard or HDD/SSD firmware), can interdict any hardware you seek to buy online, has the skills to break into your home/office/etc undetected to fit sniffing devices, has access to classified research about TEMPEST…

    If the NSA is your threat model and you are not a state-level actor (e.g. another government), then you have probably picked the wrong battle. 🙁

    • Eventually attacks become public, and then there is a chance that I might be slightly safer because of my approach. Given the cost is so small, I’m happy to pay it to hedge against that risk.

      If the NSA wants my key, the XKCD posted in the next comment is more appropriate 🙂

      /Simon

  4. There’s another element to your argument, which has some practical salience based on recent developments (e.g. the LogJam attacks). If an attacker needs to do a bunch of pre-computation to attack keys of a given size, having an unusual size means that they would have to go to special effort just to hit your key.

    More broadly, that suggests that people shouldn’t be recommended to use a key of a fixed size, but rather one that’s at least their minimum target (e.g. 2048) plus some random additional bits within a range that doesn’t create too much extra work to use it (e.g. up to 2504). That would create a broader impediment to attacks requiring precomputation or size-specialized hardware/algorithms, because no one precise size would be predominant.

    • That is a good point. It is not strictly covered by what I wrote, so it really should be part of the argument. It seems likely that most attacks in realistic settings will have a huge pre-computation step to speed them up. Using an unusual key size could potentially help a little here. Probably not by a significant factor, but making the attack, say, two or five times as difficult could be worth the small price of using an unusual key size.

      Thanks,
      /Simon

  5. Pingback: Why I don’t Use 2048 or 4096 RSA Key Sizes https://blog.josefsson.o… | Dr. Roy Schestowitz (罗伊)

  6. Is there a difference between a 2000-bit key and a 2048-bit key beginning with 48 zero bits?

  7. You might have missed a major disadvantage: not only might a key cracker be faster on standard sizes, but so might our implementations doing the encryption and decryption. If, let’s say, 3333 is as slow as 4096, then 3333 would be a really bad choice. Did you do the benchmark?

    • Hello. Indeed, benchmarks would be useful. I discussed the performance penalty in my writeup. RSA signature verification stays the same (very quick); only RSA signature creation is affected, and yes, it will be slower. Still, I haven’t found it to take any noticeable amount of time. Server-side performance matters for heavy servers, I’m sure, but then you really want Ed25519 or ECDSA instead of RSA anyway.

      /Simon

  8. DJB also mildly likes the NIST P-521 curve. It’s likely safe to use.

    http://blog.cr.yp.to/20140323-ecdsa.html

    “To be fair I should mention that there’s one standard NIST curve using a nice prime, namely 2^521 – 1; but the sheer size of this prime makes it much slower than NIST P-256.”

    It’s this one:
    $ openssl ecparam -list_curves
    secp521r1 : NIST/SECG curve over a 521 bit prime field

    Add the following to your x509 certificate to force the P-521 curve:

    $ openssl ecparam -name secp521r1
    -----BEGIN EC PARAMETERS-----
    blahblah
    -----END EC PARAMETERS-----

  9. NIST also gives an AES-equivalent strength formula on page 92 of this document (if you are mandated top-secret, then you need at least AES192):

    http://csrc.nist.gov/groups/STM/cmvp/documents/fips140-2/FIPS1402IG.pdf
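
    Restated as a formula (my transcription of what the script below computes), for an l-bit modulus the estimated symmetric-equivalent strength is:

    $$ \text{strength}(l) \approx \frac{1.923 \cdot (l \ln 2)^{1/3} \cdot \left( \ln(l \ln 2) \right)^{2/3} - 4.69}{\ln 2} $$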

    $ cat keysize-NIST.bc
    #!/usr/bin/bc -l
    l = read()
    scale = 14; a = 1/3; b = 2/3; t = l * l(2); m = l(t) # a^b == e(l(a) * b)
    n = e( l(m) * b ); o = e( l(t) * a ); p = (1.923 * o * n - 4.69) / l(2)
    print "Strength: ", p, "\n"

    $ echo 2868 | ./keysize-NIST.bc
    Strength: 128.01675571278223
    $ echo 7295 | ./keysize-NIST.bc
    Strength: 192.00346260354399
    $ echo 14446 | ./keysize-NIST.bc
    Strength: 256.00032964845911

    $ echo 2048 | ./keysize-NIST.bc
    Strength: 110.11760837749330
    $ echo 2127 | ./keysize-NIST.bc
    Strength: 112.01273358822347

  10. You could argue that, with the common key sizes, the code used to generate a key with those parameters has been reviewed by more individuals, lowering the chance of a bug in the implementation generating a completely insecure key.

    • It is a valid concern; however, if you read the code for how RSA key generation works, it is the same code for all key lengths in most places. You generate random numbers of the appropriate size and test whether they are prime (typically with Miller-Rabin). RSA is not like elliptic curves, where you almost have one optimized implementation for each parameter set.
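
      To illustrate that the building blocks are size-generic: the OpenSSL command line will happily generate a probabilistically tested prime of any requested bit length (1368 here is an arbitrary example, roughly half of a 2736-bit modulus):

      $ openssl prime -generate -bits 1368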

      /Simon

  11. Do you have any concerns about the quality of implementation in endpoints that support non-PoT (non-power-of-two) key sizes? If you end up in a fallback path of sorts, I’m fully expecting it to be bitrotted and less audited.

    • Hi Lars. Your concern appears similar to the previous concern about RSA key generation for non-PoT key sizes. It is a valid concern; however, I suspect it is brought on by historical problems with various ECDSA implementations, where some curves indeed trigger special code that has seen less scrutiny than the commonly used curves. I don’t see this as nearly as big a risk for RSA. The endpoints do RSA verification, which is an extremely simple and fast operation, much faster than ECDSA verification, and the math and implementations are the same regardless of key size. I’ve sometimes seen implementations that have two RSA implementations, one for “small keys” and one for “large keys”, but this has been in hardware rather than software, and the reason is probably that they already had a trusted implementation for 1024/2048 keys and then added a new one for 4096 instead of rewriting everything.
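
      For what it’s worth, verification with an uncommon-size key uses exactly the same commands as with any other size. A sketch, assuming key.pem holds an RSA key of whatever size and sig.bin is a signature over data.bin:

      $ openssl rsa -in key.pem -pubout -out pub.pem
      $ openssl dgst -sha256 -verify pub.pem -signature sig.bin data.bin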

      This is a good aspect that I didn’t cover, so any complete writeup of my argument should include a discussion and analysis of this topic.

      Thanks,
      /Simon

  12. This is an interesting topic, even though the article is written in a somewhat speculative way.

  13. Your blog title is “Why I don’t Use 2048 or 4096 RSA Key Sizes” but your blog uses 2048. Also, I don’t understand why one would use a non-standard size, because everyone can see which size your site is using. So what is the point of using 2058 instead of 2048?

    • Hi Jooseppi! My blog uses a 2736-bit RSA key. Indeed, everyone will be able to see what public key size I am using. The public key is public, after all, and my argument doesn’t involve hiding anything. I tried to make the point of using a non-standard key size clear in the post; see especially the wrap-up in the final paragraph.
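
      For example, anyone can check a site’s key size remotely along these lines (example.org is a placeholder for the site to inspect):

      $ openssl s_client -connect example.org:443 -servername example.org </dev/null 2>/dev/null | openssl x509 -noout -text | grep 'Public-Key'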

      /Simon

  14. Pingback: Planning for a new OpenPGP key – Simon Josefsson's blog

  15. Pingback: OpenPGP master key on Nitrokey Start – Simon Josefsson's blog