Why point compression shouldn't scare you
Posted by Matthias Valvekens
Summary and motivation
The most widely used method to represent ECC public keys encodes the coordinates of the public key point directly, as-is. However, there’s an alternative method that not only uses less data, but also comes with a number of security benefits (almost) for free.
The “compressed point” convention was encumbered by (dubious) patents for the better part of the past two decades, which has unfortunately severely limited widespread adoption of these ideas. Having said that, the last of these patents expired several years ago, so now we’re definitely free to use point compression as we please.
Below, I’ll attempt to argue why you should be using point compression whenever you can. Spoiler alert: it has very little to do with bandwidth optimisation or performance gains.
An ECC public/private key pair consists of the following data:
- a previously agreed-upon elliptic curve E over some (finite) field F;
- a base point G that generates a large subgroup of E of prime order;
- a public point (often denoted by Q);
- a private number (often denoted by d).
The public and private parts are related by Q = dG. As such, the security of ECC hinges on the difficulty of solving the discrete logarithm problem in the subgroup generated by G.
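To make the relation Q = dG concrete, here is a toy double-and-add implementation over a tiny prime field. The curve, prime, and base point are made-up toy parameters (far too small for any real security), and all helper names are my own:

```python
P = 17          # toy field prime
A, B = 0, 7     # toy curve coefficients: y^2 = x^3 + 0*x + 7

def is_on_curve(pt):
    if pt is None:          # None encodes the point at infinity
        return True
    x, y = pt
    return (y * y - (x * x * x + A * x + B)) % P == 0

def point_add(p1, p2):
    """Affine point addition on the curve; None is the point at infinity."""
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    x1, y1 = p1
    x2, y2 = p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None         # p1 == -p2, so the sum is the point at infinity
    if p1 == p2:            # tangent slope for doubling
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
    else:                   # chord slope for distinct points
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    y3 = (lam * (x1 - x3) - y1) % P
    return (x3, y3)

def scalar_mult(d, pt):
    """Compute d*pt by double-and-add."""
    result, addend = None, pt
    while d:
        if d & 1:
            result = point_add(result, addend)
        addend = point_add(addend, addend)
        d >>= 1
    return result

G = (15, 13)                # a point on the toy curve
assert is_on_curve(G)
d = 5                       # "private key"
Q = scalar_mult(d, G)       # "public key": Q = dG = (6, 6) on this toy curve
assert is_on_curve(Q)
```

Real deployments use primes hundreds of bits long and constant-time arithmetic, of course; the point here is only the algebraic relationship between d and Q.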
For the purposes of this post, we’ll assume that F = F_p for some prime p. The techniques we discuss apply to more general fields as well, but that comes at the cost of some additional complexity. Besides, prime fields cover the vast majority of ECC applications in common use. If you want to know how the scheme works in characteristic 2, have a look at SEC 1, secs. 2.3.3, 2.3.4. Most of the time, we’ll also assume that G generates all of E (or, in other words, that the cofactor is one). I’ll point out when that is relevant. Throughout, we identify F_p with the ring of residue classes Z/pZ.
Since elliptic curves live in the plane F_p x F_p, the public point Q is given by two coordinates (x, y), each of which is a number in F_p. The most straightforward (and still most common) way to represent the public key is by serialising both of these. However, this representation actually has a lot of redundancy. We’ll explore why that is the case in the next section.
Point compression over prime fields in a nutshell
The encoding procedure
Let’s assume that our elliptic curve E is given by the equation y^2 = x^3 + ax + b. Here, a and b are elements of F_p. Note that such a rewriting is always possible if p is a prime greater than 3.
Suppose now that (x, y) is a point on the curve that is not the point at infinity. Then x and y are related as follows: y^2 = x^3 + ax + b. In other words, y must be a square root of x^3 + ax + b in F_p. Recall that square roots in fields are unique up to sign.1 In particular, it follows that to specify (x, y), we only need x and the parity2 of y.
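This sign ambiguity is easy to check numerically. A brute-force sketch over an arbitrarily chosen toy prime, confirming that every nonzero element has exactly two square roots, and that (since p is odd) their integer representatives always have opposite parity:

```python
p = 23  # arbitrary small odd prime, for illustration only

for y in range(1, p):
    # All square roots of y^2 in F_p, found by exhaustive search.
    roots = sorted(z for z in range(p) if (z * z - y * y) % p == 0)
    assert roots == sorted([y, p - y])        # exactly y and -y
    assert (y % 2) != ((p - y) % 2)           # opposite parity
```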
In practice, curve points are encoded in compressed form as follows.
- Set l to the bit length of p, and allocate an output buffer of ceil(l/8) + 1 bytes.
- Let (x, y) be a point on E that is not the point at infinity. We identify x and y with their integer representatives in {0, ..., p - 1}.
- If y is even, set the first byte of the result to 0x02. If it is odd, set it to 0x03.
- Encode x into ceil(l/8) bytes in big-endian form, and fill the rest of the output buffer with the resulting bytes.
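The steps above fit in a few lines of Python. This is only a sketch (the function name is my own), with p standing for the curve’s field prime:

```python
def compress_point(x, y, p):
    """Encode the affine point (x, y) in SEC 1 compressed form."""
    length = (p.bit_length() + 7) // 8           # ceil(l/8) bytes for x
    prefix = b"\x02" if y % 2 == 0 else b"\x03"  # records the parity of y
    return prefix + x.to_bytes(length, "big")
```

For example, with the toy prime p = 23, the point (10, 4) compresses to the two bytes 02 0a, since 4 is even and 23 fits in a single byte.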
That’s all very straightforward. The decompression procedure is slightly more interesting.
Decoding compressed points
As we noted in the previous paragraph, recovering the y-coordinate of a curve point given the x-coordinate and parity bit amounts to taking a square root in F_p. However, in real-world data, it may happen that the supplied coordinate x is such that x^3 + ax + b is not a quadratic residue in F_p! That means that x cannot possibly be the x-coordinate of any curve point, and hence the input must be rejected.
After taking that into account, the decoding procedure is not hard to describe.
- Verify that the input starts with 0x02 or 0x03. If not, reject the input.3
- Introduce an auxiliary variable t, the value of which is 0 if the first byte of the input is 0x02, and 1 otherwise.
- Decode the remaining bytes of the input into an integer x, using big-endian encoding. If x >= p, reject the input.
- Attempt to compute a square root of x^3 + ax + b modulo p, and denote the result by y. If no square root is found, reject the input.
- If y and t have the same parity, then output (x, y). Otherwise output (x, p - y).
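A minimal sketch of the decoding steps, assuming a prime p = 3 (mod 4), in which case a candidate square root of s is simply s^((p+1)/4) mod p; primes of other shapes need a general algorithm such as Tonelli-Shanks. The function name is my own, and a, b, p are the curve parameters:

```python
def decompress_point(data, a, b, p):
    """Decode a SEC 1 compressed point, or raise ValueError (p = 3 mod 4 only)."""
    length = (p.bit_length() + 7) // 8
    if len(data) != length + 1 or data[0] not in (0x02, 0x03):
        raise ValueError("malformed input")
    t = data[0] - 0x02                      # desired parity of y
    x = int.from_bytes(data[1:], "big")
    if x >= p:
        raise ValueError("x out of range")
    s = (x * x * x + a * x + b) % p
    y = pow(s, (p + 1) // 4, p)             # candidate square root of s
    if (y * y) % p != s:
        raise ValueError("not a quadratic residue: no such curve point")
    if y % 2 != t:
        y = p - y                           # pick the root with parity t
    return (x, y)
```

Note how the explicit check that y^2 really equals s is what makes malformed inputs fail closed: an x-coordinate for which x^3 + ax + b has no square root can never be silently accepted as a curve point.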
Observe that, by construction, (x, y) and (x, p - y) are always valid curve points.
The decoding procedure already illustrates one of the fundamental advantages of using compressed points: provided that the modular arithmetic is implemented correctly, it’s impossible to accidentally admit points that don’t lie on the curve!
Many real-world implementations of ECC-based algorithms don’t bother to verify whether the coordinates they ingest actually correspond to valid curve points. This is very problematic, and a source of dangerous vulnerabilities; see e.g. BMM00. Using point compression gets around this issue by forcing the implementation to compute one of the coordinates using the curve parameters, ensuring that the output is always part of the relevant curve by construction.
Of course, in order for a point to be a valid public key, it must additionally lie in the subgroup of E generated by the chosen base point G. As it happens, this is one of the reasons why many ECC standards choose their curve parameters such that G generates the entire group: it absolves implementations of the duty to validate whether or not a point lands in the correct subgroup of E.4 The security impact of this kind of check can be subtle; see e.g. BCJZ20, secs. 4.1.3, 5.4 for an analysis specific to Ed25519.5
Let’s summarise the main points.
- Any potentially relevant patents on point compression expired years ago.
- Point compression and decompression are not hard to implement.
- Using point compression gets you some extra security, basically for free.
So, if you’re in a position where you don’t have to worry about legacy systems that don’t understand compressed points: please use them!
While RFC 5480 allows both compressed and uncompressed points in certificate public key info data, support for compressed points isn’t a requirement of the specification—presumably because of perceived patent issues. As such, if you’re dealing with legacy clients or public workflows, you might not be able to get away with using compressed points everywhere…
In case you don’t remember why this is the case, here’s a quick proof. Let F be a field, let y be an arbitrary element of F, and suppose that z^2 = y^2 for some z in F. Then (z - y)(z + y) = 0. Since fields do not have zero divisors, it follows that either z = y or z = -y.↩︎
There’s some abuse of language here: what we’re really asking for is the parity of the unique integer representative y' in the residue class of y such that 0 <= y' < p. This makes sense because p is odd: p - y' is then the unique such representative of -y, and it has opposite parity.↩︎
For some algorithms that work with curve parameters where the cofactor is nontrivial, there exist more sophisticated point compression procedures that are also capable of guaranteeing that the output still lands in the correct subgroup. Ristretto255 falls into this category.↩︎
Strictly speaking, the point compression scheme used in (standard) Ed25519 implementations is slightly different from the one discussed here, but the broader point stands.↩︎