the crypto_sign/ed25519 subdirectory of the SUPERCOP benchmarking tool, starting in version 20110629. This software will also be integrated into the next release of the Networking and Cryptography library (NaCl). The Ed25519 software consists of three separate implementations, all providing the same interface:
• amd64-51-30k. Assembly-language implementation for the amd64 architecture, using radix 2^51 and a 30KB precomputed table.
• amd64-64-24k. Assembly-language implementation for the amd64 architecture, using radix 2^64 and a 24KB precomputed table.
• ref. Slow but relatively simple and portable C implementation.
Both SUPERCOP and NaCl automatically select the fastest implementation on each computer.
point, group, field, and scalar operations on a curve isomorphic to the twisted Edwards curve defined by -x² + y² = 1 - (121665/121666)x²y² over GF(2^255 - 19).
ed25519 in your vaguely C-like language:
1. Download ref10
2. Copy + paste
3. Fix linter errors
Theorem: understand one and you can figure out the rest
Corollary: understand pieces of many, then combine
breaks down along two categories:
1. Field math. The implementation of basic arithmetic operations (addition, subtraction, multiplication, squaring, inversion, and reduction) on integers in GF(2^255-19) and routines for manipulating field elements.
2. Group logic. The actual elliptic curve part, including point addition, doubling, scalar multiplication, and a variety of coordinate representations and conversion routines.
the prime 2^255 - 19” You could also say “the finite field of characteristic 2^255 - 19” We are implementing multi-precision arithmetic over GF(2^255-19). Some things to keep in mind:
• These are 255-bit integers
• We don’t want to use a generic bignum
• Aiming for both constant-time execution and high performance
19 according to the following criteria: primes as close as possible to a power of 2 save time in field operations (as in, e.g, [9]), with no effect on (conjectured) security level; primes slightly below 32k bits, for some k, allow public keys to be easily transmitted in 32-bit words, with no serious concerns regarding wasted space; k = 8 provides a comfortable security level. I considered the primes 2^255 + 95, 2^255 − 19, 2^255 − 31, 2^254 + 79, 2^253 + 51, and 2^253 + 39, and selected 2^255 − 19 because 19 is smaller than 31, 39, 51, 79, 95. (Bernstein, “Curve25519: new Diffie-Hellman speed records”)
(as opposed to binary or optimal extension fields) We like characteristic primes near powers of two. Specifically, primes of the form 2^k - c are called Crandall primes. When c is small relative to the size of a machine word, this shape allows you to limit carry propagation during multiplications. Plus something about cramming public keys into 32-bit words. It was 2006. IMPORTANT POINT: the choice of prime field and representation are usually driven by clever optimizations. They can be absurdly platform-specific.
much larger than native integers? We choose an efficient radix and decompose the numbers into multiple limbs. So, how can you pack 255 bits?
• On 32-bit, use radix 2^32: 8 limbs * 32 bits = 256 bits
• On 64-bit, use radix 2^64: 4 limbs * 64 bits = 256 bits
These are “uniform” and “saturated” representations, because each limb is the same size and we’re using all of the bits available in each word.
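As a quick illustration (these are not types from the talk's repo, just the two saturated layouts spelled out as Go types):

type feSaturated32 [8]uint32 // radix 2^32: 8 limbs * 32 bits = 256 bits
type feSaturated64 [4]uint64 // radix 2^64: 4 limbs * 64 bits = 256 bits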
split 255-bit integers into ten 26-bit pieces, rather than nine 29-bit pieces or eight 32-bit pieces? Answer: The coefficients of a polynomial product do not fit into the Pentium M’s fp registers if pieces are too large. The cost of handling larger coefficients outweighs the savings of handling fewer coefficients. The overall time for 29-bit pieces is sufficiently competitive to warrant further investigation, but so far I haven’t been able to save time this way. I’m sure that 32-bit pieces, the most common choice in the literature, are a bad idea. Of course, the same question must be revisited for each CPU. (Bernstein)
the number of bits we “care” about is less than the size of the word. The difference is called headspace. Some implementations also use non-uniform limb schedules. So, how can you pack 255 bits?
10 limbs * 25.5 bits = 255
“25.5” means a balanced alternating limb schedule of 25/26/25/26/… bits
Given that there are 10 pieces, why use radix 2^25.5 rather than, e.g., radix 2^25 or radix 2^26? Answer: My ring R contains 2^255 * x^10 − 19, which represents 0 in Z/(2^255 − 19). I will reduce polynomial products modulo 2^255 * x^10 − 19 to eliminate the coefficients of x^10, x^11, etc. With radix 2^25, the coefficient of x^10 could not be eliminated. With radix 2^26, coefficients would have to be multiplied by 2^5 · 19 rather than just 19, and the results would not fit into an fp register. (Bernstein)
Look, it was 2006.
is 64-bit (and usually amd64) Use 5 limbs in uniform radix 2^51: 5 limbs * 51 bits = 255 bits In practice, this bound is loose. Unsaturated wins here because we can do less carry propagation by letting the limbs grow beyond 51 bits between operations.
typedef struct { unsigned long long v[5]; } fe25519;

// FieldElement represents an element of the field GF(2^255-19).
// An element t represents the integer
// t[0] + t[1]*2^51 + t[2]*2^102 + t[3]*2^153 + t[4]*2^204.
type FieldElement [5]uint64
// FeAdd sets out = a + b. Long sequences of additions without
// reduction that let coefficients grow larger than 54 bits would
// be a problem.
func FeAdd(out, a, b *FieldElement) {
	out[0] = a[0] + b[0]
	out[1] = a[1] + b[1]
	out[2] = a[2] + b[2]
	out[3] = a[3] + b[3]
	out[4] = a[4] + b[4]
}

“Do not have such sequences of additions”
// FeSub sets out = a - b.
func FeSub(out, a, b *FieldElement) {
	var t FieldElement
	t = *b

	// Reduce each limb below 2^51
	t[1] += t[0] >> 51
	t[0] = t[0] & maskLow51Bits
	t[2] += t[1] >> 51
	t[1] = t[1] & maskLow51Bits
	t[3] += t[2] >> 51
	t[2] = t[2] & maskLow51Bits
	t[4] += t[3] >> 51
	t[3] = t[3] & maskLow51Bits
	t[0] += (t[4] >> 51) * 19
	t[4] = t[4] & maskLow51Bits

	// This is slightly more complicated.
	// Because we use unsigned coefficients, we
	// first add a multiple of p and then
	// subtract.
	out[0] = (a[0] + 0xFFFFFFFFFFFDA) - t[0]
	out[1] = (a[1] + 0xFFFFFFFFFFFFE) - t[1]
	out[2] = (a[2] + 0xFFFFFFFFFFFFE) - t[2]
	out[3] = (a[3] + 0xFFFFFFFFFFFFE) - t[3]
	out[4] = (a[4] + 0xFFFFFFFFFFFFE) - t[4]
}

At this point, it’s going to be hard to fit these on slides: https://github.com/gtank/defcon25_crypto_village
easy to reduce it by a number close to a power of 2. Generally, if n = 2^k - c, then 2^k ≡ c (mod n).
Let n = 7 = 2^3 - 1, then 2^3 ≡ 1 (mod n)
To reduce x mod n, first convert x to base 2^3 by grouping:
If x = (10010), then
x’ = (10) * 2^3 + (010) (mod n)
x’ = (10) * 1 + (010) (mod n)
x’ = (10) + (010) (mod n)
Which is the correct answer: 18 ≡ 4 (mod 7)
h/t to hdevalence. Full explain & better example on his blog.
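A minimal sketch of this grouping trick (illustrative only, not code from the talk's repo; works when c is small relative to 2^k):

func reduceCrandall(x uint64, k uint, c uint64) uint64 {
	n := (uint64(1) << k) - c
	// While x has more than k bits, replace the high part hi*2^k with
	// hi*c, since 2^k ≡ c (mod n).
	for x >= (uint64(1) << k) {
		hi := x >> k
		lo := x & ((uint64(1) << k) - 1)
		x = hi*c + lo
	}
	if x >= n {
		x -= n
	}
	return x
}

// reduceCrandall(18, 3, 1) == 4, matching 18 ≡ 4 (mod 7) above.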
Some inputs are multiplied by 2; this is combined with multiplication by 19 where possible. The coefficient reduction after squaring is the same as for multiplication. (Bernstein, Duif, Lange, Schwabe, Yang, “High-speed high-security signatures”) Very similar to multiplication. Not very interesting. The thing to know is that squaring is noticeably cheaper than multiplication. When implementing higher-level operations, you should use FeSquare(x) instead of FeMul(x, x).
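A short usage sketch (assuming FeSquare(out, a *FieldElement) and FeMul(out, a, b *FieldElement) signatures analogous to FeAdd/FeSub above), computing t = x² * y for field elements x and y:

var x2, t FieldElement
FeSquare(&x2, &x) // cheaper than FeMul(&x2, &x, &x)
FeMul(&t, &x2, &y)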
little theorem:
a^p ≡ a (mod p)
a^(p-1) ≡ 1 (mod p)
a * a^(p-2) ≡ 1 (mod p)
So inversion mod p is equivalent to raising to the power p - 2.
If p = 2^255 - 19, then p - 2 = 2^255 - 21
Code: FeInvert fe.go#L130
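To see just the math, without the constant-time machinery, here is a minimal sketch using math/big (illustrative only; the real FeInvert uses a fixed chain of squarings and multiplications on FieldElements):

func invertModP(a *big.Int) *big.Int {
	// p = 2^255 - 19
	p := new(big.Int).Sub(new(big.Int).Lsh(big.NewInt(1), 255), big.NewInt(19))
	pMinus2 := new(big.Int).Sub(p, big.NewInt(2))
	// a^(p-2) ≡ a^(-1) (mod p), by Fermat's little theorem.
	return new(big.Int).Exp(a, pMinus2, p)
}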
Reduce each limb below 2^51, propagating carries until you reach the top limb carry, which you multiply by 19 and wrap into the bottom limb.
// TODO Document why this works.
// It's the elaborate comment about r = h-pq etc etc.
Code: FeReduce fe.go#L130
Elaborate comment (about 32-bit repr): supercop/ref10/fe_tobytes.c#L10
(general idea is reasoning about progressively tighter bounds)
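The carry chain itself is the same one that appears inside FeSub above; as a standalone sketch (a hypothetical helper, not the full FeReduce with its bound analysis):

func feCarry(t *FieldElement) {
	t[1] += t[0] >> 51
	t[0] &= maskLow51Bits
	t[2] += t[1] >> 51
	t[1] &= maskLow51Bits
	t[3] += t[2] >> 51
	t[2] &= maskLow51Bits
	t[4] += t[3] >> 51
	t[3] &= maskLow51Bits
	// 2^255 ≡ 19 (mod 2^255 - 19), so the top carry wraps around times 19.
	t[0] += (t[4] >> 51) * 19
	t[4] &= maskLow51Bits
}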
GF(2^255 - 19)
Ed25519 is a twisted Edwards curve. This post gives a great overview of what exactly that means:
https://moderncrypto.org/mail-archive/curves/2016/000806.html
(Mike Hamburg, “Climbing the elliptic learning curve”)
// IsOnCurve reports whether the given (x,y) lies on the curve.
IsOnCurve(x, y *big.Int) bool

// Add returns the sum of (x1,y1) and (x2,y2)
Add(x1, y1, x2, y2 *big.Int) (x, y *big.Int)

// Double returns 2*(x,y)
Double(x1, y1 *big.Int) (x, y *big.Int)

// ScalarMult returns k*(Bx,By) where k is in big-endian form.
ScalarMult(x1, y1 *big.Int, k []byte) (x, y *big.Int)

// ScalarBaseMult returns k*G, where G is the base point of the group
// and k is an integer in big-endian form.
ScalarBaseMult(k []byte) (x, y *big.Int)
}

https://golang.org/pkg/crypto/elliptic/#Curve
we are dealing with explicit formulas. Edwards curves give us complete formulas without exceptional failure cases. This makes implementation easy and “safe” Explicit Formulas Database: https://www.hyperelliptic.org/EFD/g1p/auto-twisted.html
are explicitly-named field elements. There are multiple coordinate systems in use.

type ProjectiveGroupElement struct {
	X, Y, Z field.FieldElement
}

type ExtendedGroupElement struct {
	X, Y, Z, T field.FieldElement
}
x = X/Z
y = Y/Z
Affine to projective: set Z = 1
Projective to affine: multiply by 1/Z

// Most implementations use this
// for improved doubling
// efficiency. But...
type ProjectiveGroupElement struct {
	X, Y, Z field.FieldElement
}
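A minimal sketch of the projective-to-affine conversion (hypothetical helper name; assumes FeInvert and FeMul are exported from the field package with signatures analogous to FeAdd/FeSub above):

func (p *ProjectiveGroupElement) toAffineSketch() (x, y field.FieldElement) {
	var zInv field.FieldElement
	field.FeInvert(&zInv, &p.Z) // one field inversion: 1/Z
	field.FeMul(&x, &p.X, &zInv)
	field.FeMul(&y, &p.Y, &zInv)
	return
}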
x = X/Z
y = Y/Z
x * y = T/Z
Affine to extended: Z=1, T=xy
Extended to affine: drop T, clear Z

// Used for almost everything else
type ExtendedGroupElement struct {
	X, Y, Z, T field.FieldElement
}

Hisil-Wong-Carter-Dawson, “Twisted Edwards Curves Revisited”
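And the affine-to-extended direction as a sketch (hypothetical helper; field.FieldElement{1} is the element 1 in radix-2^51, given the [5]uint64 representation above):

func (p *ExtendedGroupElement) fromAffineSketch(x, y *field.FieldElement) {
	p.X = *x
	p.Y = *y
	p.Z = field.FieldElement{1} // Z = 1
	field.FeMul(&p.T, x, y)     // T = x*y
}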
x = X/Z
y = Y/T
Used in mixed-coordinate addition and doubling chains.

// I got nothing. They work!
type CompletedGroupElement struct {
	X, Y, Z, T FieldElement
}

typedef struct {
	fe25519 x;
	fe25519 z;
	fe25519 y;
	fe25519 t;
} ge25519_p1p1;
// IsOnCurve reports whether the given (x,y) lies on the curve.
IsOnCurve(x, y *big.Int) bool

// Add returns the sum of (x1,y1) and (x2,y2)
Add(x1, y1, x2, y2 *big.Int) (x, y *big.Int)

// Double returns 2*(x,y)
Double(x1, y1 *big.Int) (x, y *big.Int)

// ScalarMult returns k*(Bx,By) where k is in big-endian form.
ScalarMult(x1, y1 *big.Int, k []byte) (x, y *big.Int)

// ScalarBaseMult returns k*G, where G is the base point of the group
// and k is an integer in big-endian form.
ScalarBaseMult(k []byte) (x, y *big.Int)
}

https://golang.org/pkg/crypto/elliptic/#Curve
// Bytes returns the absolute value of x as a big-endian byte slice.
func (x *Int) Bytes() []byte {
	buf := make([]byte, len(x.abs)*_S)
	return buf[x.abs.bytes(buf):]
}

// SetBytes interprets buf as the bytes of a big-endian unsigned
// integer, sets z to that value, and returns z.
func (z *Int) SetBytes(buf []byte) *Int {
	z.abs = z.abs.setBytes(buf)
	z.neg = false
	return z
}
// Bytes returns the absolute value of x as a big-endian byte slice.

// SetBytes interprets buf as the bytes of a big-endian unsigned
// integer, sets z to that value, and returns z.

Problem: field element packing is always little-endian
// Bits provides raw (unchecked but fast) access to x by returning its
// absolute value as a little-endian Word slice. The result and x share
// the same underlying array.
// Bits is intended to support implementation of missing low-level Int
// functionality outside this package; it should be avoided otherwise.
func (x *Int) Bits() []Word {
	return x.abs
}

Faster than Bytes(), and already little-endian!
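For example, a minimal sketch (hypothetical helper inside the field package, assuming 64-bit big.Words and the FieldElement/maskLow51Bits definitions from earlier) of loading a big.Int straight into radix-2^51 limbs via Bits():

func feFromBig(out *FieldElement, n *big.Int) {
	// Copy the little-endian words; assumes 0 <= n < 2^255.
	var buf [4]uint64
	for i, w := range n.Bits() {
		if i < len(buf) {
			buf[i] = uint64(w)
		}
	}
	// Repack 4 x 64-bit words into 5 x 51-bit limbs.
	out[0] = buf[0] & maskLow51Bits
	out[1] = (buf[0]>>51 | buf[1]<<13) & maskLow51Bits
	out[2] = (buf[1]>>38 | buf[2]<<26) & maskLow51Bits
	out[3] = (buf[2]>>25 | buf[3]<<39) & maskLow51Bits
	out[4] = buf[3] >> 12
}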
// IsOnCurve reports whether the given (x,y) lies on the curve.
IsOnCurve(x, y *big.Int) bool

// Add returns the sum of (x1,y1) and (x2,y2)
Add(x1, y1, x2, y2 *big.Int) (x, y *big.Int)

// Double returns 2*(x,y)
Double(x1, y1 *big.Int) (x, y *big.Int)

// ScalarMult returns k*(Bx,By) where k is in big-endian form.
ScalarMult(x1, y1 *big.Int, k []byte) (x, y *big.Int)

// ScalarBaseMult returns k*G, where G is the base point of the group
// and k is an integer in big-endian form.
ScalarBaseMult(k []byte) (x, y *big.Int)
}

https://golang.org/pkg/crypto/elliptic/#Curve
func (curve ed25519Curve) Double(x1, y1 *big.Int) (x, y *big.Int) {
	var p group.ProjectiveGroupElement
	p.FromAffine(x1, y1)
	// Use the special-case DoubleZ1 here because we know Z will be 1.
	return p.DoubleZ1().ToAffine()
}

Specific to Go: Two doubling formulas. Affine conversion makes the tradeoff less clear.
The why is genuinely beyond our scope today. Concept overview: Bernstein. “curves, coordinates, and computations” Deeper: Joye, Yen. “The Montgomery Powering Ladder” Even deeper: Costello, Smith. “Montgomery Curves and their Arithmetic”
known ahead of time, can precompute multiples to speed things up. Usually, you only do this for the basepoint of the curve (think: key generation).
Adam Langley. “Faster curve25519 with precomputation.”
Bernstein, Duif et al. “High-speed high-security signatures.” Section 4.
This is probably best explained in code by dalek: dalek/src/curve.rs#L917
does not expose uint128. Two answers:
1. Write assembly; amd64 provides 64-bit widening multipliers
2. Fight the inliner
Option 2 is a whole other talk.
// mul64x64 multiplies two 64-bit numbers and adds them to two accumulators.
func mul64x64(lo, hi, a, b uint64) (ol uint64, oh uint64) {
	t1 := (a>>32)*(b&0xFFFFFFFF) + ((a & 0xFFFFFFFF) * (b & 0xFFFFFFFF) >> 32)
	t2 := (a&0xFFFFFFFF)*(b>>32) + (t1 & 0xFFFFFFFF)
	ol = (a * b) + lo
	cmp := ol < lo
	oh = hi + (a>>32)*(b>>32) + t1>>32 + t2>>32 + uint64(*(*byte)(unsafe.Pointer(&cmp)))
	return
}
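A brief usage sketch (hypothetical limb values, showing only the accumulator pattern a schoolbook field multiply would use):

a0, b0 := uint64(1)<<50, uint64(3)<<40 // example limb values
a1, b1 := uint64(5)<<45, uint64(7)<<30
var lo, hi uint64
lo, hi = mul64x64(lo, hi, a0, b0) // (lo, hi) += a0*b0
lo, hi = mul64x64(lo, hi, a1, b1) // (lo, hi) += a1*b1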
implementations are in radix51/fe_mul_amd64.s and radix51/fe_square_amd64.s. Things to note:
1. Go uses Plan9 assembly! Have fun finding docs.
2. The Go inliner won’t touch assembly functions. So you need to implement the entire field multiplication in asm, not just the 64->128 multiplies.
3. The build flag `noasm` exists.
with writing and benchmarking Go assembly: PeachPy is a tool for writing platform-agnostic assembly and generating output for your target platform. It supports goasm as an output mode. Damian Gryski wrote a tutorial on using it for Go: https://blog.gopheracademy.com/advent-2016/peachpy/ pprof has modes aimed at benchmarking and exploring compiler output.
doesn’t know about them. You end up using BYTE literals, e.g.

BYTE $0xC5; BYTE $0xFD; BYTE $0xEF; BYTE $0xC0 // VPXOR ymm0, ymm0, ymm0

A full AVX2 example