Oregon Curriculum Network
Discovering Math with Python

Chapter 5: PUBLIC KEY CRYPTOGRAPHY

What we discuss below was a major breakthrough in cryptography. When this cryptosystem first appeared in the UK at GCHQ in the early 1970s, it was kept secret, because the implications had not yet been worked out.

However, with the emergence of the web and the need for businesses to transact securely as strangers, a public key cryptosystem became very appealing. Exchanging secret keys the way we used to would not be nearly as convenient. We needed public key crypto to transact business over the web.

The MIT team with initials R, S, A (Rivest, Shamir, and Adleman) managed to keep the USG (the NSA in particular) from clamping down on their patent, which has since expired. The history is pretty fascinating. Don't forget to read up on PGP (Pretty Good Privacy).

Another PK algorithm is Diffie-Hellman, which has its own way of letting Bob and Alice exchange information in the clear that converts to a symmetric secret by some process opaque to Eve. A modern browser will often negotiate a cipher suite that starts with DH for the public key exchange, followed by AES as the symmetric cipher protecting the payload.
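
Here's a toy sketch of that exchange in Python, with numbers far too small for real security, just to show the shape of it: Eve sees p, g, A and B go by in the clear, but not a or b.

from random import randrange

p, g = 23, 5                  # public: a small prime modulus and a generator
a = randrange(2, p - 1)       # Alice's private number
b = randrange(2, p - 1)       # Bob's private number
A = pow(g, a, p)              # Alice sends A = g**a mod p in the clear
B = pow(g, b, p)              # Bob sends B = g**b mod p in the clear
shared_alice = pow(B, a, p)   # Alice computes g**(a*b) mod p
shared_bob = pow(A, b, p)     # Bob computes the very same number
print(shared_alice == shared_bob)   # True: both now hold a shared symmetric secret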

The thing to remember is how a public process becomes private through the TLS phase of the relationship, the handshake. The client and server agree on some shared secrets, for the purpose of keeping the transaction private from 3rd parties. Of course what happens to the encrypted information at either end, i.e. what Alice and/or Bob does with it, is not included in these models.

The explanation of RSA below is yet another illustration of how "clock arithmetic" lets us work with the infinite permutations of symbols we call plaintext, and yet do invertible things with them, in conjunction with indecipherable keys.

Bob's RSA machine knows any incoming message encrypted with Bob's public key is going to need that key's secret twin, not public, to unlock the payload. Alice never needed Bob's secret, only his public number. The way that public and private pair were produced in the first place is what keeps RSA secure. Throwing computer power at a public key, in an attempt to reverse engineer the corresponding private one, has so far proven fruitless, or at least such is the educated opinion of cryptography experts.

Euler's Theorem

We're now in a position to test Euler's Theorem, which is going to help us understand an important topic: public key cryptography. We want to know what it means and how it's used in RSA, without offering a proof at this juncture.

We saw with our permutation class (which still needs a __pow__ method, by the way) how Bob and Alice might use what's called symmetric key cryptography. This means Alice needs the same secret key Bob used to encrypt a message in order to decrypt it. They might use the Advanced Encryption Standard at this point, with any of three key lengths (128, 192 or 256 bits).

AES won a global contest held by NIST in the late 1990s, with Rijndael judged the winner and later renamed.

In public key cryptography, Bob and Alice both publish a public number, which in the RSA system is a really large number with only two prime factors; call them Bob_N and Alice_N.

When Bob sends a message to Alice, he uses her public key, Alice_N, to encrypt it and only Alice has a corresponding secret key to get the message back. Not even Bob can decrypt his own message.

Let's begin by understanding Euler's Theorem, which says:

$ a^{\varphi(n)} \equiv 1 \mod n$ where $\varphi$ is the totient function, and where a and n are relatively prime (co-prime, strangers).

That $\varphi(n)$ refers to the totient function first introduced in the previous chapter. Euler used the Greek letter phi for it. It tells us the number of totatives (coprimes) for a given number.

Let's get those two functions side by side: one for finding totatives, the other for telling us how many there are.


In [5]:
import math

def totatives(n : int) -> list:
    """get co-primes to n between 0 and n"""
    return [totative for totative in range(n) if math.gcd(totative, n) == 1]

def totient(n):
    """how many totatives have we?"""
    return len(totatives(n))
    
print("Totient of 12:", totient(12))
print("Totient of 100:", totient(100))


Totient of 12: 4
Totient of 100: 40

Below is a single test of Euler's Theorem. $\varphi(100)$ == 40, as we've learned. So any number coprime to 100, raised to the 40th power modulo 100, should be 1.


In [6]:
pow(93, totient(100), 100)  # pow takes a modulus as an optional 3rd argument


Out[6]:
1

Likewise, the code cell below should always return 1, no matter how many times we run it, because we start with a co-prime of 100, satisfying the condition of Euler's Theorem. The totient of 100 is fixed at 40.

The choice function (imported from the random module) is choosing a coprime at random.


In [7]:
from random import choice
coprime = choice(totatives(100)) # choose any one of 100's totatives (always coprime)
pow(coprime, 40, 100)  # raise that totative to the totient power, modulo 100


Out[7]:
1

If you run the above code cell over and over, you'll always get the same answer: 1.

That's what Euler's Theorem asserts.

Let's test it for every totative of 100, why not?


In [8]:
# convert to str for horizontal display
str([pow(coprime, 40, 100) for coprime in totatives(100)])


Out[8]:
'[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]'

Yep, we get 1 every time! Thank you Euler.

Fermat's Little Theorem is in the same ballpark. Check it out: if p is a prime number, then totient(p) = p - 1, so Euler's Theorem specializes to:

$a^{p-1} \equiv 1 \mod{p}$ where gcd(a, p) == 1 (Fermat's Little Theorem)
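
A quick check with the prime 13 (every base from 1 through 12 is coprime to 13, so every one of them should satisfy the theorem):

print([pow(a, 12, 13) for a in range(1, 13)])   # twelve 1s, as Fermat promises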

Euler's Theorem will prove useful to us: if we go one power higher than the totient, we effectively raise our message to the first power, which leaves m unchanged.


In [9]:
from random import choice
coprime = choice(totatives(100)) # choose any one of 100's totatives (always coprime)
result = pow(coprime, 40 + 1, 100)  # one power higher than before
print(coprime, result)  # should always be the same number


13 13

If our totative were some secret, raising it to a power modulo some public key N would encrypt it. To get our totative back, we'll need to raise the result to a further power, so that the combined exponent lands one higher than the totient power (or a multiple thereof).

$(a^{\varphi(n)})^k \cdot a \equiv 1^ka \equiv a \mod{n}$

RSA in a Nutshell

The basic idea of RSA is that raising a message m to some power e, modulo N, turns it into an encrypted message c.

You might think we would have a way of going backward. If e is public, say 3 or 7, why not just take the 3rd or 7th root of c, modulo whatever N? So far, no one has published a way to do that once N is large enough. Raising m to the eth power modulo N is what we call a trapdoor function, meaning no practical way of inverting it is known without some extra secret information, despite best efforts.

Some special preparation is needed first, to randomly pad message m, a kind of seeding; otherwise some clever attacks may crack into the Bob-Alice transmission, allowing Eve to listen in.

Think of the message going part way around a circle when we raise it to the eth power. We want it to go the rest of the way around the circle so that it pops out again, intact. Given Euler's Theorem, one more power beyond the totient of N power will do the trick.

We need the inverse of e modulo that totient, in other words. We've looked at code for that already, but will soon need a different method, given the large size of N.

In other words, raising anything to the 1st power leaves it unchanged. A zeroth power gives the multiplicative identity, typically: A**0 == One, whatever One means in that namespace. Raising to the totient power, modulo N, is like raising to the 0th power.

A**1 == A, in contrast.

$A^0 A = A^1 = A$.

Any power equivalent to 1, modulo totient(N), is our goal (thanks to Euler's Theorem), meaning that to decrypt c, we need some d such that (e * d) mod totient(N) == 1. That will be like raising our message to the first power, not changing it at all.

$(m^e)^d = m^{ed} = m^{k \varphi(n) + 1}\equiv m \mod{n}$

One notch beyond the totient power, or any multiple thereof, going around in our circle, is the original m, where we started the whole process by raising m by e.

So we need e's inverse, modulo totient(N). e will need to be coprime to totient(N) to ensure d exists. What makes RSA hard to crack is precisely the difficulty in determining d, even where (N, e) are public. As we'll see below, obtaining d requires knowing N's two prime factors.

The message m will pop right back out at exponent position (e * d), having gone all the way around the circle.

Alice_d is what only Alice has, remember. When Bob uses Alice_N and Alice_e to make c and send it, he's fairly secure in knowing that even if Eve gets the bits, she won't have a way to unscramble them (unless she's secretly the same person as Alice, but then mathematics is no protection against some forms of deception).

Here's the complete round-trip process, from Alice to Bob or Bob to Alice:


In [10]:
from groups import M
N = 3 * 47                  # generate a new N from scratch, per web browser session
m = M(42, N)                # 42 is original message, N a product of 2 primes
e = 7                       # raise to power, which others take as a signal how far to go
c = m ** e                  # encrypted
d = ~M(e, totient(N))       # inverse of e mod totient(3 * 47), kept private
pow(c, d.val)               # getting our message, using secret d


Out[10]:
(42 mod 141)
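
If you don't happen to have the groups module on hand, here's the same round trip sketched with builtins only, brute-forcing d (fine at this toy scale, hopeless at real key sizes):

p, q = 3, 47
N, phi = p * q, (p - 1) * (q - 1)    # phi = totient(N), easy when we know p and q
e = 7                                # public exponent
d = next(k for k in range(1, phi) if (e * k) % phi == 1)  # brute force e's inverse mod phi
m = 42                               # the original message
c = pow(m, e, N)                     # encrypt
print(pow(c, d, N))                  # 42 pops back out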

42 is not a very exciting message to be sending to Alice, however, plus our N is ridiculously small and very easy to factor. What makes RSA hard to crack is that we need those two prime factors of N to compute its totient (as counting totatives won't do) and thereby compute e's inverse, i.e. d, the power we need to raise c to (mod N) to get our m back (the original message).

If you recall Chapter 4, our method for finding the inverse of an M-number involved using brute force. We simply go through all the totatives until we find the right one.

Where huge Ns are involved, this technique is impractical. Is our whole cryptosystem just a pipe dream then? Assembling all the puzzle pieces to make RSA a reality was not easy. Much wine was consumed in the process.

Extended Euclidean Algorithm

What comes to the rescue is called the Extended Euclidean Algorithm (EEA). Let's look at that below. Its purpose is to find two numbers, m and n, such that ma + nb gives the gcd of a and b. This is always possible, even if the gcd is 1.


In [11]:
def xgcd(a,b):
    """
    Extended Euclidean Algorithm (EEA)
    
    Returns (m,n,gcd) such that:  m*a + n*b = gcd(a,b)
    """
    g,u,v = [b,a],[1,0],[0,1]
    while g[1] != 0:
        y = g[0] // g[1]
        g[0],g[1] = g[1],g[0] % g[1]
        u[0],u[1] = u[1],u[0] - y*u[1]
        v[0],v[1] = v[1],v[0] - y*v[1]
    m = v[0]%b
    gcd = (m*a)%b
    n = (gcd - m*a)//b   # integer division keeps n an int
    return (m,n,gcd)

For example, we know 97 and 100 are coprime (strangers), so what m and n will work in the above equation?


In [12]:
m, n, gcd = xgcd(97, 100)
print("m: ", m)
print("n: ", n)
print(m*97 + n*100)


m:  33
n:  -32
1

That's another way of saying M(97, 100) has an inverse of M(33, 100), since once we subtract away the 100s (however many), we're left with 1.

Let's check that with our "brute force" method from the last chapter:


In [13]:
from groups import M
def inverse(n : M, elems : set) -> M:
    for m in elems:
        if (n * m).val == 1:
            return m

a = M(97, 100)
elems = {M(x, 100) for x in totatives(100)}  # set comprehension
m = inverse(a, elems)
m


Out[13]:
(33 mod 100)

To make these computations a little more clear, let's define inverse to take our coprime and modulus, and return the multiplicative inverse of our coprime. Remember, we were using brute force to find the inverse previously, knowing that every coprime has its inverse. Now we have a way of finding this inverse without trying billions of candidates.


In [14]:
def inverse(a, N):
    """
    If gcd(a, N) == 1, then (inverse(a, N) * a) mod N == 1,
    otherwise, if gcd(a, N) != 1, return 0

    Useful in RSA encryption, for finding d such that
    e*d mod totient(N) == 1
    """
    m, n, gcd = xgcd(a, N)  # m is the inverse of a mod N (when gcd == 1)
    return (gcd==1) * m

print(inverse(7, 141))


121

Spending some quality time with both Euclid's Algorithm (EA) for finding the gcd, and the above more elaborate method (EEA), is always good practice.

Their coding has gotten very clever over the years, as coding languages have evolved and become more expressive.

However these methods have their historical roots in the more distant past, before people programmed in any computer language.
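
For contrast with the EEA above, here is plain EA in one of its most compact modern forms; the underlying method is over two thousand years old:

def gcd(a, b):
    """Euclid's Algorithm: keep replacing the pair with (smaller, remainder)"""
    while b:
        a, b = b, a % b
    return a

print(gcd(97, 100), gcd(12, 100))   # 1 4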


In [15]:
M(7, 141) * M(121, 141)


Out[15]:
(1 mod 141)

Another important fact from Number Theory also comes to our rescue. What is the totient of a number? Counting totatives will take forever, practically speaking, once N is large enough. However, if N is the product of two distinct primes p and q, then it will have (p-1)(q-1) totatives. We can test this:


In [16]:
N = 47 * 19
(47-1)*(19 - 1) == totient(N)  # defined above


Out[16]:
True

Why is this true? 47 has 46 totatives, 1 through 46, and 19 has 18 totatives, because they're prime numbers: nothing smaller divides into them evenly (with no remainder), by definition, otherwise we would say they have factors, are composite.

Their product has a totative for each pairing of these two sets: imagine 1-46 across the top and 1-18 down the side, like a pandas DataFrame (see Chapter 1). By the Chinese Remainder Theorem, each cell of that table corresponds to exactly one totative of the newer, bigger product number of only two factors, giving (47-1)(19-1) of them in all.
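
A quick check of this pairing idea, using the small primes 5 and 7 and the totatives function defined above:

p, q = 5, 7
pairs = {(t % p, t % q) for t in totatives(p * q)}   # each totative of 35 -> a (mod 5, mod 7) pair
print(len(pairs), (p - 1) * (q - 1))                 # 24 24: one distinct pair per totative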

Below is a published RSA-number from Wikipedia, already factored for us. If N were that easy to crack open, with a quick lookup in Wikipedia, RSA would not still serve as a first line of defense against unauthorized Eves.

The RSA company held contests to encourage people to try cracking these numbers, and some of them were indeed cracked, leading to the practice of using still longer numbers, which take exponentially longer to factor according to current theories. The discovery of some algorithm to factor giant numbers in little time would revolutionize our vista, as far as cryptosystems go.

An Example

Here's that RSA number, named RSA-210:


In [17]:
p = 435958568325940791799951965387214406385470910265220196318705482144\
524085345275999740244625255428455944579
q = 5625457617268841037562770073044474\
81743876944007510545104946851094548396577479473472146228550799322939273

N = RSA210 =  p * q
t = (p-1)*(q-1)
d = ~M(3, t)     # need a d, such that 3*d mod t == 1
(3 * d.val) % t  # confirm our ability to find inverses with EEA


Out[17]:
1

In [18]:
from binascii import hexlify, unhexlify

phrase = hexlify(b"Able was i ere i saw Elba")
phrase


Out[18]:
b'41626c652077617320692065726520692073617720456c6261'

In [19]:
m = int(phrase, 16)
m


Out[19]:
410424947989148738454394815364866651016047497091321737011809

In [20]:
m = M(m, RSA210)
c = m ** 3                     # encrypted
c


Out[20]:
(69135523462041136185714565774079977390384666301915729148436060818425045084502570834616401113809997102784986010540370435543222688944984776423588149291933710092622905546285285348129 mod 245246644900278211976517663573088018467026787678332759743414451715061600830038587216952208399332071549103626827191679864079776723243005600592035631246561218465817904100131859299619933817012149335034875870551067)

In [21]:
result = pow(c.val, d.val, RSA210)
result


Out[21]:
410424947989148738454394815364866651016047497091321737011809

In [22]:
unhexlify(hex(result)[2:])  # drop the leading 0x from output of hex()


Out[22]:
b'Able was i ere i saw Elba'

We've nearly reached the end of our explication of RSA.

However there's a last puzzle piece to consider, once our goal is to create a composite from large primes: where do those large primes come from?

It's easy enough to string digits together, randomly, but how do we know if it's prime or not? We enter the realm of primality testing.

The Miller-Rabin test takes a candidate large number and tries it against a bunch of randomly chosen bases, running a special test that composites rarely pass.

Primes are pretty common, and don't run out. The distance between consecutive primes is another subject of study.

The idea behind Miller-Rabin is that of a filter. If, after enough "bullets" have been fired, it doesn't yield, we call it "bullet proof" and good enough for our purposes.

More theoretical work goes here, to show that the probability of a composite slipping through as one of p or q can be made vanishingly small, so that such "very probable primes" will do the job.
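
Below is a minimal sketch of the Miller-Rabin filter in plain Python — not the author's primes module, just the idea described above, with the rounds parameter as the number of "bullets" we fire:

from random import randrange

def is_probable_prime(n, rounds=20):
    """Miller-Rabin: False means definitely composite, True means very probably prime"""
    if n < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13):      # catch small cases and obvious composites
        if n % small == 0:
            return n == small
    r, d = 0, n - 1
    while d % 2 == 0:                       # write n - 1 as 2**r * d with d odd
        r += 1
        d //= 2
    for _ in range(rounds):                 # fire `rounds` random "bullets"
        a = randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue                        # this base proves nothing; try another
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                    # a is a witness: n is definitely composite
    return True                             # survived every round: very probably prime

print(is_probable_prime(2**127 - 1))        # True: this Mersenne number really is prime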


In [23]:
from primes import bigppr  # big probable prime algorithm
bigppr()


Working...
Percent chance of being prime: 100.0
Elapsed time: 0.0678099999999997 seconds
Out[23]:
68582413506658043392755169097404112270636329354652253944304096375283344618536438434495286666496598661

So much for bare bones RSA. If you want to develop a truly robust cryptosystem from scratch, then do more reading outside the confines of this particular Notebook. For example, 3 is probably not the best exponent to use at the outset; 65537 is a common choice in practice.

You might be thinking all this coding from scratch is too much work, and surely there are 3rd party packages that make RSA a lot easier to work with. You'd be right. We did all this coding from scratch to gain deeper insights into the mathematics; however, pyCrypto by Dwayne Litzenberger will do the job.

Let's check it out:


In [24]:
from Crypto.PublicKey import RSA
key = RSA.generate( 2048 )  # that's 2048 bits
key


Out[24]:
<_RSAobj @0x109b2a748 n(2048),e,d,p,q,u,private>

The key object has everything you need to run a session. We expect p * q to equal n. Let's try:


In [25]:
key.p * key.q == key.n


Out[25]:
True

Remember, your sender, Bob or Alice, needs to know what e you used, as that's the power they need to raise their message to. (e, N) is, properly speaking, your full public key. Your d (for decrypt) is what you can't let out. That's another vulnerability.
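
To see the textbook math at work with this key object (assuming it exposes e and d the same way it exposes p, q and n), we can drive the raw numbers through pow directly. This is "raw RSA" with no padding, a sketch only; real use goes through the library's padded encryption schemes.

m = int.from_bytes(b"Able was i ere i saw Elba", "big")   # message as a big integer
c = pow(m, key.e, key.n)                                  # encrypt with the public pair (e, N)
print(pow(c, key.d, key.n) == m)                          # True: the private d brings m back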

In other words, no warranty is expressed or implied with regard to taking this code and using it within some mission-critical system. That being said, have fun with it.

Also, despair not that your delving into Group and Number Theory and so on has been solely for your understanding of RSA. The AES algorithm, for example, likewise depends on the properties of fields and groups, mainly invertibility, the ability to run backward, and yet with enough entropy added to make the precise key needed (no other key could be "close enough").

In AES, we work with a field of 256 binary polynomials of degree 7 or less, multiplying them modulo a fixed 8th degree modulus. Replacing the byte signature of such a polynomial (which degrees are present; coefficients are either 1 or 0) with the byte signature of its multiplicative inverse is the beginning of our mixing process.
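
Here's a small sketch of that byte-level arithmetic, assuming the standard AES modulus x^8 + x^4 + x^3 + x + 1 (hex 0x11B); per FIPS-197, the inverse of the byte 0x53 works out to 0xCA:

def gf_mul(a, b, modulus=0x11B):
    """multiply two bytes as binary polynomials, reducing by the AES modulus"""
    result = 0
    while b:
        if b & 1:
            result ^= a        # XOR is polynomial addition with coefficients mod 2
        a <<= 1
        if a & 0x100:
            a ^= modulus       # knock the degree back under 8
        b >>= 1
    return result

print(hex(next(x for x in range(1, 256) if gf_mul(0x53, x) == 1)))   # 0xca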

Another reason to bring up AES at this juncture is to wrap this all up as the standard TLS process, the means whereby everyday web browsers keep private stuff private.

Back to Chapter 4: Clock Arithmetic
Continue to Chapter 6: Vectors in Space
Introduction / Table of Contents