diff --git a/_posts/Lecture Notes/Modern Cryptography/2023-09-07-otp-stream-cipher-prgs.md b/_posts/Lecture Notes/Modern Cryptography/2023-09-07-otp-stream-cipher-prgs.md
index ea14680..3aedf35 100644
--- a/_posts/Lecture Notes/Modern Cryptography/2023-09-07-otp-stream-cipher-prgs.md
+++ b/_posts/Lecture Notes/Modern Cryptography/2023-09-07-otp-stream-cipher-prgs.md
@@ -171,6 +171,8 @@ Since the adversary can see the ciphertext, this kind of relation leaks some inf
 
 Also, the key is (at least) as long as the message. This is why OTP is rarely used today. When sending a long message, two parties must communicate a very long key that is as long as the message, *every single time*! This makes it hard to manage the key.
 
+## Shannon's Theorem
+
 So is there a way to reduce the key size without losing perfect secrecy? Sadly, no. In fact, the key space must be as least as large as the message space. This is a requirement for perfectly secret schemes.
 
 > **Theorem**. If $(G, E, D)$ is a perfectly secret encryption scheme, then $\lvert \mathcal{K} \rvert \geq \lvert \mathcal{M} \rvert$.
 
@@ -290,7 +292,7 @@ We can deduce that if a PRG is predictable, then it is insecure.
 
 *Proof*. Let $\mathcal{A}$ be an efficient adversary (next bit predictor) that predicts $G$. Suppose that $i$ is the index chosen by $\mathcal{A}$. With $\mathcal{A}$, we construct a statistical test $\mathcal{B}$ such that $\mathrm{Adv}_\mathrm{PRG}[\mathcal{B}, G]$ is non-negligible.
 
-![mc-01-prg-game.png](../../../assets/img/posts/Lecture%20Notes/Modern%20Cryptography/mc-01-prg-game.png)
+![mc-01-prg-game.png](../../../assets/img/posts/Lecture%20Notes/Modern%20Cryptography/mc-01-prg-game.png#)
 
 1. The challenger PRG will send a bit string $x$ to $\mathcal{B}$.
    - In experiment $0$, PRG gives pseudorandom string $G(k)$.
@@ -316,7 +318,7 @@ The theorem implies that if next bit predictors cannot distinguish $G$ from true
 
 To motivate the definition of semantic security, we consider a **security game framework** (attack game) between a **challenger** (ex. the creator of some cryptographic scheme) and an **adversary** $\mathcal{A}$ (ex. attacker of the scheme).
 
-![mc-01-ss.png](../../../assets/img/posts/Lecture%20Notes/Modern%20Cryptography/mc-01-ss.png)
+![mc-01-ss.png](../../../assets/img/posts/Lecture%20Notes/Modern%20Cryptography/mc-01-ss.png#)
 
 > **Definition.** Let $\mathcal{E} = (G, E, D)$ be a cipher defined over $(\mathcal{K}, \mathcal{M}, \mathcal{C})$. For a given adversary $\mathcal{A}$, we define two experiments $0$ and $1$. For $b \in \lbrace 0, 1 \rbrace$, define experiment $b$ as follows:
 >
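
Not part of the patch above: the hunk at old line 290 sketches how a next bit predictor $\mathcal{A}$ is turned into a statistical test $\mathcal{B}$, namely $\mathcal{B}$ runs $\mathcal{A}$ on the first $i$ bits of its challenge string and outputs $1$ exactly when $\mathcal{A}$'s guess matches bit $i$. A minimal Python sketch of that reduction is below, using a deliberately weak toy PRG so the advantage is visible; `toy_prg`, `next_bit_predictor`, `statistical_test`, and all parameter values are invented for this illustration and do not appear in the notes.

```python
import secrets

SEED_LEN = 16   # seed length s (toy value)
OUT_LEN = 64    # output length n (toy value)
I = 40          # index i chosen by the predictor (toy value)

def toy_prg(seed):
    """Deliberately weak PRG G: {0,1}^s -> {0,1}^n that just repeats the seed."""
    return [seed[j % SEED_LEN] for j in range(OUT_LEN)]

def next_bit_predictor(prefix):
    """A: given the first I bits of G's output, guess bit I.
    For toy_prg, bit I always equals bit I - SEED_LEN, so A is always right."""
    return prefix[I - SEED_LEN]

def statistical_test(x):
    """B built from A: output 1 ("pseudorandom") iff A's guess of bit I is correct."""
    return 1 if next_bit_predictor(x[:I]) == x[I] else 0

def estimate_advantage(trials=10_000):
    """Empirical Adv_PRG[B, G] = |Pr[B(G(k)) = 1] - Pr[B(r) = 1]|."""
    win0 = win1 = 0
    for _ in range(trials):
        seed = [secrets.randbits(1) for _ in range(SEED_LEN)]
        win0 += statistical_test(toy_prg(seed))                                   # experiment 0
        win1 += statistical_test([secrets.randbits(1) for _ in range(OUT_LEN)])   # experiment 1
    return abs(win0 - win1) / trials

print(estimate_advantage())   # about 0.5: non-negligible, so the toy G is insecure
```

Running the sketch prints an empirical advantage close to $1/2$, which is non-negligible, matching the claim in the proof that a predictable PRG cannot be a secure PRG.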