\algrenewcommand\algorithmicfunction{\textbf{Machine}}
\renewcommand{\O}{\ensuremath{\mathcal{O}}}
\renewcommand{\P}{\cclass{P}}
\renewcommand{\Enc}{\mathsf{Enc}}
\renewcommand{\Dec}{\mathsf{Dec}}
\renewcommand{\sk}{\mathsf{sk}}
\chapter{Secure Computation}
\section{Introduction}
Secure multiparty computation considers the problem of several parties
computing a joint function of their separate, private inputs without revealing
any information about these inputs beyond what is leaked by the result
of the computation alone. This setting is well motivated and captures many different
applications. Considering some of these applications will provide intuition
about how security should be defined for secure computation:
\begin{description}
\item[Voting:] Electronic voting can be thought of as a multiparty computation
between $n$ players: the voters. Each voter's input is their choice $b \in \{0,1\}$
(we restrict ourselves to the binary-choice setting without loss of generality), and the function
they wish to compute is the majority function.
Now consider what happens when only one user votes: their input is trivially
revealed as the output of the computation. What does privacy of inputs mean
in this scenario?
\item[Searchable Encryption:] Searchable encryption schemes allow clients
to store their data with a server, and subsequently grant the server tokens
to conduct specific searches. However, most schemes do not consider access-pattern
leakage, which tells the server potentially valuable information
about the underlying plaintext. How do we model all the different kinds
of information that is leaked?
\end{description}
From these examples we see that defining security is tricky, with lots of
potential edge cases to consider. We want to ensure that no party can learn
anything more from the secure computation protocol than it can from just its
input and the result of the computation. To formalize this, we adopt the
\textbf{real/ideal paradigm}.
\section{Real/Ideal Paradigm}
\paragraph{Notation.}
Suppose there are $n$ parties, and party $P_i$ has access to some data $x_i$. They are trying to compute some function of their inputs $f(x_1, \dotsc, x_n)$. The goal is to do this securely: even if some parties are corrupted, no one should learn more than is strictly necessitated by the computation.
\paragraph{Real World.} In the real world, the $n$ parties execute a protocol $\Pi$
to compute the function $f$. This protocol can involve multiple rounds of
interaction. %Each party can additionally have some randomness.
The real world adversary $\RealAdv$ can corrupt arbitrarily many (but not all) parties.
\paragraph{Ideal World.} In the ideal world, an angel helps in the
computation of $f$:
each party sends their input to the angel and receives the output of the computation $f(x_1, \dotsc, x_n)$.
Here the ideal world adversary $\IdealAdv$ can again corrupt arbitrarily many (but not all) parties.
To model malicious adversaries, we need to modify the ideal world model as follows.
Some parties are honest, and each honest party $P_i$ simply sends $x_i$ to the angel. The other parties are corrupted and are under control of the adversary $\IdealAdv$. The adversary chooses an input $x_i'$ for each corrupted party $P_i$ (where possibly $x_i' \neq x_i$) and that party then sends $x_i'$ to the angel. The angel computes a function $f$ of the values she receives (for example, if only party 1 is honest, then the angel computes $f(x_1, x_2', x_3', \dotsc, x_n')$) in order to obtain a tuple $(y_1, \dotsc, y_n)$.
She then sends the outputs $y_i$ of the corrupted parties to the adversary, who gets to decide whether or not the honest parties will receive their responses from the angel; the angel obliges. Each honest party $P_i$ then outputs $y_i$ if it receives $y_i$ from the angel and $\perp$ otherwise, and the corrupted parties output whatever the adversary tells them to.
\paragraph{Definition of Security.}
A protocol $\Pi$ is secure against computationally bounded adversaries if for every PPT adversary $\RealAdv$ in the real world, there exists a PPT adversary $\IdealAdv$ in the ideal world such that for all tuples of bit strings $(x_1, \dotsc, x_n)$, we have
\[ \mathrm{Real}_{\Pi, \RealAdv}(x_1, \dotsc, x_n) \stackrel{c}{\simeq} \mathrm{Ideal}_{F,\IdealAdv}(x_1, \dotsc, x_n) \]
where the left-hand side denotes the output distribution induced by $\Pi$ running with $\RealAdv$, and the right-hand side denotes the output distribution induced by running the ideal protocol $F$ with $\IdealAdv$.
The ideal protocol is either the original one described for semi-honest adversaries, or the modified one described for malicious adversaries.
%We require that the views of the parties
%in each of the scenarios be identical, i.e.\ that a real-world execution of the
%protocol $\Pi$ should not leak any information not leaked by the ideal-world
%execution. Hence, the parties can only learn what they can infer from their
%inputs and the output $f(\InputA, \InputB)$. More formally, assuming $\RealAdv$
%corrupts one party (say $\PartyA$, wlog), we define random variables
%$\RealVar_{\Pi, \RealAdv}(\InputA, \InputB) = \RealAdv(\InputA, r_1, \text{messages
%sent in } \Pi)$ and $\IdealVar_{F, \IdealAdv}(\InputA, \InputB) = \IdealAdv(\InputA,
%f(\InputA,\InputB))$. These random variables represent the views of the
%adversary in each of the two settings. Our definition of security thus requires
%that
%
%\begin{equation*}
%\RealVar_{\Pi, \RealAdv}(\InputA, \InputB) \indis \IdealVar_{F, \IdealAdv}(x_1, x_2).
%\end{equation*}
\paragraph{Assumptions.} We have brushed over some details of the above setting.
Below we state these assumptions explicitly:
\begin{enumerate}
\item \textbf{Communication channel:} We assume that the communication channel
between the involved parties is completely insecure with respect to privacy: it does
not hide the contents of messages. However, we assume that it is authenticated:
the adversary can drop messages, but if a message is delivered, then
the receiver knows its origin.
\item \textbf{Corruption model:} We have different models of how and when the
adversary can corrupt parties involved in the protocol:
\begin{itemize}
\item
\emph{Static:} The adversary chooses which parties to corrupt before the
protocol execution starts, and during the protocol, the malicious parties
remain fixed.
\item
\emph{Adaptive:} The adversary can corrupt parties dynamically during
the protocol execution, but the simulator can do the same.
\item
\emph{Mobile:} Parties corrupted by the adversary can be ``uncorrupted''
at any time during the protocol execution at the adversary's discretion.
\end{itemize}
\item \textbf{Fairness:} The protocols we consider are not ``fair'', i.e.,
the adversary can cause corrupted parties to abort arbitrarily. This can
mean that one party does not get its share of the output of the computation.
\item \textbf{Bounds on corruption:} In some scenarios, we place upper bounds
on the number of parties that the adversary can corrupt.
\item \textbf{Power of the adversary:} We consider primarily two types of
adversaries:
\begin{itemize}
\item \emph{Semi-honest adversaries:} Corrupted parties follow the protocol
execution $\Pi$ honestly, but attempt to learn as much information as they
can from the protocol transcript.
\item \emph{Malicious adversaries:} Corrupted parties can deviate arbitrarily
from the protocol $\Pi$.
\end{itemize}
\item \textbf{Standalone vs.\ Multiple execution:} In some settings, protocols
can be executed in isolation; only one instance of a particular protocol
is ever executed at any given time. In other settings, many different protocols
can be executed concurrently. This can compromise security.
\end{enumerate}
\section{Oblivious transfer}
\emph{Rabin's oblivious transfer} sets out to accomplish the following special task of two-party secure computation. The sender has a bit $s \in \{0,1\}$. She places the bit in a box. Then the box reveals the bit to the receiver with probability 1/2, and reveals $\perp$ to the receiver with probability 1/2. The sender cannot know whether the receiver received $s$ or $\perp$, and the receiver cannot have any information about $s$ if they receive $\perp$.
\subsection{1-out-of-2 oblivious transfer}
\emph{1-out-of-2 oblivious transfer} sets out to accomplish the following related task. The sender has two bits $s_0, s_1 \in \{0,1\}$ and the receiver has a bit $c \in \{0,1\}$. The sender places the pair $(s_0, s_1)$ into a box, and the receiver places $c$ into the same box. The box then reveals $s_c$ to the receiver, and reveals $\perp$ to the sender (in order to inform the sender that the receiver has placed his bit $c$ into the box and has been shown $s_c$). The sender cannot learn any information about $c$, so she cannot know anything about which of her bits the receiver received. Also, the receiver cannot know any information about $s_{1-c}$. This exchange can be modeled by a function $f$, such that $f((s_0, s_1), c) = (\perp, s_c)$. The sender sends the input $(s_0, s_1)$ and receives the output $\perp$. The receiver sends the input $c$ and receives the output $s_c$. We will assume that the two parties follow this protocol, and we will later change to a malicious setting.
\begin{lemma}
A system implementing 1-out-of-2 oblivious transfer can be used to implement Rabin's oblivious transfer.
\end{lemma}
\proof
The sender has a bit $s$. She randomly samples bits $b \in \{0,1\}$ and $r \in \{0,1\}$, and the receiver randomly samples a bit $c \in \{0,1\}$. If $b = 0$, the sender defines $s_0 = s$ and $s_1 = r$; otherwise, if $b = 1$, she defines $s_0 = r$ and $s_1 = s$. She then places the pair $(s_0, s_1)$ into the 1-out-of-2 oblivious transfer box. The receiver places his bit $c$ into the same box, and then the box reveals $s_c$ to him and $\perp$ to the sender. Notice that if $b = c$, then $s_c = s$, and otherwise $s_c = r$. Once $\perp$ is revealed to the sender, she sends $b$ to the receiver. The receiver checks whether or not $b = c$. If $b = c$, then he knows that the bit revealed to him was $s$. Otherwise, he knows that the bit revealed to him was the nonsense bit $r$ and he regards it as $\perp$. \\
It is easy to see that this procedure satisfies the security requirements of Rabin's oblivious transfer protocol. Indeed, as we saw above, $s_c = s$ if and only if $b = c$, and since the sender knows $b$, we see that knowledge of whether or not the bit $s_c$ received by the receiver is equal to $s$ is equivalent to knowledge of $c$, and the security requirements of 1-out-of-2 oblivious transfer prevent the sender from knowing $c$. Also, if the receiver receives $r$ (or, equivalently, $\perp$), then knowledge of $s$ is knowledge of the bit that was not revealed to him by the box, which is again prevented by the security requirements of 1-out-of-2 oblivious transfer. $\qed$
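To make the reduction concrete, here is a minimal Python sketch of the proof above; the 1-out-of-2 box is modeled as an idealized trusted function (the helper names \texttt{ot2} and \texttt{rabin\_from\_ot2} are ours), and \texttt{None} plays the role of $\perp$.
\begin{verbatim}
# Toy model of the reduction: Rabin OT from one idealized 1-out-of-2 OT.
import secrets

def ot2(pair, c):
    """Idealized 1-out-of-2 OT box: receiver learns pair[c] only."""
    return pair[c]

def rabin_from_ot2(s):
    b, r = secrets.randbits(1), secrets.randbits(1)  # sender's coins
    c = secrets.randbits(1)                          # receiver's coin
    pair = (s, r) if b == 0 else (r, s)              # s sits in slot b
    s_c = ot2(pair, c)
    # The sender now reveals b; the receiver keeps s_c iff b == c.
    return s_c if b == c else None                   # None stands for perp

# Over many runs the receiver learns s roughly half of the time.
outs = [rabin_from_ot2(1) for _ in range(1000)]
assert outs.count(1) > 0 and outs.count(None) > 0
\end{verbatim}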
\begin{lemma}
A system implementing Rabin's oblivious transfer can be used to implement 1-out-of-2 oblivious transfer.
\end{lemma}
\proofsketch
The sender has two bits $s_0, s_1 \in \{0,1\}$ and the receiver has a single bit $c$. The sender samples $3n$ random bits $x_1, \dotsc, x_{3n} \in \{0,1\}$. Each bit is placed into its own Rabin oblivious transfer box. The $i$th box then reveals either $x_i$ or else $\perp$ to the receiver. Let
\[ S := \{i \in \{1, \dotsc, 3n\} : \text{the receiver knows } x_i\}. \]
The receiver picks two sets $I_0, I_1 \subseteq \{1, \dotsc, 3n\}$ such that $\# I_0 = \# I_1 = n$, $I_c \subseteq S$ and $I_{1-c} \subseteq \{1, \dotsc, 3n\} \setminus S$. This is possible except with probability negligible in $n$. He then sends the pair $(I_0, I_1)$ to the sender. The sender then computes $t_j= \left(\bigoplus_{i \in I_j}x_i \right) \oplus s_j$ for both $j \in \{0,1\}$ and sends $(t_0, t_1)$ to the receiver. \\
Notice that the receiver can uncover $s_c$ from $t_c$ since he knows $x_i$ for all $i \in I_c$, but cannot uncover $s_{1-c}$. One can show that the security requirement of Rabin's oblivious transfer implies that this system satisfies the security requirement necessary for 1-out-of-2 oblivious transfer. $\qed$ \\
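A toy Python sketch of this direction follows; the Rabin boxes are modeled as idealized 50/50 deliveries, and the retry loop papers over the event, negligible in $n$, that suitable index sets fail to exist (all helper names are ours).
\begin{verbatim}
# Toy model: 1-out-of-2 OT from 3n idealized Rabin OT boxes.
import secrets
from functools import reduce
from operator import xor

def rabin_ot(bit):
    """Idealized Rabin OT box: receiver gets bit or None, each w.p. 1/2."""
    return bit if secrets.randbits(1) else None

def ot2_from_rabin(s0, s1, c, n=16):
    while True:  # toy: retry the negligible-probability failure event
        xs = [secrets.randbits(1) for _ in range(3 * n)]  # sender's bits
        rcv = [rabin_ot(x) for x in xs]                   # receiver's view
        S = [i for i, v in enumerate(rcv) if v is not None]
        notS = [i for i, v in enumerate(rcv) if v is None]
        if len(S) >= n and len(notS) >= n:
            break
    I = [None, None]
    I[c], I[1 - c] = S[:n], notS[:n]     # receiver's sets I_0, I_1
    # Sender masks each secret with the XOR of the bits in its set.
    t = [reduce(xor, (xs[i] for i in I[j])) ^ (s0, s1)[j] for j in (0, 1)]
    # Receiver unmasks t[c]: he knows x_i for every i in I_c.
    return t[c] ^ reduce(xor, (rcv[i] for i in I[c]))

assert ot2_from_rabin(1, 0, 0) == 1 and ot2_from_rabin(1, 0, 1) == 0
\end{verbatim}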
We will see below that length-preserving one-way trapdoor permutations can be used to realize 1-out-of-2 oblivious transfer.
\begin{theorem}
The following protocol realizes 1-out-of-2 oblivious transfer in the presence of computationally bounded and semi-honest adversaries.
\begin{enumerate}
\item The sender, who has two bits $s_0$ and $s_1$, samples a random length-preserving one-way trapdoor permutation $(f, f^{-1})$ and sends $f$ to the receiver. Let $b(\cdot)$ be a hard-core bit for $f$.
\item The receiver, who has a bit $c$, randomly samples an $n$-bit string $x_c \in \{0,1\}^n$ and computes $y_c = f(x_c)$. He then samples another random $n$-bit string $y_{1-c} \in \{0,1\}^n$, and then sends $(y_0, y_1)$ to the sender.
\item The sender computes $x_0 := f^{-1}(y_0)$ and $x_1 := f^{-1}(y_1)$. She computes $b_0 := b(x_0) \oplus s_0$ and $b_1 := b(x_1) \oplus s_1$, and then sends the pair $(b_0, b_1)$ to the receiver.
\item The receiver knows $c$ and $x_c$, and can therefore compute $s_c = b_c \oplus b(x_c)$.
\end{enumerate}
\end{theorem}
\proof
Correctness is clear from the protocol.
For security, on the sender's side: since $f$ is a length-preserving permutation, $(y_0, y_1)$ is distributed exactly as two uniformly random strings regardless of $c$, hence she cannot learn anything about $c$.
On the receiver's side, guessing $s_{1-c}$ correctly is equivalent to guessing the hard-core bit $b(f^{-1}(y_{1-c}))$.
\qed
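A toy instantiation in Python: RSA with tiny hard-coded primes stands in for the random length-preserving trapdoor permutation, and the least-significant bit is used as the hard-core bit (both choices are illustrative assumptions; the parameters are far from secure).
\begin{verbatim}
# Toy sketch of the protocol above with RSA as the trapdoor permutation.
import secrets

P, Q, E = 1009, 1013, 17              # tiny toy parameters (assumed)
N = P * Q
D = pow(E, -1, (P - 1) * (Q - 1))     # the trapdoor

f = lambda x: pow(x, E, N)            # the permutation (f, f^{-1})
f_inv = lambda y: pow(y, D, N)
b = lambda x: x & 1                   # hard-core bit (assumption: LSB)

def ot2_from_tdp(s0, s1, c):
    # Receiver: knows the preimage of y_c, but not of y_{1-c}.
    x_c = secrets.randbelow(N)
    y = [0, 0]
    y[c] = f(x_c)
    y[1 - c] = secrets.randbelow(N)   # random image, preimage unknown
    # Sender: inverts both and masks each secret with a hard-core bit.
    masked = [b(f_inv(y[0])) ^ s0, b(f_inv(y[1])) ^ s1]
    # Receiver: unmasks the chosen secret only.
    return masked[c] ^ b(x_c)

assert ot2_from_tdp(0, 1, 0) == 0 and ot2_from_tdp(0, 1, 1) == 1
\end{verbatim}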
\subsection{1-out-of-4 oblivious transfer}
With messages $m_{00},\ m_{01},\ m_{10},$ and $m_{11}$, we describe how to implement a 1-out-of-4 oblivious transfer (OT) using 1-out-of-2 OT:\@
\begin{enumerate}
\item
The sender $\PartyA$ samples 6 random values $(S_0, S_1, S_{00}, S_{01}, S_{10}, S_{11}) \gets \bits^6$. Note that each of these values is sampled uniformly at random so as to not leak any information about the messages.
\item
$\PartyA$ computes
\begin{align*}
\alpha_{00} &= S_0 \xor S_{00} \xor m_{00}\\
\alpha_{01} &= S_0 \xor S_{01} \xor m_{01}\\
\alpha_{10} &= S_1 \xor S_{10} \xor m_{10}\\
\alpha_{11} &= S_1 \xor S_{11} \xor m_{11}
\end{align*}
It sends these values to $\PartyB$.
\item
The parties engage in three 1-out-of-2 oblivious transfer protocols for the following
message pairs: $(S_0, S_1)$, $(S_{00}, S_{01})$, and $(S_{10}, S_{11})$. The receiver's input for the first OT is his first choice bit; for the second and third OTs it is
his second choice bit.
\item
The receiver, with choice bits $c_1 c_2$, now holds $S_{c_1}$, $S_{0 c_2}$, and $S_{1 c_2}$, and can therefore decrypt exactly one ciphertext: $m_{c_1 c_2} = \alpha_{c_1 c_2} \xor S_{c_1} \xor S_{c_1 c_2}$ (see the sketch after this list).
\end{enumerate}
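Here is a minimal Python sketch of this construction, with 1-out-of-2 OT again modeled as an idealized function (helper names are ours):
\begin{verbatim}
# Toy model: 1-out-of-4 OT from three idealized 1-out-of-2 OTs.
import secrets

def ot2(pair, c):
    """Idealized 1-out-of-2 OT box: receiver learns pair[c] only."""
    return pair[c]

def ot4(m, c1, c2):
    """Sender holds m[(b1, b2)]; receiver holds choice bits c1, c2."""
    S0, S1 = secrets.randbits(1), secrets.randbits(1)
    S = {(b1, b2): secrets.randbits(1) for b1 in (0, 1) for b2 in (0, 1)}
    # Sender masks each message with its two corresponding keys.
    alpha = {(b1, b2): (S1 if b1 else S0) ^ S[b1, b2] ^ m[b1, b2]
             for b1 in (0, 1) for b2 in (0, 1)}
    # Three 1-out-of-2 OTs deliver exactly the keys the receiver needs.
    k1 = ot2((S0, S1), c1)
    k2_0 = ot2((S[0, 0], S[0, 1]), c2)   # OT on (S_00, S_01)
    k2_1 = ot2((S[1, 0], S[1, 1]), c2)   # OT on (S_10, S_11)
    k2 = k2_1 if c1 else k2_0            # keep the key matching c1
    return alpha[c1, c2] ^ k1 ^ k2

msgs = {(0, 0): 1, (0, 1): 0, (1, 0): 1, (1, 1): 1}
assert all(ot4(msgs, c1, c2) == msgs[c1, c2]
           for c1 in (0, 1) for c2 in (0, 1))
\end{verbatim}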
\subsection{Computation for Input Sharing}
Consider $n$ parties $P_i$, $i \in \{1, \dots, n\}$, that are trying to compute shares of $\gamma$. Consider inputs $\alpha = \alpha_1 \xor \dots \xor \alpha_n$ and $\beta = \beta_1 \xor \dots \xor \beta_n$ such that for all $i \in \{1, \dots, n\}$, party $P_i$ only has access to $\alpha_i$ and $\beta_i$.
The XOR operation can be accomplished locally on the shares, since $\gamma = \alpha \xor \beta = (\alpha_1 \xor \dots \xor \alpha_n) \xor (\beta_1 \xor \dots \xor \beta_n) = (\alpha_1 \xor \beta_1) \xor (\alpha_2 \xor \beta_2) \xor \dots \xor (\alpha_n \xor \beta_n)$.
The NOT operation, where $\alpha$ is the input and $1 - \alpha$ is the output, can also be accomplished locally: the first party simply flips its share by calculating $1 - \alpha_1$.
The AND operation requires interaction. Note that
\[ \gamma = \alpha \cdot \beta = \Big( \bigoplus_i \alpha_i \Big) \cdot \Big( \bigoplus_j \beta_j \Big) = \bigoplus_i \alpha_i \beta_i \xor \bigoplus_{1 \leq i < j \leq n} (\alpha_i \beta_j \xor \alpha_j \beta_i). \]
Each party $P_i$ can compute the term $\alpha_i \beta_i$ on its own. For each ``cross term'' $\alpha_i \beta_j \xor \alpha_j \beta_i$, the parties $P_i$ and $P_j$ need to work together: $P_i$ samples a random bit $z_{ij}$ as its share, and $P_j$ must obtain the share $z_{ji} = \alpha_i \beta_j \xor \alpha_j \beta_i \xor z_{ij}$. This is exactly a 1-out-of-4 oblivious transfer in which $P_i$ is the sender and $P_j$'s choice bits are $(\alpha_j, \beta_j)$: if $\alpha_j = \beta_j = 0$, then $z_{ji} = z_{ij}$; if $\alpha_j = 0$ and $\beta_j = 1$, then $z_{ji} = \alpha_i \xor z_{ij}$; if $\alpha_j = 1$ and $\beta_j = 0$, then $z_{ji} = \beta_i \xor z_{ij}$; and if $\alpha_j = \beta_j = 1$, then $z_{ji} = \alpha_i \xor \beta_i \xor z_{ij}$. The bits $z_{ij}$ and $z_{ji}$ are then XOR shares of the cross term.
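A short Python sketch of one cross-term computation; the 1-out-of-4 OT is again modeled as an idealized box, as in the sketch of the previous subsection:
\begin{verbatim}
# P_i and P_j derive XOR shares of the cross term a_i*b_j ^ a_j*b_i.
import secrets

def ot4(table, c1, c2):
    """Idealized 1-out-of-4 OT box (cf. the previous subsection)."""
    return table[c1, c2]

def cross_term_shares(a_i, b_i, a_j, b_j):
    z_ij = secrets.randbits(1)             # P_i's share: a random bit
    # P_i (sender) tabulates z_ij ^ a_i*b ^ a*b_i for each possible
    # input pair (a, b) = (alpha_j, beta_j) of P_j.
    table = {(a, b): z_ij ^ (a_i & b) ^ (a & b_i)
             for a in (0, 1) for b in (0, 1)}
    z_ji = ot4(table, a_j, b_j)            # P_j selects its row via OT
    return z_ij, z_ji

for a_i in (0, 1):
    for b_i in (0, 1):
        z_ij, z_ji = cross_term_shares(a_i, b_i, 1, 1)
        assert z_ij ^ z_ji == (a_i & 1) ^ (1 & b_i)
\end{verbatim}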
Note that the goal here is not to solve the problem of a party failing to reveal its results, for instance because its computer dies. In fact, it may be unclear what ``solving'' this would even mean: in the context of voting, for example, it is unclear whether such a failure should count as a tie between candidates or as something else.
\section{Yao's Garbled Circuit}
%\input{HWsolution.tex}
%\part{Yao's Garbled Circuit}
% ===========
\section{Setup}
Yao's garbled circuits construction was presented as a solution to Yao's Millionaires' Problem,
which asks whether
two millionaires can compete for the bragging rights of being the richer one
without revealing their wealth to each other.
It started the area of secure computation.
We will present a solution for the two-party problem;
it can be extended to a polynomial number of parties
using the techniques from the last lecture.
The solution we saw previously needed an interaction for each AND gate.
Yao's solution requires only one message,
so it gives a protocol with a constant number of rounds.
We present a solution only for semi-honest security.
This can be amplified to malicious security,
and there are more efficient ways of doing so than the one we saw last lecture.
\subsection{Secure Computation}
Recall our definition of secure computation.
We define ideal and real worlds.
Security is defined to hold if
anything an attacker can achieve in the real world
can also be achieved by an ideal attacker in the ideal world.
We define the ideal world to have the properties that we desire.
For security to hold these properties must also hold in the real world.
\subsection{$(\Garble, \Eval)$}
We will provide a definition, similar to how we define encryption, that allows us to avoid dealing with simulators all the time.
Yao's garbled circuit scheme is defined by two efficient algorithms $(\Garble, \Eval)$. Let the circuit $C$ have $n$ input wires.
$\Garble$ produces the garbled circuit and two labels for each input wire, one for the value 0 and one for the value 1 on that wire; the labels act like encryption keys.
\[
(\tilde{C}, \{\ell_{i,b}\}_{i \in [n], b \in \{0,1\}}) \leftarrow \Garble(1^k, C)
\]
To evaluate the circuit on a single input we must choose a value for each of the $n$ input wires.
Given $n$ of the $2n$ input labels, $\Eval$ can evaluate the circuit on those labels and obtain the circuit's output.
\[
C(x) \leftarrow \Eval(\tilde{C}, \{\ell_{i, x_i}\}_{i \in [n]})
\]
\paragraph{Correctness}
Correctness is as usual: if you garble honestly, evaluation should produce the correct result.
\[
\forall C, x \quad
\Pr\left[ C(x) \ne \Eval(\tilde{C}, \{\ell_{i, x_i}\}_{i \in [n]}) \;\middle|\; (\tilde{C}, \{\ell_{i,b}\}) \leftarrow \Garble(1^k, C) \right] = 0
\]
\paragraph{Security}
For security we require that a party receiving
a garbled circuit and $n$ input labels
cannot computationally distinguish the joint distribution of the circuit and labels
from the distribution produced by
a simulator with access only to the circuit and its evaluation on the input that the labels represent.
The simulator does not have access to the actual inputs.
If this holds, the party receiving the garbled circuit and $n$ labels cannot determine the inputs.
\begin{align*}
&\exists\ \text{PPT}\ \Sim : \forall C, x\\
&(\tilde{C}, \{\ell_{i,x_i}\}_{i \in [n]}) \simeq \Sim(1^k, C, C(x)) \text{ where} \\
&(\tilde{C}, \{\ell_{i,b}\}_{i \in [n], b \in \{0,1\}}) \leftarrow \Garble(1^k, C)
\end{align*}
For simplicity we pass the circuit itself to the simulator.
Alternatively, one can use a universal circuit and treat the specific circuit
to be realized as part of the input. Letting $U$ be the universal circuit such that $U(C, x) = C(x)$, the structure of $U$ does not need to be hidden, just its input $(C, x)$.
\section{Use for Semi-Honest Two-Party Secure Computation}
Alice, with input $x^1$, and Bob, with input $x^2$, have a circuit $C$ that they want to evaluate securely.
The size of their combined inputs is $n$: $|x^1| = n_1$ and $|x^2| = n - n_1$.
They can do this by Alice garbling the circuit and sending input wire labels to Bob, as in Figure~\ref{fig:message}.
Alice garbles the circuit and passes the result $\tilde{C}$ to Bob.
Alice passes the labels for her own input directly to Bob: $\{\ell_{i, x^1_i}\}_{i \in [n_1]}$.
Alice feeds both labels for each of Bob's input wires into oblivious transfer, $\{\ell_{i, b}\}_{i \in [n] \setminus [n_1], b \in \{0,1\}}$,
from which Bob retrieves the labels for his actual inputs, $\{\ell_{i, x^2_i}\}_{i \in [n] \setminus [n_1]}$.
Bob now has the garbled circuit and one label for each input wire.
He evaluates the garbled circuit on those garbled inputs and learns $C(x^1 \| x^2)$.
Bob does not learn anything besides the result, as he holds only the garbled circuit and $n$ garbled inputs.
Alice does not learn anything, as she uses oblivious transfer to give Bob his input labels and receives nothing in reply.
\begin{figure}[htbp]
\begin{center}
\setlength{\unitlength}{1cm}
\begin{picture}(10, 7)(-5, -4)
% \put(-.5,2){\makebox(1,1){C}}
\put(-6,2){\makebox{Alice: $C, x^1$}}
\put(-6,1.3){\makebox{$(\tilde{C}, \{\ell_{i,b}\}) \leftarrow \Garble$}}
\put(4,2){\makebox{Bob: $C, x^2$}}
\put(-1,0){\makebox(2,2){$\underrightarrow{\tilde{C}}$}}
\put(-1,-0.8){\makebox(2,2){$\underrightarrow{ S_{out}^0 \text{ is 0 }, S_{out}^1 \text{ is 1 } }$}}
\put(-1,-2){\makebox(2,2){$\underrightarrow{\{\ell_{i, x^1_i}\}_{i \in [n_1]}}$}}
% \put(-1,-1){\makebox(2,2){$\underrightarrow{\ell_{i,0}, \, \ell_{i,1} \forall i \in [n]/[n_1] }$}}
\put(-.5,-3){\framebox(1,1){OT}}
\put(-1,-2.8){\line(1,0){.5}}
\put(-1.6,-2.8){\makebox{$\ell_{i,1}$}}
\put(-1,-2.2){\line(1,0){.5}}
\put(-1.6,-2.2){\makebox{$\ell_{i,0}$}}
\put(.5,-2.5){\line(1,0){.5}}
\put(1.2,-2.5){\makebox{$\{\ell_{i, x^2_i}\}_{i \in [n] \setminus [n_1]}$}}
\put(-1,-4.5){\makebox(2,2){$\underrightarrow{ \forall i \in [n] \setminus [n_1] }$}}
\end{picture}
\caption{Messages in Yao's Garbled Circuit}
\label{fig:message}
\end{center}
\end{figure}
%\paragraph{Malicious Bob}
%Alice semi-honest, and oblivious transfer is maliciously secure.
%Holds against malicious $Bob^*$
% What of deliberate circuit that shows first input
\subsection{Construction of Garbled Circuits}
We would like to garble a circuit such that there are two keys for each input wire.
Correctness should mean that,
given one of the two keys for each wire, we can compute the output for the inputs those keys correspond to.
Security should mean that,
given one key for each wire, one can learn only the output, not the actual inputs.
%---
We build the circuit entirely out of NAND gates and assume it outputs a single bit;
if more output bits are required, the construction can be repeated.
NAND gates suffice since they can realize any logic needed.
We define the following sets:
\begin{align*}
W &= \text{the set of wires in the circuit}\\
G &= \text{the set of gates in the circuit.}
\end{align*}
For each wire in the circuit, sample two keys
to label the possible inputs $0$ and $1$ to the wire
\[
\forall w \in W \quad S_w^0, S_w^1 \, \leftarrow{} \{0,1\}^k.
\]
We can think of these as the secret keys to an encryption scheme
(Gen, Enc, Dec).
For such a scheme we can always replace the secret key with the random bits fed into Gen.
\paragraph{Wires}
For each wire in the circuit we will maintain the invariant that the evaluator can obtain only one of the wire's two labels.
Consider an internal wire fed by the evaluation of a gate. The gate receives two encrypted values as inputs
and produces one encrypted output. The output will be one of the two labels for that wire, and the evaluator will have no
way of obtaining the other label for that wire.
For example, on wire $w_i$ the evaluator might learn only the label for the value $1$, namely $S_{w_i}^1$.
We ensure this for the input wires by giving the evaluator only one of the two labels for each wire.
\paragraph{Gates}
For every gate in the circuit we create four cipher texts.
For each choice of inputs we encrypt the output key under each of the input keys.
Let gate $g$ have inputs $w_1, w_2$ and output $w_3$,
\begin{align*}
e_g^{00} &= \Enc_{S_{w_1}^0} ( \Enc_{S_{w_2}^0} ( S_{w_3}^1, 0^k) )\\
e_g^{01} &= \Enc_{S_{w_1}^0} ( \Enc_{S_{w_2}^1} ( S_{w_3}^1, 0^k) )\\
e_g^{10} &= \Enc_{S_{w_1}^1} ( \Enc_{S_{w_2}^0} ( S_{w_3}^1, 0^k) )\\
e_g^{11} &= \Enc_{S_{w_1}^1} ( \Enc_{S_{w_2}^1} ( S_{w_3}^0, 0^k) ).
\end{align*}
We append $k$ zeros to the plaintext so that the evaluator can later recognize a successful decryption.
\paragraph{Final Output}
For the final output wire we simply reveal which label corresponds to which value:
\begin{align*}
S_{out}^0 &\text{ corresponds to 0}\\
S_{out}^1 &\text{ corresponds to 1.}
\end{align*}
\paragraph{$\mathbf{\tilde{C}}$}
For each gate, Alice sends Bob a random permutation of the set of four ciphertexts:
\[
\{e_g^{C_1, C_2} \} \quad \forall g \in G, \quad C_1, C_2 \in \{0,1\}.
\]
\paragraph{Evaluation}
With an encrypted gate $g$,
input keys $S_{w_1}, S_{w_2}$ for the input wires,
and four randomly permuted encryptions of the output keys, $e_g^{a}, e_g^{b}, e_g^{c}, e_g^{d}$,
Bob can evaluate the gate to find the corresponding key $S_{w_3}$ for the output wire.
Bob decrypts each of the encrypted output keys until he finds one that decrypts
to a string ending in the proper number of $0$'s, which very likely contains the proper output key.
We can decrease the probability of accepting a wrong key by increasing the number of $0$'s.
\[
\exists i \in \{a, b, c, d\} : \Dec_{S_{w_2}} ( \Dec_{S_{w_1}} ( e_g^{i} )) = (S_{w_3}, 0^k)
\]
Given input wire labels
$\{ \ell_{i, x_i} \}_{i \in [n]}$
the complete encrypted circuit $\tilde{C}$ is evaluated by working up from the input gates.
%$l_{i,b} = \{S_{i,b}\}$
%as with PRF encryption scheme
%$Enc(_s(m) = (r, m \oplus F_s(r)$
The evaluator should not be able to infer anything except what they could infer in the ideal world.
As a simple example, if the evaluator supplies one input to a circuit consisting of just one NAND gate,
they can infer the input of the other party; however, this is true in the ideal world as well.
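To ground the construction, the following self-contained Python sketch garbles and evaluates a single NAND gate. It assumes the symmetric encryption $\Enc_k(m) = (r, m \oplus H(k \| r))$ with $H$ instantiated by SHAKE-256 so the pad can match the plaintext length (an illustrative choice, not taken from the lecture):
\begin{verbatim}
# Toy garbling of one NAND gate with double encryption and a 0^k marker.
import hashlib, secrets

K = 16  # label length in bytes (the security parameter k, in bytes)

def enc(key, msg):
    """Enc_k(m) = r || (m xor SHAKE256(k || r))."""
    r = secrets.token_bytes(K)
    pad = hashlib.shake_256(key + r).digest(len(msg))
    return r + bytes(a ^ b for a, b in zip(msg, pad))

def dec(key, ct):
    r, body = ct[:K], ct[K:]
    pad = hashlib.shake_256(key + r).digest(len(body))
    return bytes(a ^ b for a, b in zip(body, pad))

def garble_nand():
    # Sample two labels per wire; index 0 encodes value 0, index 1 value 1.
    w1, w2, w3 = ([secrets.token_bytes(K), secrets.token_bytes(K)]
                  for _ in range(3))
    rows = []
    for a in (0, 1):
        for b in (0, 1):
            out_label = w3[1 - (a & b)]               # NAND truth table
            inner = enc(w2[b], out_label + bytes(K))  # append 0^k marker
            rows.append(enc(w1[a], inner))            # double encryption
    secrets.SystemRandom().shuffle(rows)              # hide the row order
    return rows, w1, w2, w3

def eval_gate(rows, l1, l2):
    for ct in rows:
        plain = dec(l2, dec(l1, ct))
        if plain.endswith(bytes(K)):                  # found the 0^k marker
            return plain[:K]
    raise ValueError("no row decrypted to a valid label")

rows, w1, w2, w3 = garble_nand()
for a in (0, 1):
    for b in (0, 1):
        assert eval_gate(rows, w1[a], w2[b]) == w3[1 - (a & b)]
\end{verbatim}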
\section{Proof Intuition}
What intuition can we offer that the
distribution of $\tilde{C}$ with one label per input wire
is indistinguishable from that which a simulator could produce with access only to the output?
%
For each input wire we are only given one key.
As we are doing double encryption,
for each input gate we only have the keys needed to decrypt one of the four possible outputs.
The other three are protected by semantic security.
%
So from each input gate we learn only one key, corresponding to its output wire.
As the output labels were randomized, we also do not know whether that key corresponds to a 0 or a 1.
%
For the next level of gates we again have only one key per input wire, and our argument continues.
%
So for each wire of the circuit we can only know one key corresponding to an output value for the wire.
Everything else is random garbage.
%
As we control the mapping from output keys to output values, we can set this to whatever is needed to
match the expected output.
Security only holds for evaluating the circuit on a single set of input values, and
we assume that the circuit is combinational and thus acyclic.
% with two input all 0 or all 1 all broken
% even with just 2 keys for one input wire - broken.
\section{Malicious attacker instead of semi-honest attacker}
The assumption we had before was a semi-honest attacker instead of a malicious one. A malicious attacker does not have to follow the protocol, and may instead deviate from it arbitrarily. The main idea here is that we can convert a protocol secure against semi-honest attackers into one that remains secure against malicious attackers.
At the beginning of the protocol, we have each party commit to its inputs:
Given a commitment protocol $com$, Party 1 produces
\begin{center}
$c_1 = com(x_1; w_1)$ \\
$d_1 = com(r_1; \phi_1)$ \\
\end{center}
Party 2 produces
\begin{center}
$c_2 = com(x_2; w_2)$\\
$d_2 = com(r_2; \phi_2)$
\end{center}
We have the following guarantee for every message $t$ that party $i$ sends: $\exists x_i, r_i, w_i, \phi_i$ such that $c_i = com(x_i; w_i) \wedge d_i = com(r_i; \phi_i) \wedge t = \pi(i,\text{transcript}, x_i, r_i)$, where $\pi$ is the next-message function of the semi-honest protocol and transcript is the set of messages sent in the protocol so far. Each party proves this statement in zero knowledge alongside each message it sends.
Here we have a potential problem. Since both parties are choosing their own random coins, we have to be able to enforce that the coins are \emph{indeed} random. We can solve this by using the following protocol:
\begin{center}
\begin{picture}(200,100)(10,20)
\put(20, 90){$d_1 = com(s_1; \phi_1)$}
\put(20,80){\vector(1,0){50}}
\put(150, 90){$d_2 = com(s_2; \phi_2)$}
\put(200, 80){\vector(-1,0){50}}
\put(20, 60){$s_2^{'}$}
\put(20,50){\vector(1,0){50}}
\put(200, 60){$s_1^{'}$}
\put(200, 50){\vector(-1,0){50}}
\end{picture}
\end{center}
We calculate $r_1 = s_1 \oplus s_1^{'}$, and $r_2 = s_2 \oplus s_2^{'}$. As long as one party is picking the random coins honestly, both parties would have truly random coins.
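A toy Python rendering of this coin-flipping step, with $com(s; \phi)$ modeled as the hash commitment $H(\phi \| s)$ (an illustrative assumption; hash-based commitments require care in practice):
\begin{verbatim}
# Toy coin flipping: party 1 ends up with coins r1 = s1 xor s1',
# where s1 is hidden inside its commitment and s1' is chosen by party 2.
import hashlib, secrets

def com(s: bytes, phi: bytes) -> bytes:
    return hashlib.sha256(phi + s).digest()

# Commit phase: each party commits to its own secret share.
s1, phi1 = secrets.token_bytes(16), secrets.token_bytes(16)
s2, phi2 = secrets.token_bytes(16), secrets.token_bytes(16)
d1, d2 = com(s1, phi1), com(s2, phi2)   # exchanged commitments

# Each party then sends a fresh public share for the *other* party.
s2p = secrets.token_bytes(16)           # party 1 -> party 2
s1p = secrets.token_bytes(16)           # party 2 -> party 1

# Party 1's coins: s1 never leaves the commitment, so r1 stays private;
# the later ZK proofs refer to d1 to show r1 was formed honestly.
r1 = bytes(a ^ b for a, b in zip(s1, s1p))
r2 = bytes(a ^ b for a, b in zip(s2, s2p))
assert len(r1) == len(r2) == 16
\end{verbatim}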
Furthermore, during the first commitment phase, we want to make sure that the committing party actually knows the value that is being committed to. Thus, we also attach along with the commitment a zero-knowledge proof of knowledge (ZK-PoK) to prove that the committing party knows the value that is being committed to.
\subsection{Zero-knowledge proof of knowledge (ZK-PoK)}
\begin{definition}[ZK-PoK] A zero-knowledge proof of knowledge (ZK-PoK) is a zero-knowledge proof system $(P,V)$ with the proof-of-knowledge property with knowledge error $\kappa$:
$\exists$ a PPT $E$ (the knowledge extractor) such that $\forall x \in L$ and $\forall P^{*}$ (possibly unbounded), it holds that if $\Pr[\mathrm{Out}_V(P^{*}(x,w) \leftrightarrow V(x)) = 1] > \kappa(x)$, then
\[ \Pr[E^{P^*}(x) \in R(x)] \geq \Pr[\mathrm{Out}_V(P^{*}(x,w) \leftrightarrow V(x)) = 1] - \kappa(x).\]
Here $L$ is the language, $R$ is the relation, and $R(x)$ is the set of witnesses for $x$, i.e., the set of all $w$ such that $(x, w) \in R$.
\end{definition}
Given a zero-knowledge proof system, we can construct a ZK-PoK system for statement $x\in L$ with witness $w$ as follows:
\begin{center}
\begin{picture}(300,300)(10,20)
\put(10, 290){$P$}
\put(290, 290){$V$}
\put(10, 270){$r \leftarrow \{0, 1\}^{|w|}$}
\put(100, 260){$c_1 = com(r; \omega)$}
\put(100, 250){$c_2 = com(r \oplus w; \phi)$}
\put(100, 240){\vector(1,0){100}}
\put(150, 210){$b$}
\put(200, 200){\vector(-1,0){100}}
\put(120, 160){if $b = 0$, open $c_1$ to reveal $r$}
\put(120, 150){else open $c_2$ to reveal $r \oplus w$}
\put(100, 140){\vector(1,0){100}}
\put(120, 60){\framebox(50,50)[c]{ZK Proof}}
\end{picture}
\end{center}
The last ZK proof proves that $\exists r, w, \omega, \phi$ such that $(x, w) \in R$ and $c_1 = com(r; \omega)$, $c_2 = com(r \oplus w; \phi)$.
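The proof-of-knowledge property of this construction can be seen concretely: an extractor that rewinds the prover to answer both challenges $b = 0$ and $b = 1$ recovers $w$ by XOR-ing the two openings. A toy illustration in Python:
\begin{verbatim}
# If the extractor gets openings for both b = 0 (reveals r) and
# b = 1 (reveals r xor w), the witness falls out by XOR.
import secrets

w = secrets.token_bytes(16)                   # the witness
r = secrets.token_bytes(16)                   # the prover's random mask
open_b0 = r                                   # opening of c1
open_b1 = bytes(a ^ b for a, b in zip(r, w))  # opening of c2
extracted = bytes(a ^ b for a, b in zip(open_b0, open_b1))
assert extracted == w
\end{verbatim}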
\section*{Exercises}
\begin{exercise}
Given a (secure against malicious adversaries) two-party secure computation protocol (and nothing else) construct a (secure against malicious adversaries) three-party secure computation protocol.
\end{exercise}
% % !TEX root = collection.tex
% \chapter{Obfustopia}
% \section{Witness Encryption: A Story}\label{story}
% Imagine that a billionaire who loves mathematics, would like to award with 1 million dollars the mathematician(s) who will prove the Riemann Hypothesis. Of course, neither does the billionaire know if the Riemann Hypothesis is true, nor if he will be still alive (if and) when a mathematician will come up with a proof. To overcome these couple of problems, the billionaire decides to:
% \begin{enumerate}
% \item Put 1 million dollars in gold in a big treasure chest.
% \item Choose an arbitrary place of the world, dig up a hole, and hide the treasure chest.
% \item Encrypt the coordinates of the treasure chest in a message so that only the mathematician(s) who can actually prove the Riemann Hypothesis can decrypt it.
% \item Publish the ciphertext in every newspaper in the world.
% \end{enumerate}
% The goal of this lecture is to help the billionaire with step 3. To do so, we will assume for simplicity that the proof is at most 10000 pages long. The latter assumption implies that the language
% \begin{align*}
% L = \{ x \text{ such that } x \text{ is an acceptable Riemann Hypothesis proof} \}
% \end{align*}
% is in NP and therefore, using a reduction, we can come up with a circuit $C$ that takes as input $x$ and outputs $1$ if $x$ is a proof for the Riemann Hypothesis and $0$ otherwise.
% \smallskip
% Our goal now is to design a pair of PPT machines $(\mathrm{Enc},\mathrm{Dec})$ such that:
% \begin{enumerate}
% \item $\mathrm{Enc}(C,m)$ takes as input the circuit $C$ and $m \in \{0,1\}$ and outputs a ciphertext $e \in \{0,1\}^{*}$.
% \item $\mathrm{Dec}(C,e,w)$ takes as input the circuit $C$, the cipertext $e$ and a witness $w \in \{0,1\}^{*}$ and outputs $m$ if if $C(w) = 1$ or $\perp$ otherwise.
% \end{enumerate}
% and so that they satisfy the following correctness and security requirements:
% \begin{itemize}
% \item \textbf{Correctness:} If $\exists w$ such that $C(w) = 1$ then $\mathrm{Dec}(C,e,w)$ outputs $m$.
% \item \textbf{Security:} If $\nexists w$ such that $C(w) = 1$ then $\mathrm{Enc}(C,0) \approx^{c} \mathrm{Enc}(C,1) \!\ $ (where $ \approx^{c}$ means ``computationally indistinguishable'').
% \end{itemize}
% \section{A Simple Language }
% As a first example, we show how we can design such an encryption scheme for a simple language. Let $G$ be a group of prime order and $g$ be a generator of the group. For elements $A, B, T \in G$ consider the language $L = \{(a,b): A = g^a, B = g^b, T = g^{ab} \}$. An encryption scheme for that language with the correctness and security requirements of Section~\ref{story} is the following:
% \smallskip
% \begin{itemize}
% \item \textbf{Encryption$(g,A,B,T,G)$:}
% \begin{itemize}
% \item Choose elements $r_1, r_2 \in \mathbb{Z}_p^*$ uniformly and independently.
% \item Let $c_1 = A^{r_1} g^{r_2} $, $c_2 = g^m T^{r_1} B^{r_2}$, where $m \in \{0,1\}$ is the message we want to encrypt.
% \item Output $c = (c_1, c_2)$
% \end{itemize}
% \item \textbf{Decryption($b$):}
% \begin{itemize}
% \item Output $\frac{c_2}{c_1^b}$
% \end{itemize}
% \end{itemize}
% \textbf{Correctness:}
% The correcntess of the above encryption scheme follows from the fact that if there exist $(a,b) \in L$ then:
% \begin{eqnarray*}
% \frac{c_2}{c_1^b} & = & \frac{g^m T^{r_1} B^{r_2} }{ \left( A^{r_1}g^{r_2}\right)^b } \\
% & = & \frac{g^m \left(g^{ab}\right)^{r_1} \left( g^{b} \right)^{r_2} }{ \left( g^{a} \right)^{r_1 b} g^{r_2 b} } \\
% & = & g^{m}
% \end{eqnarray*}
% Since $m \in \{0,1\}$ and we know $g$, the value of $g^m$ implies the value of $m$.
% \smallskip
% \textbf{Security:}
% As far as the security of the scheme is concerned, since $L$ is quite simple, we can actually prove that $m$ is information-theoretically hidden. To see this, assume there does not exist $(a,b) \in L$, but an adversary has the power to compute discrete logarithms. In that case, given $c_1$ and $c_2$ the adversary could get a system of the form:
% \begin{eqnarray*}
% ar_1 + r_2 & = & s_1 \\
% m + r r_1 + b r_2 &=& s_2
% \end{eqnarray*}
% where $s_1$ and $s_2$ are the discrete logarithms of $c_1$ and $c_2$ respectively (with base $g$), and $r \ne ab$ is an element of $ \mathbb{Z}_{p}^*$ such that $T = g^r$. Observe now that for each value of $m$ there exist numbers $r_1$ and $r_2$ so that the above system has a solution, and thus $m$ is indeed information-theoretically hidden (on the other hand, if we had that $ab = r$ then the equations are linearly dependent).
% \newpage
% \section{An NP Complete Language }
% In this section we focus on our original goal of designing an encryption for an NP complete language $L$. Specifically, we will consider the NP-complete problem \emph{exact cover}. Besides that, we introduce the $n$-Multilinear Decisional Diffie-Hellman ($n$-MDDH) assumption and the Decisional Multilinear No-Exact-Cover Assumption. %(see also~\cite{Sanjam}).
% The latter will guarantee the security of our construction.
% \subsection{ Exact Cover}
% We are given as input $x = (n, S_1, S_2, \ldots, S_l)$, where $n$ is an integer and each $S_i, i \in [l]$ is a subset of $[n]$, and our goal is to find a subset of indices $T \subseteq [l]$ such that:
% \begin{enumerate}
% \item $\cup_{i \in T} S_i = [n] $ and
% \item $\forall i, j \in T$ such that $i \ne j$ we have that $S_i \cap S_j = \emptyset$.
% \end{enumerate}
% If such a $T$ exists, we say that $T$ is an exact cover of $x$.
% \subsection{Multilinear Maps}
% Mutlinear maps is a generalization of bilinear maps (which we have already seen) that will be useful in our construction. Specifically, we assume the existence of a group generator $\mathcal{G}$, which takes as input a security parameter $\lambda$ and a positive integer $n$ to indicate the number of allowed operations. $\mathcal{G}(1^{\lambda},n)$ outputs a sequence of groups $\vec{\mathbb{G}}= (\mathbb{G}_1, \mathbb{G}_2, \ldots, \mathbb{G}_n)$ each of large prime order $P > 2^{\lambda}$. In addition, we let $g_i$ be a canonical generator of $\mathbb{G}_i$ (and is known from the group's description).
% We also assume the existence of a set of bilinear maps $\{e_{i,j}: \mathbb{G}_i \times \mathbb{G}_j \rightarrow \mathbb{G}_{i+j} \mid i, j \ge 1; i+j \le n \}.$ The map $e_{i,j}$ satisfies the following relation:
% \begin{align}
% e_{i,j}\left(g_i^{a},g_j^{b}\right) = g^{ab}_{i+j}: \forall a,b \in \mathbb{Z}_p \label{vasikoni}
% \end{align}
% and we observe that one consequence of this is that $e_{i,j} (g_i, g_j) = g_{i+j}$ for each valid $i,j$.
% \subsection{The $n$-MDDH Assumption }
% The $n$-Multilinear Decisional Diffie-Hellman ($n$-MDDH) problem states the following: A challenger runs $\mathcal{G}(1^{\lambda},n ) $ to generate groups and generators of order $p$. Then it picks random $s, c_1, \ldots, c_n \in \mathbb{Z}_p$. The assumption then states that given $g= g_1, g^{s}, g^{c_1}, \ldots,g^{c_n}$ it is hard to distinguish $T = g_n^{s \prod_{j \in [1,n ] } c_j}$ from a random group element in $G_n$, with better than negligible advantage (in security parameter $\lambda$).
% \newpage
% \subsection{Decisional Multilinear No-Exact-Cover Assumption}
% Let $x = (n, S_1, \ldots, S_l)$ be an instance of the exact cover problem that has no solution. Let $\mathrm{param} \leftarrow \mathcal{G}(1^{1+n},n)$ be a description of a multilinear group family with order $p = p(\lambda)$. Let $a_1, a_2, \ldots, a_n,r$ be uniformly random in $\mathbb{Z}_p$. For $i \in [l]$, let $c_i = g_{|S_i|}^{ \prod_{j \in S_i} a_j}$. Distiguish between the two distributions:
% \begin{align*}
% (\mathrm{params}, c_1, \ldots,c_l,g_n^{a_1a_2\ldots a_n}) \text{ and } (\mathrm{params},c_1, \ldots,c_l,g_n^r)
% \end{align*}
% The Decisional Multilinear No-Exact-Cover Assumption is that for all adversaries $\mathcal{A}$, there exists a fixed negligible function $\nu(\cdot)$ such that for all instances $x$ with no solution, $\mathcal{A}$'s distinguishing advantage against the Decisional Multilinear No-Exact-Cover Problem for $x$ is at most $\nu(\lambda)$.
% \subsection{The Encryption Scheme }
% We are now ready to give the description of our encryption scheme.
% \begin{itemize}
% \item $\mathrm{Enc}(x,m)$ takes as input $x=(n, S_1, \ldots,S_l)$ and the message $m \in \{0,1\}$ and:
% \begin{itemize}
% \item Samples $a_{0}, a_1, \ldots, a_{n}$ uniformly and independently from $\mathbb{Z}_p^*$.
% \item $\forall i \in [l]$ let $c_i = g^{\prod_{j\in S_j} a_j}_{|S_i|}$
% \item Sample uniformly an element $r \in \mathbb{Z}_p^*$
% \item Let $d = d(m) $ be $ g_n^{\prod_{j \in [n]}a_j}$ if $m = 1 $ or $g_n^r$ if $m = 0$.
% \item Output $c = (d, c_1, \ldots,c_l)$
% \end{itemize}
% \item $\mathrm{Dec}(x,T)$, where $T \subseteq[l]$ is a set of indices, computes $\prod_{i \in T}c_i$ and outputs $1$ if the latter value equals to $d$ or $0$ otherwise.
% \end{itemize}
% \begin{itemize}
% \item \textbf{Correctness:} Assume that $T$ is an exact cover of $x$. Then, it is not hard to see that:
% \begin{eqnarray*}
% \prod_{i \in T} c_i & = & \prod_{i \in T} g^{\prod_{j\in S_j} a_j}_{|S_i|} \\
% & = & g_n^{\prod_{j \in [n]}a_j}
% \end{eqnarray*}
% where we have used~\eqref{vasikoni} repeatedly and the fact that $T$ is an exact cover (to show that $\sum_{i \in T} |S_i| = n$ and that $\prod_{i \in T} \prod_{j \in S_i} a_j = \prod_{i \in [n]} a_i$).
% \item \textbf{Security:} Intuitively, the construction is secure, since the only way to make $g_n^{\prod_{ j \in [n] }a_i}$ is to find an exact cover of $[n]$. As a matter of fact, observe that if an exact cover does not exist, then for each subset of indices $T'$( such that $\cup_{i \in T'}S_j = [n]$) we have that
% \begin{align*}
% \sum_{i =1 }^{n} |S_i| > n,
% \end{align*}
% which means that $\prod_{i \in T} \prod_{j \in S_i} a_j$ is different than $\prod_{j \in [n]}a_j$. Formally, the security is based on the Decisional Multilinear No-Exact-Cover Assumption.
% \end{itemize}
% %\bibliographystyle{plain}
% %\bibliography{smoser}
% % !TEX root = collection.tex
% %\newcommand{\norm}[1]{\left\Vert#1\right\Vert}
% \newcommand{\ABS}[1]{\left\vert#1\right\vert}
% \newcommand{\SET}[1]{\left\{#1\right\}}
% \newcommand{\INP}[1]{\left(#1\right)}
% \newcommand{\POLY}[1]{\ensuremath{\mathop{\mathrm{poly}}\INP{#1}}}
% %\newcommand{\iO}[1]{\ensuremath{\mathop{i\mathcal{O}}\INP{#1}}}
% \newcommand{\ENC}[1]{\ensuremath{\mathop{\mathrm{Enc}}\INP{#1}}}
% \newcommand{\DEC}[1]{\ensuremath{\mathop{\mathrm{Dec}}\INP{#1}}}
% %\bibliographystyle{plain}
% \section{Obfuscation}
% The problem of program obfuscation asks whether one can transform a program (e.g., circuits, Turing machines) to another semantically equivalent program (i.e., having the same input/output behavior), but is otherwise intelligible.
% It was originally formalized by Barak et al. who constructed a family of circuits that are non-obfuscatable under the most natural virtual black box (VBB) security.
% \section{VBB Obfuscation}
% As a motivation, recall that in a private-key encryption setting, we have a secret key $k$, encryption $E_k$ and decryption $D_k$.
% A natural candidate for public-key encryption would be to simply release an encryption $E'_k \equiv E_k$ (i.e. $E'_k$ semantically equivalent to $E_k$, but computationally bounded adversaries would have a hard time figuring out $k$ from $E'_k$.
% \begin{definition}[Obfuscator of circuits under VBB]
% $O$ is an \emph{obfuscator} of circuits if %for every circuit $c$ we have,
% \begin{enumerate}
% \item
% Correctness:
% $\forall c, O(c) \equiv c$.
% \item
% Efficiency:
% $\forall c, \ABS{O(c)} \le \POLY{\ABS{c}}$.
% \item
% VBB:
% $\forall A, A$ is PPT bounded, $\exists$ S (also PPT) s.t. $\forall c$,
% \[
% \ABS{\Pr\left[ A\left( O(c) \right) = 1\right] - \Pr\left[ S^c(1^{\ABS{c}}) = 1 \right]} \le \mathrm{negl}(\ABS{c}).
% \]
% \end{enumerate}
% \end{definition}
% Similarly we can define it for Turing machines.
% \begin{definition}[Obfuscator of TMs under VBB]
% $O$ is an \emph{obfuscator} of Turing machines if %for every circuit $c$ we have,
% \begin{enumerate}
% \item
% Correctness:
% $\forall M, O(M) \equiv M$.
% \item
% Efficiency:
% $\exists q(\cdot) = \POLY{\cdot}, \forall M \left( M(x) \hbox{ halts in }t \hbox{ steps} \implies O(M)(x) \hbox{ halts in }q(t) \hbox{ steps}\right)$.
% \item
% VBB:
% Let $M'(t,x)$ be a TM that runs $M(x)$ for $t$ steps.
% $\forall A, A$ is PPT bounded, $\exists$ Sim (also PPT) s.t. $\forall c$,
% \[
% \ABS{\Pr\left[ A\left( O(M) \right) = 1\right] - \Pr\left[ S^{M'}(1^{\ABS{M'}}) = 1 \right]} \le \mathrm{negl}(\ABS{M'}).
% \]
% \end{enumerate}
% \end{definition}
% Let's show that our candidate PKE from VBB obfuscator $O$ is semantic secure, using a simple hybrid argument.
% \proof
% Recall the public key $PK=O(E_k)$.
% Let's assume $E_k$ is a circuit.
% \begin{align*}
% H_0 :& A(\SET{(PK, E_k(m_0))}) & \\
% H_1 :& S^c(\SET{E_k(m_0)}) & \hbox{ by VBB} \\
% H_2 :& S^c(\SET{E_k(m_1)}) & \hbox{ by semantic security of private key encryption} \\
% H_3 :& A(\SET{(PK, E_k(m_1))}) & \hbox{ by VBB}
% \end{align*}
% \qed
% Unfortunately VBB obfuscator for all circuits does not exist. Now we show the impossiblity result of VBB obfuscator.
% \begin{theorem}
% Let $O$ be an obfuscator.
% There exists PPT bounded $A$, and a family (ensemble) of functions $\SET{H_n}$, $\SET{Z_n}$ s.t.
% for every PPT bounded simulator $S$,
% \begin{gather*}
% A\left( O(H_n) \right) = 1 \ \ \& \ \ A\left( O(Z_n) \right) = 0\\
% \ABS{\Pr\left[ S^{H_n} \left( 1^{\ABS{H_n}} \right) = 1 \right] - \Pr \left[ S^{Z_n} \left(1^{\ABS{Z_n}}\right) =1 \right]} \le\mathrm{negl}(n).
% \end{gather*}
% \end{theorem}
% \proof
% Let $\alpha, \beta \overset{\$}{\leftarrow} \SET{0,1}^n$.
% We start by constructing $A',C_{\alpha,\beta}, D_{\alpha,\beta}$ s.t.
% \begin{gather*}
% A'\left( O(C_{\alpha,\beta}), O(D_{\alpha,\beta}) \right) = 1 \ \ \& \ \ A'\left( O(Z_n), O(D_{\alpha,\beta}) \right) = 0\\
% \ABS{\Pr\left[ S^{C_{\alpha,\beta},D_{\alpha,\beta}} \left( \mathbf{1} \right) = 1 \right] - \Pr \left[ S^{Z_n,D_{\alpha,\beta}} \left(\mathbf{1}\right) =1 \right]} \le\mathrm{negl}(n).
% \end{gather*}
% \begin{gather*}
% C_{\alpha,\beta}(x) =
% \begin{cases}
% \beta, & \hbox{if } x = \alpha,\\
% 0^n, & \hbox{o/w}
% \end{cases} \\
% D_{\alpha,\beta}(c)=
% \begin{cases}
% 1,& \hbox{if } c(\alpha) = \beta,\\
% 0, & \hbox{o/w}.
% \end{cases}
% \end{gather*}
% Clearly $A'(X,Y) = Y(X)$ works.
% Now notice that input length to $D$ grows as the size of $O(C)$.
% However for Turing machines which can have the same description length, one could combine the two in the following way:
% $F_{\alpha,\beta}(b, x) =
% \begin{cases}
% C_{\alpha,\beta}(x), & b=0\\
% D_{\alpha,\beta}(x), & b=1\\
% \end{cases}.$
% Let $OF= O(F_{\alpha,\beta})$, $OF_0(x) = OF(0,x)$, similarly for $OF_1$, then $A$ would be just $A(OF) = OF_1(OF_0)$.
% Now assuming OWF exists, specifically we already have priavte-key encryption, we modify $D$ as follows.
% \begin{gather*}
% D_k^{\alpha,\beta}(1,i) = \mathrm{Enc}_k(\alpha_i) \\
% D_k^{\alpha,\beta}(2,c,d,\odot) = \mathrm{Enc}_k(\mathrm{Dec}_k(c) \odot \mathrm{Dec}_k(d)), \hbox{where $\odot$ is a gate of AND, OR, NOT} \\
% D_k^{\alpha,\beta}(3, \gamma_1,\cdots,\gamma_n) =
% \begin{cases}
% 1,& \forall i, \mathrm{Dec}_k(\gamma_i) = \beta_i,\\
% 0, & \hbox{o/w}.
% \end{cases}
% \end{gather*}
% Now the adversary $A$ just simulate $O(C)$ gate by gate with a much smaller $O(D)$, thus we can use the combining tricks as for the Turing machines.
% \qed
% \section{Indistinguishability Obfuscation}
% %\begin{definition}[Indistinguishability Obfuscation]
% % $\iO{\cdot}$ is an \emph{indistinguishability obfuscation} if $\forall c_1, c_2$ such that $\ABS{c_1}= \ABS{c_2}$ and $c_1\equiv c_2$, we have
% % \[
% % \iO{c_1} \overset{c}{\approx} \iO{c_2}.
% % \]
% %\end{definition}
% %\newcommand{\iO}{\ensuremath{i\mathcal{O}}}
% \newcommand{\Ck}{\ensuremath{\mathcal{C}_\kappa}}
% \begin{definition}[Indistinguishability Obfuscator]
% A uniform PPT machine $\iO$ is an \emph{indistinguishability obfuscator}
% for a collection of circuits $\Ck$ if the following conditions hold:
% \begin{itemize}
% \item \emph{Correctness.}
% For every circuit $C \in \Ck$ and for all inputs $x$,
% $C(x) = \iO(C(x))$.
% \item \emph{Polynomial slowdown.}
% For every circuit $C \in \Ck$, $|\iO(C)| \leq p(|C|)$ for some
% polynomial $p$.
% \item \emph{Indistinguishability.}
% For all pairs of circuits $C_1, C_2 \in \Ck$, if $|C_1| = |C_2|$ and
% $C_1(x) = C_2(x)$ for all inputs $x$, then
% $\iO(C_1) \overset{c}{\simeq} \iO(C_2)$.
% More precisely, there is a negligible function $\nu(k)$ such that for
% any (possibly non-uniform) PPT $A$,
% \begin{equation*}
% \big| \Pr[A(\iO(C_1)) = 1] - \Pr[A(\iO(C_2)) = 1] \big| \leq \nu(k).
% \end{equation*}
% \end{itemize}
% \end{definition}
% \begin{lemma}
% Indistinguishability obfuscation implies witness encryption.
% \end{lemma}
% \proof
% Recall the witness encryption scheme, with which one could encrypt a message $m$ to an instance $x$ of an NP language $L$, such that $\DEC{x,w,\ENC{x,m}}=
% \begin{cases}
% m, \hbox{if} (x,w)\in L, \\
% \bot, \hbox{o/w}
% \end{cases}$
% Let $C_{x,m}(w)$ be a circuit that on input $w$, outputs $m$ if and only if $(x,w) \in L$.
% Now we construct witness encryption as follows:
% $\ENC{x,m}=\iO{C_{x,m}}, \DEC{x,w,c}=c(w)$.
% Semantic security follows from the fact that, for $x\not\in L$, $C_{x,m}$ is just a circuit that always output $\bot$, and by indistinguishability obfuscation, we could replace it with a constant circuit (padding if necessary), and then change the message, and change the circuit back.
% \qed
% \begin{lemma}
% Indistinguishability obfuscation and OWFs imply public key encryption.
% \end{lemma}
% \proof
% We'll use a length doubling PRG $F: \SET{0,1}^n \to \SET{0,1}^{2n}$, together with a witness encryption scheme $(E,D)$.
% The NP language for the encryption scheme would be the image of $F$.
% \begin{align*}
% &\mathrm{Gen}(1^n) = (PK = F(s), SK=s), s\overset{\$}{\leftarrow} \SET{0,1}^n\\
% &\ENC{PK,m} = E(x=PK,m)\\
% &\DEC{e,SK=s} = D(x=PK,w=s,c=e).
% \end{align*}
% \qed
% \begin{lemma}
% Every best possible obfuscator could be equivalently achieved with an indistinguishability obfuscation (up to padding and computationally bounded).
% \end{lemma}
% \proof
% Consider circuit $c$, the \emph{best possible obfuscated} $BPO(c)$, and $c'$ which is just padding $c$ to the same size of $BPO(c)$.
% Computationally bounded adversaries cannot distinguish between $\iO{c'}$ and $\iO{BPO(c)}$.
% Note that doing iO never decreases the ``entropy'' of a circuit, so $\iO{BPO(c)}$ is at least as secure as $BPO(c)$.
% \qed
% % !TEX root = collection.tex
% %\section{Using Indistinguishability Obfuscation}