I am currently a Postdoctoral Researcher at Georgetown University, hosted by Professor Justin Thaler. Until May 2024, I was also a joint postdoc at the University of Maryland, hosted by Professor Jonathan Katz. I received my PhD in Computer Science from Purdue University in 2022 and was extremely fortunate to be advised by Professor Jeremiah Blocki. My broad research interests are in Theoretical Cryptography, Coding Theory, Efficient Proof and Argument Systems, and Fiat-Shamir security.
I was a Graduate Fellow during the Summer of 2019 at the Reichman University (IDC Herzliya) FACT Center under the guidance of Alon Rosen.
I received my M.Sc. in Computer Science from Purdue University in 2019. Prior to that, I received a B.S. in both Mathematics (Hons.) and Information & Computer Science (Hons.) from the University of California, Irvine, and graduated with Campuswide Honors from the Campuswide Honors Collegium (formerly known as the Campuswide Honors Program).
All authors are listed in alphabetical order by last name. More recent papers appear first.
FRI is a cryptographic protocol widely deployed today as a building block of many efficient SNARKs that help secure hundreds of millions of dollars in transactions per day. The Fiat-Shamir security of FRI—vital for understanding the security of FRI-based SNARKs—has only recently been formalized and established by Block et al. (ASIACRYPT '23).
In this work, we complement the result of Block et al. by providing a thorough concrete security analysis of non-interactive FRI under various parameter settings from protocols deploying (or soon to be deploying) FRI today. We find that these parameters nearly achieve their desired security targets (being at most $1$ bit less secure than their targets) for non-interactive FRI with respect to a certain security conjecture about the FRI protocol. However, in all but one set of parameters, we find that the provable security of non-interactive FRI under these parameters is severely lacking, being anywhere between $21$ and $63$ bits less secure than the conjectured security. Since the conjectured security of FRI assumes that known attacks are optimal, the security of these systems would be severely compromised should a better attack be discovered. In light of this, we present parameter guidelines for achieving $100$ bits of provable security for non-interactive FRI, along with a methodology for tuning these parameters to suit the needs of protocol designers.
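For intuition, a security level of $\lambda$ bits corresponds to a best-known attack succeeding with probability $2^{-\lambda}$. The toy calculation below illustrates how a gap between conjectured and provable security shows up in bits; the rate $\rho = 1/8$ and query count $\ell = 30$ are hypothetical parameters, and the conjectured per-query soundness $\rho$ versus provable per-query soundness roughly $\sqrt{\rho}$ is a deliberate caricature of the bounds analyzed in the paper (which include further additive terms).

    from math import log2

    def bits_of_security(soundness_error):
        # Security in bits is -log2 of the best attack's success probability.
        return -log2(soundness_error)

    # Hypothetical FRI-style parameters: code rate rho, ell verifier queries.
    rho, ell = 1/8, 30
    conjectured = bits_of_security(rho ** ell)        # ell * log2(1/rho) = 90 bits
    provable    = bits_of_security(rho ** (ell / 2))  # roughly half: 45 bits
    print(conjectured, provable)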
Efficient realizations of succinct non-interactive arguments of knowledge (SNARKs) have gained popularity due to their practical applications in various domains. Among existing schemes, those based on error-correcting codes are of particular interest because of their good concrete efficiency, transparent setup, and plausible post-quantum security. However, many existing code-based SNARKs suffer from the disadvantage that they only work over specific finite fields.
In this work, we construct a code-based SNARK that does not rely on any specific underlying field; i.e., it is field-agnostic. Our construction follows the framework of Brakedown and builds a polynomial commitment scheme (and hence a SNARK) based on recently introduced expand-accumulate codes. Our work generalizes these codes to arbitrary finite fields; our main technical contribution is showing that, with high probability, these codes have constant rate and constant relative distance (crucial properties for building efficient SNARKs), solving an open problem from prior work.
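As a rough illustration of the encoding pattern behind expand-accumulate codes, the sketch below first "expands" a message by a sparse random matrix over $\mathbb{F}_p$ and then "accumulates" via prefix sums. This is a minimal sketch under assumed parameters, not the paper's construction: a real code fixes the sparse matrix once as a public parameter, chooses the density and output length according to the analysis, and encodes in time proportional to the number of nonzero entries.

    import random

    def ea_encode(msg, out_len, density, p, seed=0):
        # Expand: multiply the message by a sparse random 0/1 matrix over F_p,
        # where each entry is 1 independently with probability `density`.
        rng = random.Random(seed)  # a fixed seed stands in for a public matrix
        y = [0] * out_len
        for a in msg:
            for j in range(out_len):
                if rng.random() < density:
                    y[j] = (y[j] + a) % p
        # Accumulate: replace y with its prefix sums over F_p.
        for j in range(1, out_len):
            y[j] = (y[j] + y[j - 1]) % p
        return y

    codeword = ea_encode(msg=[3, 1, 4, 1, 5], out_len=15, density=0.3, p=97)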
As a result of our work, we obtain a SNARK where, for a statement of size $M$, the prover time is $O(M\log M)$ and the proof size is $O(\sqrt{M})$. We demonstrate the concrete efficiency of our scheme empirically via experiments: proving ECDSA verification on the secp256k1 curve requires only 0.23s for proof generation, two orders of magnitude faster than SNARKs that are not field-agnostic. Compared to the original Brakedown result (which is also field-agnostic), we obtain proofs that are 1.9-2.8$\times$ smaller due to the good concrete distance of our underlying error-correcting code, while incurring only a small overhead of 1.2$\times$ in prover time.
We establish new results on the Fiat-Shamir (FS) security of several protocols that are widely used in practice, and we provide general tools for establishing similar results for others. More precisely, we: (1) prove the FS security of the FRI and batched FRI protocols; (2) analyze a general class of protocols, which we call $\delta$-correlated, that use low-degree proximity testing as a subroutine (this includes many "Plonk-like" protocols (e.g., Plonky2 and Redshift), ethSTARK, RISC Zero, etc.); and (3) prove FS security of the aforementioned "Plonk-like" protocols, and sketch how to prove the same for the others.
We obtain our first result by analyzing the round-by-round (RBR) soundness and RBR knowledge soundness of FRI. For the second result, we prove that if a $\delta$-correlated protocol is RBR (knowledge) sound under the assumption that adversaries always send low-degree polynomials, then it is RBR (knowledge) sound in general. Equipped with this tool, we prove our third result by formally showing that "Plonk-like" protocols are RBR (knowledge) sound under the assumption that adversaries always send low-degree polynomials. We then outline analogous arguments for the remainder of the aforementioned protocols.
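For readers unfamiliar with the notion, round-by-round soundness (Canetti et al., STOC 2019) can be informally sketched as follows: a public-coin protocol for a language $L$ is RBR sound with error $\epsilon$ if there is a notion of "doomed" partial transcripts such that (i) for $x \notin L$, the empty transcript is doomed; (ii) if a partial transcript $\tau$ is doomed, then for every prover message $m$, the extended transcript $(\tau, m, r)$ remains doomed except with probability at most $\epsilon$ over the verifier's next challenge $r$; and (iii) the verifier rejects every complete doomed transcript. (RBR knowledge soundness is defined analogously, with an efficient witness extractor tied to the state function.)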
To the best of our knowledge, ours is the first formal analysis of the Fiat-Shamir security of FRI and widely deployed protocols that invoke it.
Locally Decodable Codes (LDCs) are error-correcting codes $C \colon \Sigma^n \rightarrow \Sigma^m$ with super-fast decoding algorithms. They are important mathematical objects in many areas of theoretical computer science, yet the best constructions so far have codeword length $m$ that is super-polynomial in $n$, for codes with constant query complexity and constant alphabet size. In a very surprising result, Ben-Sasson et al. showed how to construct a relaxed version of LDCs (RLDCs) with constant query complexity and almost linear codeword length over the binary alphabet, and used them to obtain significantly improved constructions of Probabilistically Checkable Proofs.

In this work, we study RLDCs in the standard Hamming-error setting, and introduce their variants in the insertion and deletion (Insdel) error setting. Insdel LDCs were first studied by Ostrovsky and Paskin-Cherniavsky, and are further motivated by recent advances in DNA random access bio-technologies, in which the goal is to retrieve individual files from a DNA storage database.

Our first result is an exponential lower bound on the length of Hamming RLDCs making 2 queries over the binary alphabet. This answers a question explicitly raised by Gur and Lachish, and exhibits a "phase-transition"-type behavior on the codeword length for constant-query Hamming RLDCs. We further define two variants of RLDCs in the Insdel-error setting, a weak and a strong version. On the one hand, we construct weak Insdel RLDCs with parameters matching those of the Hamming variants. On the other hand, we prove exponential lower bounds for strong Insdel RLDCs. These results demonstrate that, while these variants are equivalent in the Hamming setting, they are significantly different in the Insdel setting. Our results also prove a strict separation between Hamming RLDCs and Insdel RLDCs.
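For context, the relaxed decoding guarantee can be summarized as follows (one common formalization; exact parameters vary): a $q$-query RLDC $C \colon \Sigma^n \rightarrow \Sigma^m$ has a decoder that reads at most $q$ symbols of a received word $w$ and, on input an index $i$, (i) outputs $x_i$ with probability $1$ whenever $w = C(x)$, and (ii) outputs either $x_i$ or an abort symbol $\perp$ with probability at least $2/3$ whenever $w$ is $\delta$-close to some codeword $C(x)$. Allowing the decoder to abort on corrupted words is the relaxation that enables constant query complexity with almost linear codeword length.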
We revisit computationally relaxed locally decodable codes (crLDCs) (Blocki et al., Trans. Inf. Theory '21) and give two new constructions. Our first construction is a Hamming crLDC that is conceptually simpler than prior constructions, leveraging digital signature schemes and an appropriately chosen Hamming code. Our second construction is an extension of our Hamming crLDC to handle insertion-deletion (InsDel) errors, yielding an InsDel crLDC. This extension crucially relies on the noisy binary search techniques of Block et al. (FSTTCS '20) to handle InsDel errors. Both crLDC constructions have binary codeword alphabets, are resilient to a constant fraction of Hamming and InsDel errors, respectively, and under suitable parameter choices have poly-logarithmic locality and encoding length linear in the message length and polynomial in the security parameter. These parameters compare favorably to prior constructions in the poly-logarithmic locality regime.
We formally introduce, define, and construct memory-hard puzzles. Intuitively, for a difficulty parameter $t$, a cryptographic puzzle is memory-hard if any parallel random access machine (PRAM) algorithm with "small" cumulative memory complexity ($\ll t^2$) cannot solve the puzzle; moreover, such puzzles should be both "easy" to generate and be solvable by a sequential RAM algorithm running in time $t$. Our definitions and constructions of memory-hard puzzles are in the standard model, assuming the existence of indistinguishability obfuscation ($i\mathcal{O}$) and one-way functions (OWFs), and additionally assuming the existence of a memory-hard language. Intuitively, a language is memory-hard if it is undecidable by any PRAM algorithm with "small" cumulative memory complexity, while a sequential RAM algorithm running in time $t$ can decide the language. Our definitions and constructions of memory-hard objects are the first such definitions and constructions in the standard model without relying on idealized assumptions (such as random oracles).
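Here, cumulative memory complexity is the cost measure of Alwen and Serbinenko (STOC 2015): for an execution of a (P)RAM algorithm $A$ that takes $T$ steps and has memory configuration $\sigma_i$ at step $i$, $\mathrm{CMC}(A) = \sum_{i=1}^{T} |\sigma_i|$; that is, memory usage is summed over time rather than measured at its peak.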
We give two applications which highlight the utility of memory-hard puzzles. For our first application, we give a construction of a (one-time) memory-hard function (MHF) in the standard model, using memory-hard puzzles and additionally assuming $i\mathcal{O}$ and OWFs. For our second application, we show any cryptographic puzzle (e.g., memory-hard, time-lock) can be used to construct resource-bounded locally decodable codes (LDCs) in the standard model, answering an open question of Blocki, Kulkarni, and Zhou (ITC 2020). Resource-bounded LDCs achieve better rate and locality than their classical counterparts under the assumption that the adversarial channel is resource bounded (e.g., a low-depth circuit). Prior constructions of MHFs and resource-bounded LDCs required idealized primitives like random oracles.
We construct public-coin time- and space-efficient zero-knowledge arguments for $\mathbf{NP}$. For every time $T$ and space $S$ non-deterministic RAM computation, the prover runs in time $T \cdot \mathrm{polylog}(T)$ and space $S \cdot \mathrm{polylog}(T)$, and the verifier runs in time $n \cdot \mathrm{polylog}(T)$, where $n$ is the input length. Our protocol relies on hidden order groups, which can be instantiated with a trusted setup from the hardness of factoring (products of safe primes), or without a trusted setup using class groups. The argument system can heuristically be made non-interactive using the Fiat-Shamir transform.
Our proof builds on DARK (Bünz et al., Eurocrypt 2020), a recent succinct and efficiently verifiable polynomial commitment scheme. We show how to implement a variant of DARK in a time- and space-efficient way. Along the way we: (1) identify a significant gap in the proof of security of DARK; (2) give a non-trivial modification of the DARK scheme that overcomes this gap, with the modified version relying on a significantly weaker cryptographic assumption; and (3) prove the security of the modified version in a significantly stronger adversarial model.
In proving these results, we develop general-purpose techniques for working with (hidden order) groups, which may be of independent interest.
$P_4$-free graphs--also known as cographs, complement-reducible graphs, or hereditary Dacey graphs--have been well studied in graph theory. Motivated by computer science and information theory applications, our work encodes (flat) joint probability distributions and Boolean functions as bipartite graphs and studies bipartite $P_4$-free graphs. For these applications, the graph properties of edge partitioning and covering a bipartite graph using the minimum number of these graphs are particularly relevant. Previously, such graph properties have appeared in leakage-resilient cryptography and (variants of) coloring problems.
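For intuition, recall that a graph is $P_4$-free if it contains no induced path on four vertices; a standard fact is that every connected bipartite $P_4$-free graph is a complete bipartite graph, so a bipartite graph is $P_4$-free exactly when it is a disjoint union of bicliques. Partitioning or covering the edges of a bipartite graph by $P_4$-free graphs therefore means using graphs that are disjoint unions of bicliques.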
Interestingly, our covering problem is closely related to the well-studied problem of product/Prague dimension of loopless undirected graphs, which allows us to employ algebraic lower-bounding techniques for the product/Prague dimension. We prove that computing these numbers is $\mathbf{NP}$-complete, even for bipartite graphs. We establish a connection to the (unsolved) Zarankiewicz problem to show that there are bipartite graphs with size-$N$ partite sets such that these numbers are at least $\varepsilon \cdot N^{1-2\varepsilon}$, for $\varepsilon \in \{1/3, 1/4, 1/5,\dotsc\}$. Finally, we accurately estimate these numbers for bipartite graphs encoding well-studied Boolean functions from circuit complexity, such as set intersection, set disjointness, and inequality.
For applications in information theory and communication & cryptographic complexity, we consider a system where a setup samples from a (flat) joint distribution and gives the participants, Alice and Bob, their portion from this joint sample. Alice and Bob's objective is to non-interactively establish a shared key and extract the left-over entropy from their portion of the samples as independent private randomness. A genie, who observes the joint sample, provides appropriate assistance to help Alice and Bob with their objective. Lower bounds on the minimum size of the genie's assistance translate into communication and cryptographic lower bounds. We show that (the $\log_2$ of) the $P_4$-free partition number of a graph encoding the joint distribution that the setup uses is equivalent to the size of the genie's assistance. Consequently, the joint distributions corresponding to the bipartite graphs constructed above with high $P_4$-free partition numbers require more assistance from the genie.
As a representative application in non-deterministic communication complexity, we study the communication complexity of nondeterministic protocols augmented by access to the equality oracle at the output. We show that (the $\log_2$ of) the $P_4$-free cover number of the bipartite graph encoding a Boolean function $f$ is equivalent to the minimum size of the nondeterministic input required by the parties (referred to as the communication complexity of $f$ in this model). Consequently, the functions corresponding to the bipartite graphs with high $P_4$-free cover numbers have high communication complexity. Furthermore, there are functions with communication complexity close to the naïve protocol where the nondeterministic input reveals a party's input. Finally, the access to the equality oracle reduces the communication complexity of computing set disjointness by a constant factor in contrast to the model where parties do not have access to the equality oracle. To compute the inequality function, we show an exponential reduction in the communication complexity, and this bound is optimal. On the other hand, access to the equality oracle is (nearly) useless for computing set intersection.
We construct locally decodable codes (LDCs) to correct insertion-deletion errors in the setting where the sender and receiver share a secret key or where the channel is resource-bounded. Our constructions rely on a so-called "Hamming-to-InsDel" compiler (Ostrovsky and Paskin-Cherniavsky, ITS '15 & Block et al., FSTTCS '20), which compiles any locally decodable Hamming code into a locally decodable code resilient to insertion-deletion (InsDel) errors. While the compilers were designed for the classical coding setting, we show that the compilers still work in a secret key or resource-bounded setting. Applying our results to the private key Hamming LDC of Ostrovsky, Pandey, and Sahai (ICALP '07), we obtain a private key InsDel LDC with constant rate and polylogarithmic locality. Applying our results to the construction of Blocki, Kulkarni, and Zhou (ITC '20), we obtain similar results for resource-bounded channels; i.e., a channel where computation is constrained by resources such as space or time.
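At a high level, such compilers lay out the Hamming codeword in index-tagged blocks separated by buffer runs, so that a local decoder can locate the block it needs via noisy binary search even after insertions and deletions. The sketch below is only a caricature of this layout, with hypothetical block tagging and buffer parameters; the actual compiler additionally protects each tagged block with a constant-rate insdel inner code and sets all parameters according to the analysis.

    def to_insdel_layout(hamming_codeword, block_size, buffer_len, index_width):
        # Caricature of a "Hamming-to-InsDel" layout: tag each block with its
        # index, then separate consecutive blocks by runs of zeros (buffers).
        out = []
        num_blocks = (len(hamming_codeword) + block_size - 1) // block_size
        for b in range(num_blocks):
            block = hamming_codeword[b * block_size:(b + 1) * block_size]
            tag = [int(bit) for bit in format(b, f"0{index_width}b")]
            out += tag + block + [0] * buffer_len
        return out

    layout = to_insdel_layout([1, 0, 1, 1, 0, 1], block_size=2,
                              buffer_len=3, index_width=4)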
Recent efforts in coding theory have focused on building codes for insertions and deletions, called insdel codes, with optimal trade-offs between their redundancy and their error-correction capabilities, as well as efficient encoding and decoding algorithms.
In many applications, polynomial running time may still be prohibitively expensive, which has motivated the study of codes with super-efficient decoding algorithms. These efforts have led to the well-studied notions of Locally Decodable Codes (LDCs) and Locally Correctable Codes (LCCs). Inspired by these notions, Ostrovsky and Paskin-Cherniavsky (Information Theoretic Security, 2015) generalized Hamming LDCs to insertions and deletions. To the best of our knowledge, these are the only known results that study the analogues of Hamming LDCs in channels performing insertions and deletions.
Here we continue the study of insdel codes that admit local algorithms. Specifically, we reprove the results of Ostrovsky and Paskin-Cherniavsky for insdel LDCs using a different set of techniques. We also observe that the techniques extend to constructions of LCCs; that is, we obtain insdel LDCs and LCCs from their Hamming LDC and LCC analogues, respectively. The rate and error-correction capability blow up only by a constant factor, while the query complexity blows up by a polylogarithmic factor in the block length. Since insdel locally decodable/correctable codes are scarcely studied in the literature, we believe our results and techniques may lead to further research. In particular, we conjecture that constant-query insdel LDCs/LCCs do not exist.
Zero-knowledge protocols enable the truth of a mathematical statement to be certified by a verifier without revealing any other information. Such protocols are a cornerstone of modern cryptography and have recently become increasingly practical. However, a major bottleneck in deployment is the efficiency of the prover and, in particular, the space-efficiency of the protocol.
For every $\mathbf{NP}$ relation that can be verified in time $T$ and space $S$, we construct a public-coin zero-knowledge argument in which the prover runs in time $T \cdot \mathrm{polylog}(T)$ and space $S \cdot \mathrm{polylog}(T)$. Our proofs have length $\mathrm{polylog}(T)$ and the verifier runs in time $T \cdot \mathrm{polylog}(T)$ (and space $\mathrm{polylog}(T)$). Our scheme is in the random oracle model and relies on the hardness of discrete log in prime-order groups.
Our main technical contribution is a new space-efficient polynomial commitment scheme for multi-linear polynomials. Recall that in such a scheme, a sender commits to a given multi-linear polynomial $P:\mathbb{F}^n \rightarrow \mathbb{F}$ so that later on it can prove to a receiver statements of the form "$P(x)=y$". In our scheme, which builds on the commitment schemes of Bootle et al. (Eurocrypt 2016) and Bünz et al. (S&P 2018), we assume that the sender is given multi-pass streaming access to the evaluations of $P$ on the Boolean hypercube, and we show how to implement both the sender and receiver in time roughly $2^n$ and space roughly $n$, with communication complexity roughly $n$.
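To see why small space is plausible here, consider just evaluating the committed polynomial from a stream: given one pass over the $2^n$ evaluations of $P$ on $\{0,1\}^n$ in lexicographic order, $P(x)$ can be computed at any point $x \in \mathbb{F}^n$ while storing only $O(n)$ field elements. The sketch below is illustrative only (the actual scheme must also produce commitments and evaluation proofs in small space):

    def eval_mle_streaming(stream, x, p):
        # `stream` yields P(b) for b in {0,1}^n in lexicographic order;
        # `x` is the evaluation point, a list of n field elements mod p.
        n = len(x)
        total = 0
        for idx, pb in enumerate(stream):
            # chi_b(x) = prod_i (x_i if b_i = 1 else 1 - x_i), the multilinear
            # Lagrange basis polynomial for the hypercube point b.
            chi = 1
            for i in range(n):
                bit = (idx >> (n - 1 - i)) & 1
                chi = chi * (x[i] if bit else 1 - x[i]) % p
            total = (total + pb * chi) % p
        return total

    # Example with n = 2: streaming P(00), P(01), P(10), P(11) recovers P(0,1).
    assert eval_mle_streaming(iter([5, 7, 2, 9]), x=[0, 1], p=101) == 7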
Secure multi-party computation (MPC) allows mutually distrusting parties to compute securely over their private data. The hardness of MPC, essentially, lies in performing secure multiplications over suitable algebras. Parties use diverse cryptographic resources, like computational hardness assumptions or physical resources, to securely compute these multiplications.
There are several cryptographic resources that help securely compute one multiplication over a large finite field, say $\mathbb{GF}[2^n]$, with linear communication complexity; for example, the computational hardness assumption that noisy Reed-Solomon codewords are pseudorandom. However, it is not known whether we can securely compute, say, a linear number of AND-gates from such resources, i.e., a linear number of multiplications over the base field $\mathbb{GF}[2]$. Before our work, we could only perform $o(n)$ secure AND-evaluations. This example highlights the general inefficiency of multiplying over the base field using one multiplication over the extension field. Our objective is to remove this hurdle and enable the secure computation of Boolean circuits while incurring a constant communication overhead based on more diverse cryptographic resources.
Technically, we construct a perfectly secure protocol that realizes a linear number of multiplication gates over the base field using one multiplication gate over a degree-$n$ extension field. This construction relies on the toolkit provided by algebraic function fields.
Using this construction, we obtain the following results. If we can perform one multiplication over $\mathbb{GF}[2^n]$ with linear communication using a particular cryptographic resource, then we can also evaluate linear-size Boolean circuits with linear communication using the same cryptographic resource. In particular, we provide the first construction that computes a linear number of oblivious transfers with linear communication complexity from computational hardness assumptions such as the pseudorandomness of noisy Reed-Solomon codewords, or arithmetic analogues of LPN-style assumptions. Next, we highlight the potential of our result for other applications to MPC by constructing the first correlation extractor that has $1/2$ resilience and produces a linear number of oblivious transfers.
Most secure computation protocols can be effortlessly adapted to offload a significant fraction of their computationally and cryptographically expensive components to an offline phase so that the parties can run a fast online phase and perform their intended computation securely. During this offline phase, parties generate private shares of a sample generated from a particular joint distribution, referred to as the correlation. These shares, however, are susceptible to leakage attacks by adversarial parties, which can compromise the security of the entire secure computation protocol. The objective, therefore, is to preserve the security of the honest party despite the leakage performed by the adversary on her share.
Prior solutions, starting with $n$-bit leaky shares, either used 4 messages or enabled the secure computation of only sub-linear size circuits. Our work presents the first 2-message secure computation protocol for 2-party functionalities that have $\Theta(n)$ circuit size despite $\Theta(n)$ bits of leakage, a qualitatively optimal result. We compose a suitable 2-message secure computation protocol in parallel with our new 2-message correlation extractor. Correlation extractors, introduced by Ishai, Kushilevitz, Ostrovsky, and Sahai (FOCS 2009) as a natural generalization of privacy amplification and randomness extraction, recover "fresh" correlations from the leaky ones, which are subsequently used by other cryptographic protocols. We construct the first 2-message correlation extractor that produces $\Theta(n)$-bit fresh correlations even after $\Theta(n)$-bit leakage.
Our principal technical contribution, which is of potential independent interest, is the construction of a family of multiplication-friendly linear secret sharing schemes that is simultaneously a family of small-bias distributions. We construct this family by randomly "twisting then permuting" appropriate Algebraic Geometry codes over constant-size fields.
Correlated private randomness, or correlation in short, is a fundamental cryptographic resource that helps parties compute securely over their private data. An offline preprocessing step, which is independent of the eventual secure computation, generates correlated secret shares for the parties and the parties use these shares during the final secure computation step. However, these secret shares are vulnerable to leakage attacks.
Inspired by the quintessential problem of privacy amplification, Ishai, Kushilevitz, Ostrovsky, and Sahai (FOCS 2009) introduced the concept of correlation extractors. Correlation extractors are interactive protocols that take leaky correlations as input and produce secure independent copies of oblivious transfer (OT), the building block of secure computation protocols. Although their initial feasibility result is resilient to linear leakage and produces a linear number of "fresh" OTs, the constants involved are minuscule. The output of this correlation extractor can be used to perform only small secure computation tasks, because the number of OTs needed to evaluate a functionality securely is roughly proportional to its circuit size. Recently, Gupta, Ishai, Maji, and Sahai (CRYPTO 2015) constructed an extractor that is resilient to $1/4$ fractional leakage and has near-linear production rate. They also constructed an extractor from a large correlation that has $1/2$ fractional resilience but produces only one OT, which does not suffice to compute even constant-size functionalities securely.
In this paper, we show the existence of a correlation that produces $n$-bit shares for the parties and allows the extraction of $n^{1-o(1)}$ secure OTs, despite $n/2$ bits of leakage. The key technical idea is to embed several multiplications over a field into one multiplication over an extension field. The packing efficiency of this embedding directly translates into the production rate of our correlation extractor. Our work establishes a connection between this problem and a rich vein of research in additive combinatorics on constructing dense sets of integers that are free of arithmetic progressions, a.k.a. 3-free sets. We introduce a new combinatorial problem that suffices for our multiplication embedding and produces concrete embeddings that beat the efficiency of the embeddings inspired by the reduction to 3-free sets.
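To illustrate the embedding, here is the classical reduction to 3-free sets that our construction improves upon (a textbook-style sketch, not our new embedding): let $S = \{s_1, \dotsc, s_k\}$ be 3-free, i.e., containing no nontrivial three-term arithmetic progression, and encode the inputs as polynomials $a(x) = \sum_i a_i x^{s_i}$ and $b(x) = \sum_i b_i x^{s_i}$ over the base field. In the product $c(x) = a(x) \cdot b(x)$, the coefficient of $x^{2s_i}$ is $\sum_{s_j + s_\ell = 2s_i} a_j b_\ell$, and since $s_j + s_\ell = 2s_i$ means $(s_j, s_i, s_\ell)$ is an arithmetic progression, 3-freeness forces $s_j = s_\ell = s_i$; hence that coefficient is exactly $a_i b_i$. Because a single multiplication in $\mathbb{GF}[2^n]$ computes the product of two polynomials of degree less than $n/2$ without modular reduction, one extension-field multiplication recovers all $k$ base-field products at once.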
Finally, the paper introduces a graph-theoretic measure to upper bound the leakage resilience of correlations, namely the simple partition number. This measure is similar in spirit to graph covering problems like the biclique partition number. If the simple partition number of a correlation is $2^\lambda$, then it is impossible to extract even one OT if parties can perform $\lambda$ bits of leakage. We compute tight estimates of the simple partition number of several correlations that are relevant to this paper and, in particular, show that our extractor and the extractor for the large correlation by Gupta et al. have optimal leakage resilience and (qualitatively) optimal simulation error.
Interactive oracle proofs (IOPs) (Ben-Sasson et al., TCC 2016; Reingold et al., SICOMP 2021) have emerged as a powerful model for proof systems which generalizes both Interactive Proofs (IPs) and Probabilistically Checkable Proofs (PCPs). While IOPs are not any more powerful than PCPs from a complexity theory perspective, their potential to create succinct proofs and arguments has been demonstrated by many recent constructions achieving better parameters such as total proof length, alphabet size, and query complexity. In this work, we establish new results on the relationship between various notions of soundness for IOPs. First, we formally generalize the notion of round-by-round soundness (Canetti et al., STOC 2019) and round-by-round knowledge soundness (Chiesa et al., TCC 2019). Given this generalization, we then examine its relationship to the notions of generalized special soundness (Attema et al., CRYPTO 2021) and generalized special unsoundness (Attema et al., TCC 2022). We show that:
Interactive arguments, and their (succinct) non-interactive and zero-knowledge counterparts, have seen growing deployment in real-world applications in recent years. Unfortunately, for large and complex statements, concrete proof generation costs can still be quite expensive. While recent work has sought to solve this problem by outsourcing proof computation to a group of workers in a privacy-preserving manner, current solutions still require each worker to perform work of roughly the same order as a single-prover solution.
We introduce the Honest Majority Multi-Prover (HMMP) model for interactive arguments. In these arguments, we distribute prover computation among $M$ collaborating, but distrusting, provers. All provers receive the same inputs and have no private inputs, and we allow any $t < M/2$ of the provers to be maliciously corrupted.
We show that: (1) any interactive argument where the prover computation is suitably divisible into $M$ sub-computations can be transformed into an interactive argument in the HMMP model; and (2) arguments that are obtained via compiling polynomial interactive oracle proofs with polynomial commitment schemes admit HMMP model constructions that experience a (roughly) $1/M$ speedup over a single-prover solution. The transformation of (1) preserves computational (knowledge) soundness, zero-knowledge, and can be made non-interactive via the Fiat-Shamir transformation. The constructions of (2) showcase that there are efficiency gains in proof distribution when privacy is not a concern.
Last updated on 9/12/2024, 8:53:21 AM CDT.
This site uses NGINX and is built using Handlebars, Sass, and Skeleton Sass.
© 2021 Alexander R. Block