Below is a "wordle" generated using the key words in the titles of my papers (July 2014). See also my pages on DBLP and Google Scholar for a list of publications and for BibTeX data.
Manuscripts
- Predictable Arguments of Knowledge (with Antonio Faonio and Jesper Buus Nielsen).
+[abstract] [full version]
We initiate a formal investigation of the power of predictability for argument-of-knowledge systems for NP. Specifically, we consider private-coin argument systems where the answer of the prover can be predicted, given the private randomness of the verifier; we call such protocols Predictable Arguments of Knowledge (PAoK).
Our study encompasses a full characterization of PAoK, showing that such arguments can be made extremely laconic, with the prover sending a single bit, and can be assumed, without loss of generality, to consist of only one round (i.e., two messages) of communication.
We additionally explore PAoK satisfying further properties (including zero-knowledge and the possibility of re-using the same challenge across multiple executions with the prover), present several constructions of PAoK relying on different cryptographic tools, and discuss applications to cryptography.
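As a minimal illustration of the one-round, single-bit interaction pattern described above (not the paper's construction), the sketch below shows the syntax of a predictable argument: the verifier's challenge algorithm also outputs the predicted answer, and verification is a bit comparison. The toy relation and all function names are ours; the sketch deliberately has no soundness and only illustrates predictability.

```python
import hashlib
import secrets

def challenge(x: bytes):
    """Verifier: sample private coins and predict the (single-bit) answer.
    The prediction here depends only on the verifier's coins and the statement x."""
    coins = secrets.token_bytes(16)
    predicted = hashlib.sha256(coins + x).digest()[0] & 1
    return coins, predicted              # coins are sent to the prover; predicted stays private

def respond(x: bytes, w: bytes, coins: bytes) -> int:
    """Prover: single-bit answer. (The witness w is unused in this toy, so it offers
    no knowledge soundness; a real PAoK makes this bit hard to produce without w.)"""
    return hashlib.sha256(coins + x).digest()[0] & 1

def verify(predicted: int, answer: int) -> bool:
    return predicted == answer

# One-round execution: completeness holds by construction.
x = hashlib.sha256(b"witness").digest()
c, pred = challenge(x)
assert verify(pred, respond(x, b"witness", c))
```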
Conference Papers
2016
- Chosen-Ciphertext Security from Subset Sum (with Sebastian Faust and Daniel Masny). Proceedings of PKC 2016, 35-46 LNCS 9614.
+[abstract] [full version]
We construct a public-key encryption (PKE) scheme whose security is polynomial-time equivalent to the hardness of the Subset Sum problem. Our scheme achieves the standard notion of indistinguishability against chosen-ciphertext attacks (IND-CCA) and can be used to encrypt messages of arbitrary polynomial length, improving upon a previous construction by Lyubashevsky, Palacio, and Segev (TCC 2010) which achieved only the weaker notion of semantic security (IND-CPA) and whose concrete security decreases with the length of the message being encrypted.
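As a point of reference, a random modular Subset Sum instance can be sampled as follows; the parameters below are purely illustrative and are not the ones used in the paper.

```python
import secrets

def subset_sum_instance(n: int = 256, bits: int = 512):
    """Sample a random modular Subset Sum instance: weights a_1..a_n, a secret
    0/1 vector s, and the target t = <a, s> mod q. Recovering s from (a, q, t)
    is the Subset Sum problem."""
    q = 2 ** bits
    a = [secrets.randbelow(q) for _ in range(n)]
    s = [secrets.randbelow(2) for _ in range(n)]
    t = sum(ai * si for ai, si in zip(a, s)) % q
    return a, q, t, s

a, q, t, s = subset_sum_instance()
assert sum(ai * si for ai, si in zip(a, s)) % q == t
```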
At the core of our construction is a trapdoor technique which originates in the work of Micciancio and Peikert (Eurocrypt 2012).
- Non-Malleable Encryption: Simpler, Shorter, Stronger (with Sandro Coretti, Yevgeniy Dodis and Björn Tackmann). Proceedings of TCC 2016-A, 306-335 LNCS 9562.
+[abstract] [full version]
In a seminal paper, Dolev et al. (STOC'91) introduced the notion of non-malleable encryption (NM-CPA). This notion is very intriguing since it suffices for many applications of chosen-ciphertext secure encryption (IND-CCA), and, yet, can be generically built from semantically secure (IND-CPA) encryption, as was shown in the seminal works by Pass et al. (CRYPTO'06) and by Choi et al. (TCC'08), the latter of which provided a black-box construction. In this paper we investigate three questions related to NM-CPA security:
- Can the rate of the construction by Choi et al. of NM-CPA from IND-CPA be improved?
- Is it possible to achieve multi-bit NM-CPA security more efficiently from a single-bit NM-CPA scheme than from IND-CPA?
- Is there a notion stronger than NM-CPA that has natural applications and can be achieved from IND-CPA security?
We answer all three questions in the positive. First, we improve the rate in the construction of Choi et al. by a factor O(k), where k is the security parameter. Still, encrypting a message of size O(k) would require ciphertext and keys of size O(k^2) times that of the IND-CPA scheme, even in our improved scheme. Therefore, we show a more efficient domain extension technique for building a k-bit NM-CPA scheme from a single-bit NM-CPA scheme with keys and ciphertext of size O(k) times that of the NM-CPA one-bit scheme. To achieve our goal, we define and construct a novel type of continuous non-malleable code (NMC), called secret-state NMC, as we show that standard continuous NMCs are not enough for the natural "encode-then-encrypt-bit-by-bit" approach to work.
Finally, we introduce a new security notion for public-key encryption (PKE) that we dub non-malleability under (chosen-ciphertext) self-destruct attacks (NM-SDA). After showing that NM-SDA is a strict strengthening of NM-CPA and allows for more applications, we nevertheless show that both of our results---(faster) construction from IND-CPA and domain extension from one-bit scheme---also hold for our stronger NM-SDA security. In particular, the notions of IND-CPA, NM-CPA, and NM-SDA security are all equivalent, lying (plausibly, strictly?) below IND-CCA security.
2015
- (De-)Constructing TLS 1.3 (with Markulf Kohlweiss, Ueli Maurer, Cristina Onete and Björn Tackmann). Proceedings of Indocrypt 2015, 85-102 LNCS 9462.
+[abstract] [full version]
SSL/TLS is one of the most widely deployed cryptographic protocols on the Internet. It is used to protect the confidentiality and integrity of transmitted data in various client-server applications. The currently specified version is TLS 1.2, and its security has been analyzed extensively in the cryptographic literature. The IETF working group is actively developing a new version, TLS 1.3, which is designed to address several flaws inherent to previous versions.
In this paper, we analyze the security of a slightly modified version of the current TLS 1.3 draft. (We do not encrypt the server's certificate.) Our security analysis is performed in the constructive cryptography framework. This ensures that the resulting security guarantees are composable and can readily be used in subsequent protocol steps, such as password-based user authentication over a TLS-based communication channel in which only the server is authenticated. Most steps of our proof hold in the standard model, with the sole exception that the key derivation function HKDF is used in a way that has a proof only in the random-oracle model. Beyond the technical results on TLS 1.3, this work also exemplifies a novel approach towards proving the security of complex protocols by a modular, step-by-step decomposition, in which smaller sub-steps are proved in isolation and then the security of the protocol follows by the composition theorem.
- Secure Data Sharing and Processing in Heterogeneous Clouds (with Bojan Suzic, Andreas Reiter, Florian Reimair, and Baldur Kubo). Proceedings of the 1st International Conference on Cloud Forward: From Distributed to Complete Computing, 116-126.
+[abstract] [link]
The extensive cloud adoption among the European Public Sector Players empowered them to own and operate a range of cloud infrastructures. These deployments vary both in the size and capabilities, as well as in the range of employed technologies and processes. The public sector, however, lacks the necessary technology to enable effective, interoperable and secure integration of a multitude of its computing clouds and services. In this work we focus on the federation of private clouds and the approaches that enable secure data sharing and processing among the collaborating infrastructures and services of public entities.
We investigate the aspects of access control, data and security policy languages, as well as cryptographic approaches that enable fine-grained security and data processing in semi-trusted environments. We identify the main challenges and frame the future work that serves as an enabler of interoperability among heterogeneous infrastructures and services. Our goal is to enable both security and legal conformance, as well as to facilitate transparency, privacy and effectiveness of private cloud federations for the public sector needs.
- Subversion-Resilient Signature Schemes (with Giuseppe Ateniese and Bernardo Magri). Proceedings of ACM CCS 2015, 364-375.
+[abstract] [full version]
We provide a formal treatment of security of digital signatures against subversion attacks (SAs). Our model of subversion generalizes previous work in several directions, and is inspired by the proliferation of software attacks (e.g., malware and buffer overflow attacks), and by the recent revelations of Edward Snowden about intelligence agencies trying to surreptitiously sabotage cryptographic algorithms. The main security requirement we put forward demands that a signature scheme should remain unforgeable even in the presence of an attacker applying SAs (within a certain class of allowed attacks) in a fully-adaptive and continuous fashion. Previous notions---e.g., the notion of security against algorithm-substitution attacks introduced by Bellare et al. (CRYPTO '14) for symmetric encryption---were non-adaptive and non-continuous.
In this vein, we show both positive and negative results for the goal of constructing subversion-resilient signature schemes.
- Negative results. As our main negative result, we show that a broad class of randomized signature schemes is unavoidably insecure against SAs, even if using just a single bit of randomness. This improves upon earlier work that was only able to attack schemes with a larger randomness space. When designing our new attack we consider undetectability as an explicit adversarial goal, meaning that the end-users (even the ones knowing the signing key) should not be able to detect that the signature scheme was subverted.
- Positive results. We complement the above negative results by showing that signature schemes with *unique* signatures are subversion-resilient against all attacks that meet a basic undetectability requirement. A similar result was shown by Bellare et al. for symmetric encryption, who proved the necessity to rely on *stateful* schemes; in contrast, unique signatures are *stateless*, and in fact they are among the fastest and most established digital signatures available.
We finally show that it is possible to devise signature schemes secure against arbitrary tampering with the computation, by making use of an un-tamperable cryptographic reverse firewall (Mironov and Stephens-Davidowitz, EUROCRYPT '15), i.e., an algorithm that "sanitizes" any signature given as input (using only public information). The firewall we design allows one to successfully protect so-called re-randomizable signature schemes (which include unique signatures as a special case).
As an additional contribution, we extend our model to consider multiple users and show implications and separations among the various notions we introduced. While our study is mainly theoretical, due to its strong practical motivation, we believe that our results have important implications in practice and might influence the way digital signature schemes are selected or adopted in standards and protocols.
- Mind Your Coins: Fully Leakage-Resilient Signatures with Graceful Degradation (with Antonio Faonio and Jesper Buus Nielsen). Proceedings of ICALP 2015, 456-468 LNCS 9134.
+[abstract] [full version]
We construct new leakage-resilient signature schemes. Our schemes remain unforgeable against an adversary leaking arbitrary (yet bounded) information on the entire state of the signer (sometimes known as fully leakage resilience).
The main feature of our constructions is that they offer a graceful degradation of security in situations where standard existential unforgeability is impossible. This property was recently put forward by Nielsen et al. (PKC 2014) to deal with settings in which the secret key is much larger than the size of a signature. One remarkable such case is the so-called Bounded Retrieval Model (BRM), where one intentionally inflates the size of the secret key while keeping constant the signature size and the computational complexity of the scheme.
Our main constructions have leakage rate 1-o(1), and are proven secure in the standard model. We additionally give a construction in the BRM, relying on a random oracle. All of our schemes are described in terms of generic building blocks, but also admit efficient instantiations under fairly standard number-theoretic assumptions. Finally, we explain how to extend some of our schemes to the setting of noisy leakage, where the only restriction on the leakage functions is that the output does not decrease the min-entropy of the secret key by too much.
- The Chaining Lemma and its Application (with Ivan Damgård, Sebastian Faust and Pratyay Mukherjee). Proceedings of ICITS 2015, 181-196 LNCS 9063.
+[abstract] [full version]
We present a new information theoretic result which we call the Chaining Lemma. It considers a so-called "chain" of random variables, defined by a source distribution X0 with high min-entropy and a number (say, t in total) of arbitrary functions (T1, ..., Tt) which are applied in succession to that source to generate the chain X0 → X1 → ... → Xt such that Xi = Ti(Xi-1). Intuitively, the Chaining Lemma guarantees that, if the chain is not too long, then either (i) the entire chain is "highly random", in that every variable has high min-entropy; or (ii) it is possible to find a point j (1 ≤ j ≤ t) in the chain such that, conditioned on the end of the chain, the preceding part remains highly random.
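In symbols, the chain and the dichotomy stated above read roughly as follows; this is only an informal restatement of the abstract, with all entropy thresholds and parameters omitted (it assumes the amsmath package).

```latex
\begin{gather*}
X_0 \xrightarrow{T_1} X_1 \xrightarrow{T_2} \cdots \xrightarrow{T_t} X_t,
\qquad X_i = T_i(X_{i-1}),\\[2pt]
\text{either}\quad \mathrm{H}_\infty(X_i) \text{ is high for every } 0 \le i \le t,
\qquad\text{or}\qquad
\exists\, j \in \{1,\dots,t\}:\ (X_0,\dots,X_{j-1}) \text{ remains highly random given } (X_j,\dots,X_t).
\end{gather*}
```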
We believe this is an interesting information-theoretic result which is intuitive but nevertheless requires rigorous case-analysis to prove, and which will find further applications in cryptography. We give an example of this, namely we show an application of the lemma to protect essentially any cryptographic scheme against memory-tampering attacks. We allow several tampering requests; the tampering functions can be arbitrary, but they must be chosen from a bounded-size set of functions that is fixed a priori.
- Entangled Encodings and Data Entanglement (Extended Abstract) (with Giuseppe Ateniese, Özgür Dagdelen and Ivan Damgård). Proceedings of AsiaCCS 2015 (Workshop on Security in Cloud Computing), 3-12.
+[abstract] [full version]
We introduce a new cryptographic tool that we dub entangled encoding scheme. An entangled encoding allows a set of users to encode their files into a single digital "clew" such that the following two properties are satisfied. (1) Privacy: The resulting encoding reveals no information about the files contained inside the clew; (2) All-or-nothing integrity (AONI): It is impossible to modify or delete any significant part of the encoding without affecting all files contained in the clew.
We provide a concrete instantiation of an entangled encoding scheme with unconditional security, based on polynomial interpolation over a finite field.
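A toy version of interpolation-based entanglement (illustrative only; it omits the randomization, privacy, and integrity analysis of the actual scheme): each file is a field element placed at a user-specific point, and the clew is the coefficient vector of the unique interpolating polynomial. Every coefficient depends on every file, so altering part of the clew perturbs all of them.

```python
# Toy entangled encoding via polynomial interpolation over GF(P). Purely illustrative.
P = 2**61 - 1   # a prime; a real instantiation uses a much larger field and adds randomness

def interpolate(points):
    """Return the coefficients (lowest degree first) of the polynomial through `points`."""
    n = len(points)
    coeffs = [0] * n
    for i, (xi, yi) in enumerate(points):
        basis, denom = [1], 1            # i-th Lagrange basis polynomial and its denominator
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            new = [0] * (len(basis) + 1)  # multiply basis by (X - xj)
            for k, c in enumerate(basis):
                new[k] = (new[k] - c * xj) % P
                new[k + 1] = (new[k + 1] + c) % P
            basis = new
            denom = (denom * (xi - xj)) % P
        scale = yi * pow(denom, -1, P) % P
        for k, c in enumerate(basis):
            coeffs[k] = (coeffs[k] + c * scale) % P
    return coeffs

def evaluate(coeffs, x):
    acc = 0
    for c in reversed(coeffs):            # Horner evaluation
        acc = (acc * x + c) % P
    return acc

files = {1: 111111, 2: 222222, 3: 333333}          # user point -> file (as a field element)
clew = interpolate(list(files.items()))            # the entangled encoding
assert all(evaluate(clew, x) == f for x, f in files.items())
```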
Finally, we show an appealing application of entangled encodings to the setting of secure cloud storage, where a set of users store their files at a potentially malicious cloud provider and want to ensure that their data remain safe and unblemished.
- From Single-Bit to Multi-Bit Public-Key Encryption via Non-Malleable Codes (with Sandro Coretti, Ueli Maurer and Björn Tackmann). Proceedings of TCC 2015, 532-560 LNCS 9014.
+[abstract] [full version]
One approach towards basing public-key encryption (PKE) schemes on weak and credible assumptions is to build "stronger" or more general schemes generically from "weaker" or more restricted ones. One particular line of work in this context was initiated by Myers and shelat (FOCS '09) and continued by Hohenberger, Lewko, and Waters (Eurocrypt '12), who provide constructions of multi-bit CCA-secure PKE from single-bit CCA-secure PKE.
It is well-known that encrypting each bit of a plaintext string independently is not chosen-ciphertext secure---the resulting scheme is malleable. This paper analyzes the conceptually simple approach of applying a suitable non-malleable code (Dziembowski et al., ICS '10) to the plaintext and subsequently encrypting the resulting codeword bit-by-bit. An attacker's ability to make multiple decryption queries requires that the underlying code be continuously non-malleable (Faust et al., TCC '14). This flavor of non-malleable codes can only be achieved if the decoder is allowed to "self-destruct" when it processes an invalid encoding. The resulting PKE scheme inherits this property and therefore only achieves a weaker variant of chosen-ciphertext security where the decryption becomes dysfunctional once the attacker submits an invalid ciphertext.
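The bit-by-bit approach just described can be summarized in a few lines. The sketch below is schematic: the non-malleable code and the single-bit PKE are left as abstract callables, and all names and interfaces are ours rather than the paper's.

```python
from typing import Callable, List, Optional

def encrypt_multibit(message_bits: List[int],
                     nmc_encode: Callable[[List[int]], List[int]],
                     pke_encrypt_bit: Callable[[int], bytes]) -> List[bytes]:
    """Encode the plaintext with a (continuously) non-malleable code, then
    encrypt the resulting codeword one bit at a time with a single-bit PKE."""
    codeword = nmc_encode(message_bits)
    return [pke_encrypt_bit(b) for b in codeword]

def decrypt_multibit(ciphertexts: List[bytes],
                     pke_decrypt_bit: Callable[[bytes], int],
                     nmc_decode: Callable[[List[int]], Optional[List[int]]],
                     state: dict) -> Optional[List[int]]:
    """Decrypt each bit and decode. On an invalid codeword the decoder
    'self-destructs': all future queries are rejected (the weaker CCA variant)."""
    if state.get("destroyed"):
        return None
    codeword = [pke_decrypt_bit(c) for c in ciphertexts]
    message = nmc_decode(codeword)
    if message is None:                  # invalid encoding detected
        state["destroyed"] = True        # self-destruct
        return None
    return message
```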
We first show that the above approach based on non-malleable codes indeed yields a solution to the problem of domain extension for public-key encryption where the decryption may self-destruct, provided that the underlying code is continuously non-malleable against a reduced form of bit-wise tampering. This statement is shown by combining a simple information-theoretic argument with the constructive cryptography perspective on PKE (Coretti et al., Asiacrypt '13). Then, we prove that the code of Dziembowski et al. is actually already continuously non-malleable against (full) bit-wise tampering; this constitutes the first information-theoretically secure continuously non-malleable code, a technical contribution that we believe is of independent interest. Compared to the previous approaches to PKE domain extension, our scheme is more efficient and intuitive, at the cost of not achieving full CCA security. Our result is also one of the first applications of non-malleable codes in a context other than memory tampering.
- A Tamper and Leakage Resilient von Neumann Architecture (with Sebastian Faust, Pratyay Mukherjee and Jesper Buus Nielsen). Proceedings of PKC 2015, 579-603 LNCS 9020.
+[abstract] [full version]
We present a universal framework for tamper and leakage resilient computation on a random access machine (RAM). The RAM has one CPU that accesses a storage, which we call the disk. The disk is subject to leakage and tampering. So is the bus connecting the CPU to the disk. We assume that the CPU is leakage and tamper-free. For a fixed value of the security parameter, the CPU has constant size. Therefore the code of the program to be executed is stored on the disk, i.e., we consider a von Neumann architecture. The most prominent consequence of this is that the code of the program executed will be subject to tampering.
We construct a compiler for this architecture which transforms any keyed primitive into a RAM program where the key is encoded and stored on the disk along with the program to evaluate the primitive on that key. Our compiler only assumes the existence of a so-called continuous non-malleable code, and it only needs black-box access to such a code. No further (cryptographic) assumptions are needed. This in particular means that given an information-theoretic code, the overall construction is information-theoretically secure.
Although it is required that the CPU is tamper and leakage proof, its design is independent of the actual primitive being computed and its internal storage is non-persistent, i.e., all secret registers are reset between invocations. Hence, our result can be interpreted as reducing the problem of shielding arbitrarily complex computations to protecting a single, simple yet universal component.
2014
- A Multi-Party Protocol for Privacy-Preserving Cooperative Linear Systems of Equations (with Özgür Dagdelen). Proceedings of BalkanCryptSec 2014, 161-172 LNCS 9024.
+[abstract]
The privacy-preserving cooperative linear system of equations (PPC-LSE) problem is an important scientific problem whose solutions find applications in many real-world scenarios, such as banking, manufacturing, and telecommunications. Roughly speaking, in PPC-LSE a set of parties want to jointly compute the solution to a linear system of equations without disclosing their own inputs. The linear system is built from the parties' inputs.
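Stripped of all privacy, the functionality computed in PPC-LSE is just the solution of a linear system assembled from the parties' inputs, as the toy example below shows (the actual protocol computes this without revealing the per-party blocks; the numbers are arbitrary).

```python
import numpy as np

# Each party holds a block of equations (A_i, b_i); together they define A x = b.
party_inputs = [
    (np.array([[2.0, 1.0, 0.0]]), np.array([5.0])),   # party 1's equation(s)
    (np.array([[0.0, 3.0, 1.0]]), np.array([10.0])),  # party 2
    (np.array([[1.0, 0.0, 4.0]]), np.array([13.0])),  # party 3
]

A = np.vstack([Ai for Ai, _ in party_inputs])
b = np.concatenate([bi for _, bi in party_inputs])
x = np.linalg.solve(A, b)   # the jointly computed output; here everything is in the clear
print(x)
```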
In this paper we design a novel protocol for PPC-LSE. Our protocol has simulation-based security in the semi-honest model, assuming that one of the participants is not willing to collude with the other parties. Prior to our work, the only known solutions to PPC-LSE were for the two-party case; the only other known protocol for the multi-party case was less efficient and proven secure in a weaker model.
- A Second Look at Fischlin's Transformation (with Özgür Dagdelen). Proceedings of Africacrypt 2014, 356-376 LNCS 8469.
+[abstract] [full version] [slides (by Oezi)]
Fischlin's transformation is an alternative to the standard Fiat-Shamir transform to turn a certain class of public key identification schemes into digital signatures (in the random oracle model).
We show that signatures obtained via Fischlin's transformation are existentially unforgeable even in case the adversary is allowed to get arbitrary (yet bounded) information on the entire state of the signer (including the signing key and the random coins used to generate signatures). A similar fact was already known for the Fiat-Shamir transform; however, Fischlin's transformation allows for a significantly higher leakage parameter than Fiat-Shamir.
Moreover, in contrast to signatures obtained via Fiat-Shamir, signatures obtained via Fischlin enjoy a tight reduction to the underlying hard problem. We use this observation to show (via simulations) that Fischlin's transformation, usually considered less efficient, outperforms the Fiat-Shamir transform in verification time for a reasonable choice of parameters. In terms of signing time, Fiat-Shamir is faster for equal signature sizes. Nonetheless, our experiments show that the signing time of Fischlin's transformation becomes, e.g., 22% of the one via Fiat-Shamir if one allows the signature size to be doubled.
- Efficient Non-Malleable Codes and Key-Derivation for Poly-Size Tampering Circuits (with Sebastian Faust, Pratyay Mukherjee and Daniel Wichs). Proceedings of IACR Eurocrypt 2014, 111-128 LNCS 8441.
+[abstract] [full version] [slides (by Pratyay)]
Non-malleable codes, defined by Dziembowski, Pietrzak and Wichs (ICS '10), provide roughly the following guarantee: if a codeword c encoding some message x is tampered to c' = f(c) such that c' ≠ c, then the tampered message x' contained in c' reveals no information about x. Non-malleable codes have applications to immunizing cryptosystems against tampering attacks and related-key attacks.
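The one-shot tampering experiment behind this definition is easy to state in code. The sketch below fixes our own interface for the code and returns the special value "same" when the tampering leaves the codeword unchanged (a standard convention; this is not code from the paper).

```python
from typing import Callable, Optional

def tamper_experiment(message: bytes,
                      encode: Callable[[bytes], bytes],
                      decode: Callable[[bytes], Optional[bytes]],
                      tamper: Callable[[bytes], bytes]):
    """Run the one-shot non-malleability experiment for a tampering function:
    encode, tamper, then decode. Non-malleability asks that the output is either
    'same' or (essentially) independent of the original message."""
    c = encode(message)
    c_tampered = tamper(c)
    if c_tampered == c:
        return "same"
    return decode(c_tampered)    # None models an invalid codeword (decoding error)
```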
One cannot have an efficient non-malleable code that protects against all efficient tampering functions f. However, in this work we show "the next best thing": for any polynomial bound s given a-priori, there is an efficient non-malleable code that protects against all tampering functions f computable by a circuit of size s. More generally, for any family of tampering functions F of size |F| ≤ 2s, there is an efficient non-malleable code that protects against all f ∈ F. The rate of our codes, defined as the ratio of message to codeword size, approaches 1. Our results are information-theoretic and our main proof technique relies on a careful probabilistic method argument using limited independence. As a result, we get an efficiently samplable family of efficient codes, such that a random member of the family is non-malleable with overwhelming probability. Alternatively, we can view the result as providing an efficient non-malleable code in the "common reference string" (CRS) model.
We also introduce a new notion of non-malleable key derivation, which uses randomness x to derive a secret key y = h(x) in such a way that, even if x is tampered to a different value x' = f(x), the derived key y' = h(x') does not reveal any information about y. Our results for non-malleable key derivation are analogous to those for non-malleable codes.
As a useful tool in our analysis, we rely on the notion of "leakage-resilient storage" of Davì, Dziembowski and Venturi (SCN '10) and, as a result of independent interest, we also significantly improve on the parameters of such schemes.
- Leakage-Resilient Signatures with Graceful Degradation (with Jesper Buus Nielsen and Angela Zottarel). Proceedings of PKC 2014, 362-379 LNCS 8383.
+[abstract] [full version] [slides (by Angela)]
We investigate new models and constructions which allow leakage-resilient signatures secure against existential forgeries, where the signature is much shorter than the leakage bound. Current models of leakage-resilient signatures against existential forgeries demand that the adversary cannot produce a new valid message/signature pair (m, σ) even after receiving some λ bits of leakage on the signing key. If |σ| ≤ λ, then the adversary can just choose to leak a valid signature σ, and hence signatures must be larger than the allowed leakage, which is impractical as the goal often is to have large signing keys to allow a lot of leakage.
We propose a new notion of leakage-resilient signatures against existential forgeries where we demand that the adversary cannot produce n = ⌊λ/|σ|⌋+1 distinct valid message/signature pairs (m1, σ1), …, (mn, σn) after receiving λ bits of leakage. If λ = 0, this is the usual notion of existential unforgeability. If 0 < λ < |σ|, this is essentially the usual notion of existential unforgeability in the presence of leakage. In addition, for λ ≥ |σ| our new notion still guarantees the best possible, namely that the adversary cannot produce more forgeries than he could have leaked, hence graceful degradation.
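The forgery bound in the new notion is a one-liner; here is a small worked example of n = ⌊λ/|σ|⌋ + 1 with purely illustrative parameter values.

```python
def max_tolerated_forgeries(leakage_bits: int, sig_bits: int) -> int:
    """Number of distinct valid message/signature pairs the adversary must produce
    to win: n = floor(lambda / |sigma|) + 1."""
    return leakage_bits // sig_bits + 1

# Example: 1 MiB of leakage against 3072-bit signatures (illustrative numbers).
print(max_tolerated_forgeries(8 * 2**20, 3072))   # -> 2731
print(max_tolerated_forgeries(0, 3072))           # -> 1, i.e. standard unforgeability
```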
Besides the game-based notion hinted above, we also consider a variant which is more simulation-based, in that it asks that from the leakage a simulator can "extract" a set of n-1 messages (to be thought of as the messages corresponding to the leaked signatures), and no adversary can produce forgeries not in this small set. The game-based notion is easier to prove for a concrete instantiation of a signature scheme. The simulation-based notion is easier to use, when leakage-resilient signatures are used as components in larger protocols.
We prove that the two notions are equivalent and present a generic construction of signature schemes meeting our new notion and a concrete instantiation under fairly standard assumptions. We further give an application to leakage-resilient identification.
- Continuous Non-malleable Codes (with Sebastian Faust, Pratyay Mukherjee and Jesper Buus Nielsen). Proceedings of TCC 2014, 465-488 LNCS 8349.
+[abstract] [full version] [slides (by Pratyay)]
Non-malleable codes are a natural relaxation of error correcting/detecting codes that have useful applications in the context of tamper resilient cryptography. Informally, a code is non-malleable if an adversary trying to tamper with an encoding of a given message can only leave it unchanged or modify it to the encoding of a completely unrelated value.
This paper introduces an extension of the standard non-malleability security notion (so-called continuous non-malleability), where we allow the adversary to tamper continuously with an encoding. This is in contrast to the standard notion of non-malleable codes, where the adversary is only allowed to tamper a single time with an encoding. We show how to construct continuous non-malleable codes in the common split-state model, where an encoding consists of two parts and the tampering can be arbitrary but has to be applied independently to each part. Our main contributions are outlined below:
- We propose a new uniqueness requirement of split-state codes which states that it is computationally hard to find two codewords C = (X0, X1) and C' = (X0, X1') such that both codewords are valid, but X0 is the same in both C and C'. A simple attack shows that uniqueness is necessary to achieve continuous non-malleability in the split-state model. Moreover, we illustrate that none of the existing constructions satisfies our uniqueness property, and hence none is secure in the continuous setting.
- We construct a split-state code satisfying continuous non-malleability. Our scheme is based on the inner product function, collision-resistant hashing and non-interactive zero-knowledge proofs of knowledge and requires an untamperable common reference string.
- We apply continuous non-malleable codes to protect arbitrary cryptographic primitives against tampering attacks. Previous applications of non-malleable codes in this setting required perfectly erasing the entire memory after each execution and required the adversary to be restricted in memory. We show that continuous non-malleable codes avoid these restrictions.
2013
- Bounded Tamper Resilience: How to go beyond the Algebraic Barrier (with Ivan Damgård, Sebastian Faust and Pratyay Mukherjee). Proceedings of IACR Asiacrypt 2013, 140-160 LNCS 8270.
+[abstract] [full version] [slides (by Pratyay)]
Related key attacks (RKAs) are powerful cryptanalytic attacks where an adversary can change the secret key and observe the effect of such changes at the output. The state of the art in RKA security protects against an a-priori unbounded number of certain algebraically induced key relations, e.g., affine functions or polynomials of bounded degree. In this work, we show that it is possible to go beyond the algebraic barrier and achieve security against arbitrary key relations, by restricting the number of tampering queries the adversary is allowed to ask for. The latter restriction is necessary in the case of arbitrary key relations, as otherwise a generic attack of Gennaro et al. (TCC 2004) shows how to recover the key of almost any cryptographic primitive. We describe our contributions in more detail below.
- We show that standard ID and signature schemes constructed from a large class of Σ-protocols (including the Okamoto scheme, for instance) are secure even if the adversary can arbitrarily tamper with the prover's state a bounded number of times and obtain some bounded amount of leakage. Interestingly, for the Okamoto scheme we can allow also independent tampering with the public parameters.
- We show a bounded tamper and leakage resilient CCA secure public key cryptosystem based on the DDH assumption. We first define a weaker CPA-like security notion that we can instantiate based on DDH, and then we give a general compiler that yields CCA-security with tamper and leakage resilience. This requires a public tamper-proof common reference string.
- Finally, we explain how to boost bounded tampering and leakage resilience (as in 1. and 2. above) to continuous tampering and leakage resilience, in the so-called floppy model where each user has a personal hardware token (containing leak- and tamper-free information) which can be used to refresh the secret key.
We believe that bounded tampering is a meaningful and interesting alternative to avoid known impossibility results and can provide important insights into the security of existing standard cryptographic schemes.
- Outsourced Pattern Matching (with Sebastian Faust and Carmit Hazay). Proceedings of ICALP 2013, 545-556 LNCS 7966.
+[abstract] [full version] [slides]
In secure delegatable computation, computationally weak devices (or clients) wish to outsource their computation and data to an untrusted server in the cloud. While most earlier work considers the general question of how to securely outsource any computation to the cloud server, we focus on concrete and important functionalities and give the first protocol for the pattern matching problem in the cloud.
Loosely speaking, this problem considers a text T that is outsourced to the cloud S by a client CT. In a query phase, clients C1, ..., Cl run an efficient protocol with the server S and the client CT in order to learn the positions at which a pattern of length m matches the text (and nothing beyond that). This is called the outsourced pattern matching problem and is highly motivated in the context of delegatable computing since it offers storage alternatives for massive databases that contain confidential data (e.g., health related data about patient history).
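The underlying (plaintext) functionality is simply the set of match positions; the sketch below computes it naively, with none of the outsourcing or security machinery of the protocol.

```python
def match_positions(text: str, pattern: str) -> list:
    """Return all positions at which `pattern` occurs in `text` (naive O(|T|*m) scan).
    In outsourced pattern matching, the querying client learns exactly this set
    and nothing else about the text."""
    m = len(pattern)
    return [i for i in range(len(text) - m + 1) if text[i:i + m] == pattern]

print(match_positions("abracadabra", "abra"))   # -> [0, 7]
```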
Our constructions offer simulation-based security in the presence of semi-honest and malicious adversaries (in the random oracle model) and limit the communication in the query phase to O(m) bits plus the number of occurrences---which is optimal. In contrast to generic solutions for delegatable computation, our schemes do not rely on fully homomorphic encryption but instead use novel ideas for solving pattern matching, based on efficiently solvable instances of the subset sum problem.
- Anonymity-preserving Public-Key Encryption: A Constructive Approach (with Markulf Kohlweiss, Ueli Maurer, Cristina Onete and Björn Tackmann). Proceedings of PETS 2013, 19-39 LNCS 7981.
+[abstract] [full version] [slides (by Markulf)]
A receiver-anonymous channel allows a sender to send a message to a receiver without an adversary learning for whom the message is intended. Wireless broadcast channels naturally provide receiver anonymity, as does multi-casting one message to a receiver population containing the intended receiver. While anonymity and confidentiality appear to be orthogonal properties, making anonymous communication confidential is more involved than one might expect, since the ciphertext might reveal which public key has been used to encrypt. To address this problem, public-key cryptosystems with enhanced security properties have been proposed.
This paper investigates constructions as well as limitations for preserving receiver anonymity when using public-key encryption (PKE). We use the constructive cryptography approach by Maurer and Renner and interpret cryptographic schemes as constructions of a certain ideal resource (e.g. a confidential anonymous channel) from given real resources (e.g. a broadcast channel). We define appropriate anonymous communication resources and show that a very natural resource can be constructed by using a PKE scheme which fulfills three properties that appear in the cryptographic literature (IND-CCA, key-privacy, weak robustness). We also show that a desirable stronger variant, preventing the adversary from selective “trial-deliveries” of messages, is unfortunately unachievable by any PKE scheme, no matter how strong. The constructive approach makes the guarantees achieved by applying a cryptographic scheme explicit in the constructed (ideal) resource; this specifies the exact requirements for the applicability of a cryptographic scheme in a given context. It also allows one to decide which of the existing security properties of such a cryptographic scheme are adequate for the considered scenario, and which are too weak or too strong. Here, we show that weak robustness is necessary but that so-called strong robustness is unnecessarily strong in that it does not construct a (natural) stronger resource.
- On the Connection between Leakage Tolerance and Adaptive Security (with Jesper Buus Nielsen and Angela Zottarel). Proceedings of PKC 2013, 497-515 LNCS 7778.
+[abstract] [full version] [slides]
We revisit the context of leakage-tolerant interactive protocols as defined by Bitansky, Canetti and Halevi (TCC 2012). Our contributions can be summarized as follows:
- For the purpose of secure message transmission, any encryption protocol with message space M and secret key space SK tolerating poly-logarithmic leakage on the secret state of the receiver must satisfy |SK| ≥ (1 - ε)|M|, for every 0 < ε ≤ 1, and if |SK| = |M|, then the scheme must use a fresh key pair to encrypt each message.
- More generally, we show that any n-party protocol tolerates leakage of poly(log k) bits from one party at the end of the protocol execution, if and only if the protocol has passive adaptive security against an adaptive corruption of one party at the end of the protocol execution. This shows that as soon as a little leakage is tolerated, one needs full adaptive security.
- In case more than one party can be corrupted, we get that leakage tolerance is equivalent to a weaker form of adaptivity, which we call semi-adaptivity. Roughly, a protocol has semi-adaptive security if there exists a simulator which can simulate the internal state of corrupted parties; however, such a state is not required to be indistinguishable from a real state, only that it would have led to the simulated communication.
All our results can be based on the sole assumption that collision-resistant function ensembles exist.
- Rate-Limited Secure Function Evaluation: Definitions and Constructions (with Özgür Dagdelen and Payman Mohassel). Proceedings of PKC 2013, 461-478 LNCS 7778.
+[abstract] [full version] [slides (by Oezi)]
We introduce the notion of rate-limited secure function evaluation (RL-SFE). Loosely speaking, in an RL-SFE protocol participants can monitor and limit the number of distinct inputs (i.e., rate) used by their counterparts in multiple executions of an SFE, in a private and verifiable manner. The need for RL-SFE naturally arises in a variety of scenarios: e.g., it enables service providers to "meter" their customers' usage without compromising their privacy, or can be used to prevent oracle attacks against SFE constructions.
We consider three variants of RL-SFE providing different levels of security. As a stepping stone, we also formalize the notion of commit-first SFE (cf-SFE) wherein parties are committed to their inputs before each SFE execution. We provide compilers for transforming any cf-SFE protocol into each of the three RL-SFE variants. Our compilers are accompanied by simulation-based proofs of security in the standard model and show a clear tradeoff between the level of security offered and the overhead required. Moreover, motivated by the fact that in many client-server applications clients do not keep state, we also describe a general approach for transforming the resulting RL-SFE protocols into stateless ones.
As a case study, we take a closer look at the oblivious polynomial evaluation (OPE) protocol of Hazay and Lindell, show that it is commit-first and instantiate efficient rate-limited variants of it.
2012
- On the Non-malleability of the Fiat-Shamir Transform (with Sebastian Faust, Markulf Kohlweiss and Giorgia Azzurra Marson). Proceedings of Indocrypt 2012, 60-79 LNCS 7668.
+[abstract] [full version] [slides (by Giorgia)]
The Fiat-Shamir transform is a well studied paradigm for removing interaction from public-coin protocols. We investigate whether the resulting non-interactive zero-knowledge (NIZK) proof systems also exhibit non-malleability properties that have up to now only been studied for NIZK proof systems in the common reference string model: first, we formally define simulation soundness and a weak form of simulation extraction in the random oracle model (ROM). Second, we show that in the ROM the Fiat-Shamir transform meets these properties under lenient conditions.
A consequence of our result is that, in the ROM, we obtain truly efficient non-malleable NIZK proof systems essentially for free. Our definitions are sufficient for instantiating the Naor-Yung paradigm for CCA2-secure encryption, as well as a generic construction for signature schemes from hard relations and simulation-extractable NIZK proof systems. These two constructions are interesting as the former preserves both the leakage resilience and key-dependent message security of the underlying CPA-secure encryption scheme, while the latter lifts the leakage resilience of the hard relation to the leakage resilience of the resulting signature scheme.
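For reference, the Fiat-Shamir transform applied to Schnorr's identification scheme gives the familiar signature scheme sketched below: the interactive challenge is replaced by a hash of the commitment and the message. This is textbook material with tiny, insecure toy parameters, not an implementation from the paper.

```python
import hashlib
import secrets

# Toy Schnorr group: p prime, q | p-1, g of order q (insecure, illustrative parameters).
p, q, g = 2267, 103, 354

def hash_to_challenge(data: bytes) -> int:
    """Fiat-Shamir: derive the challenge by hashing instead of receiving it."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1      # signing key
    y = pow(g, x, p)                      # verification key
    return x, y

def sign(x: int, msg: bytes):
    r = secrets.randbelow(q - 1) + 1
    a = pow(g, r, p)                                    # Sigma-protocol commitment
    c = hash_to_challenge(a.to_bytes(2, "big") + msg)   # non-interactive challenge
    z = (r + c * x) % q                                 # response
    return a, z

def verify(y: int, msg: bytes, sig) -> bool:
    a, z = sig
    c = hash_to_challenge(a.to_bytes(2, "big") + msg)
    return pow(g, z, p) == (a * pow(y, c, p)) % p

x, y = keygen()
assert verify(y, b"hello", sign(x, b"hello"))
```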
2011
- Tamper-Proof Circuits: How to Trade Leakage for Tamper-Resilience (with Sebastian Faust and Krzysztof Pietrzak). Proceedings of ICALP 2011, 391-402 LNCS 6755.
+[abstract] [full version] [slides]
Tampering attacks are cryptanalytic attacks on the implementation of cryptographic algorithms (e.g. smart cards), where an adversary introduces faults with the hope that the tampered device will reveal secret information. Inspired by the work of Ishai et al. [Eurocrypt'06], we propose a compiler that transforms any circuit into a new circuit with the same functionality, but which is resilient against a well-defined and powerful tampering adversary. More concretely, our transformed circuits remain secure even if the adversary can adaptively tamper with every wire in the circuit as long as the tampering fails with some probability δ > 0. This additional requirement is motivated by practical tampering attacks, where it is often difficult to guarantee the success of a specific attack.
Formally, we show that a q-query tampering attack against the transformed circuit can be "simulated" with only black-box access to the original circuit and log(q) bits of additional auxiliary information. Thus, if the implemented cryptographic scheme is secure against log(q) bits of leakage, then our implementation is tamper-proof in the above sense. Surprisingly, allowing for this small amount of information leakage---and not insisting on perfect simulability like in the work of Ishai et al.---allows for much more efficient compilers, which moreover do not require randomness during evaluation.
Similar to earlier work, our compiler requires small, stateless and computation-independent tamper-proof gadgets. Thus, our result can be interpreted as reducing the problem of shielding arbitrarily complex computation to protecting simple components.
- Efficient Authentication from Hard Learning Problems (with Eike Kiltz, Krzysztof Pietrzak, David Cash and Abhishek Jain). Proceedings of IACR Eurocrypt 2011, 7-26 LNCS 6632. (Best paper award.)
+[abstract] [full version]
We construct efficient authentication protocols and message authentication codes (MACs) whose security can be reduced to the learning parity with noise (LPN) problem. Despite a large body of work - starting with the HB protocol of Hopper and Blum in 2001 - until now it was not even known how to construct an efficient authentication protocol from LPN which is secure against man-in-the-middle (MIM) attacks. A MAC implies such a (two-round) protocol.
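The LPN assumption referenced above is easy to state: distinguishing (or inverting) noisy parity samples is hard. The snippet below only generates such samples, with arbitrary illustrative parameters, and is unrelated to the protocols in the paper.

```python
import random

def lpn_samples(n: int = 512, m: int = 1024, tau: float = 0.125, seed: int = 0):
    """Generate m LPN samples (a, <a, s> + e mod 2) with secret s in {0,1}^n and
    Bernoulli(tau) noise e. Recovering s from the samples is the LPN problem."""
    rng = random.Random(seed)
    s = [rng.randrange(2) for _ in range(n)]
    samples = []
    for _ in range(m):
        a = [rng.randrange(2) for _ in range(n)]
        e = 1 if rng.random() < tau else 0
        b = (sum(ai & si for ai, si in zip(a, s)) + e) % 2
        samples.append((a, b))
    return s, samples
```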
2010
- Leakage Resilient Storage (with Francesco Davì and Stefan Dziembowski). Proceedings of Security and Cryptography for Networks (SCN) 2010, 121-137 LNCS 6280.
+[abstract] [full version]
We study the problem of secure data storage on hardware that may leak information. We introduce a new primitive, that we call leakage-resilient storage (LRS), which is an (unkeyed) scheme for encoding messages, and can be viewed as a generalization of the All-Or-Nothing Transform (AONT, Rivest 1997). The standard definition of AONT requires that it should be hard to reconstruct a message m if not all the bits of its encoding Encode(m) are known. LRS is defined more generally, with respect to a class Γ of functions. The security definition of LRS requires that it should be hard to reconstruct m even if some values g1(Encode(m)), ..., gt(Encode(m)) are known (where g1, ..., gt ∈ Γ), as long as the total length of g1(Encode(m)), ..., gt(Encode(m)) is smaller than some parameter c.
We construct an LRS scheme that is secure with respect to Γ being a set of functions that can depend only on some restricted part of the memory. More precisely: we assume that the memory is divided in two parts, and the functions in Γ can be applied to just one of these parts. We also construct a scheme that is secure if the cardinality of Γ is restricted (but it can still be exponential in the length of the encoding). This construction implies security in the case when the set Γ consists of functions that are computable by Boolean circuits of a small size. We also discuss the connection between the problem of constructing leakage-resilient storage and the theory of the compressibility of NP-instances.
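One natural split-state encoding in this spirit stores the secret as two vectors whose inner product over a finite field equals the message: each half on its own is uniform, so leakage computed on one half at a time reveals little about m. The sketch below is illustrative only; the field size, vector length, and details differ from the scheme analyzed in the paper.

```python
import secrets

P = 2**13 - 1            # toy prime field; illustrative only
N = 8                    # length of each half of the encoding

def encode(m: int):
    """Split-state encoding: memory parts L and R with <L, R> = m (mod P).
    Leakage functions are applied to L or to R, never to both jointly."""
    while True:
        L = [secrets.randbelow(P) for _ in range(N)]
        if any(L):
            break
    R = [secrets.randbelow(P) for _ in range(N - 1)]
    i = next(idx for idx, v in enumerate(L) if v)         # an invertible coordinate of L
    partial = sum(L[j] * R[j - (j > i)] for j in range(N) if j != i) % P
    r_i = (m - partial) * pow(L[i], -1, P) % P            # fix R[i] so the product hits m
    R.insert(i, r_i)
    return L, R

def decode(L, R) -> int:
    return sum(l * r for l, r in zip(L, R)) % P

m = 1234
L, R = encode(m)
assert decode(L, R) == m
```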
2009
- Inadequacy of the Queue-Based Max-Weight Optimal Scheduler on Wireless Links with TCP Sources (with Andrea Baiocchi and Alfredo Todini). Proceedings of the IEEE International Conference on Communications (ICC) 2009, 1-6.
+[abstract] [ieee link] [poster]
The interaction between wireless-optimized scheduling algorithms and TCP congestion control mechanisms can have adverse effects on the performance of the system. We focus on the queue-based max-weight (QBMW) scheduler, a scheduling strategy which is known to be throughput-optimal under unregulated traffic sources. We use fluid modeling to describe the time evolution of the congestion window size and of the wireless buffer, and show by numerical results that under TCP traffic sources the QBMW scheduling policy leads to a very unfair outcome, in which some users may be completely shut off. We also evaluate and discuss the performance achieved by other scheduling policies: the proportional fair (PF) scheduler, and the queue age (QA) scheduler, which takes into account the age of the packets stored in the wireless buffers.
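For intuition, a common textbook form of the queue-based max-weight rule serves, in each slot, the user maximizing queue backlog times achievable rate; whether this matches the exact QBMW variant analyzed in the paper is an assumption on our part, and the numbers below are arbitrary.

```python
def max_weight_schedule(queue_lengths, rates):
    """Return the index of the user to serve in this slot under a queue-based
    max-weight rule: argmax_i q_i * r_i (backlog times achievable rate)."""
    weights = [q * r for q, r in zip(queue_lengths, rates)]
    return max(range(len(weights)), key=weights.__getitem__)

# Example slot: user 1 has the largest backlog-rate product and gets served.
print(max_weight_schedule([5, 12, 3], [1.0, 2.0, 4.0]))   # -> 1
```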
Journal Papers
- Bounded Tamper Resilience: How to go beyond the Algebraic Barrier (with Ivan Damgård, Sebastian Faust and Pratyay Mukherjee). Journal of Cryptology. Full version of the Asiacrypt 2013 paper. DOI: 10.1007/s00145-015-9218-0.
+[abstract] [Springer link]
Related key attacks (RKAs) are powerful cryptanalytic attacks where an adversary can change the secret key and observe the effect of such changes at the output. The state of the art in RKA security protects against an a-priori unbounded number of certain algebraically induced key relations, e.g., affine functions or polynomials of bounded degree. In this work, we show that it is possible to go beyond the algebraic barrier and achieve security against arbitrary key relations, by restricting the number of tampering queries the adversary is allowed to ask for. The latter restriction is necessary in the case of arbitrary key relations, as otherwise a generic attack of Gennaro et al. (TCC 2004) shows how to recover the key of almost any cryptographic primitive. We describe our contributions in more detail below.
- We show that standard ID and signature schemes constructed from a large class of Σ-protocols (including the Okamoto scheme, for instance) are secure even if the adversary can arbitrarily tamper with the prover's state a bounded number of times and obtain some bounded amount of leakage. Interestingly, for the Okamoto scheme we can allow also independent tampering with the public parameters.
- We show a bounded tamper and leakage resilient CCA secure public key cryptosystem based on the DDH assumption. We first define a weaker CPA-like security notion that we can instantiate based on DDH, and then we give a general compiler that yields CCA-security with tamper and leakage resilience. This requires a public tamper-proof common reference string.
- Finally, we explain how to boost bounded tampering and leakage resilience (as in 1. and 2. above) to continuous tampering and leakage resilience, in the so-called floppy model where each user has a personal hardware token (containing leak- and tamper-free information) which can be used to refresh the secret key.
We believe that bounded tampering is a meaningful and interesting alternative to avoid known impossibility results and can provide important insights into the security of existing standard cryptographic schemes.
- Entangled Cloud Storage (with Giuseppe Ateniese, Özgür Dagdelen and Ivan Damgård). Future Generation Computer Systems - Special Issue on Cloud Cryptography. Full version of the AsiaCCS 2015 paper. DOI: ???.
+[abstract] [Elsevier link]
Entangled cloud storage (Aspnes et al., ESORICS 2004) enables a set of clients to "entangle" their files into a single clew to be stored by a (potentially malicious) cloud provider. The entanglement makes it impossible to modify or delete a significant part of the clew without affecting all files encoded in the clew. A clew keeps the files in it private but still lets each client recover his own data by interacting with the cloud provider; no cooperation from other clients is needed. At the same time, the cloud provider is discouraged from altering or overwriting any significant part of the clew as this will imply that none of the clients can recover their files.
We put forward the first simulation-based security definition for entangled cloud storage, in the framework of universal composability (Canetti, FOCS 2001). We then construct a protocol satisfying our security definition, relying on an entangled encoding scheme based on privacy-preserving polynomial interpolation; entangled encodings were originally proposed by Aspnes et al. as useful tools for the purpose of data entanglement. As a contribution of independent interest we revisit the security notions for entangled encodings, putting forward stronger definitions than previous work (which, for instance, did not consider collusion between clients and the cloud provider).
Protocols for entangled cloud storage find application in the cloud setting, where clients store their files on a remote server and need to be ensured that the cloud provider will not modify or delete their data illegitimately. Current solutions, e.g., based on Provable Data Possession and Proofs of Retrievability, require the server to be challenged regularly to provide evidence that the clients' files are stored at a given time. Entangled cloud storage provides an alternative approach where any single client operates implicitly on behalf of all others, i.e., as long as one client's files are intact, the entire remote database continues to be safe and unblemished.