A BICI-SMI International PhD School
on the Modern Theory of Computation
3-29 July 2005
The school is made possible by the generous financial support of SMI and fits into the general framework of the highly successful and well-regarded Cortona Weeks that the Scuola Matematica Interuniversitaria has been organizing for the past 30 years under the auspices of INDAM, the Italian National Institute of Higher Mathematics.
Every popular conjecture in mathematics is subject to a large number of purported proofs, mostly incorrect. Editors of mathematical journals, when faced with a barrage of such "proofs", resort to a notion of "quick review" to weed out the incorrect ones. But does such a notion of a "quick review" make sense if the provers are malicious? Can it be used as a sound deterrent against clever cranks, who may construct long "proofs" which hide errors in subtle ways?
Recent research in theoretical computer science, developed over the 1990s and continuing into this decade, has designed novel formats for writing proofs that provably satisfy the goals of a "quick review". The new formats allow the verification procedure to be probabilistic, with a small probability of false positives (acceptance of invalid proofs); this probability is taken over the random coins tossed by the verifier and does not depend on the theorem or its purported proof. The new formats, and the associated proof-verifying mechanisms, are termed "Probabilistically Checkable Proofs" (PCPs). Their design and analysis rely on a variety of connections to algebra over finite fields, error-correcting codes, probabilistic methods in computation, and the analysis of finite-valued functions.
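To give a flavour of this kind of constant-query spot-checking (a toy illustration, not part of the course material), the sketch below implements the classical Blum-Luby-Rubinfeld linearity test, one of the basic ingredients behind PCP constructions: each trial reads only three entries of a purported truth table, a genuinely linear table is always accepted, and a table far from every linear function is rejected with high probability over the verifier's coins. The table names and parameters are invented for the example.

```python
import random

def blr_test(table, n, trials=100):
    """Blum-Luby-Rubinfeld linearity test: spot-check, with 3 queries per
    trial, that table behaves like a linear map over GF(2), i.e.
    table[x ^ y] == table[x] ^ table[y].  A table far from every linear
    function is rejected with high probability; a linear one always passes."""
    for _ in range(trials):
        x = random.randrange(2 ** n)
        y = random.randrange(2 ** n)
        if table[x] ^ table[y] != table[x ^ y]:
            return False   # inconsistency found: reject
    return True            # accept (small chance of a false positive)

n = 8
mask = 0b10110001
# A parity (linear over GF(2)) versus a threshold function (far from linear).
linear = [bin(x & mask).count("1") % 2 for x in range(2 ** n)]
far = [1 if bin(x).count("1") >= n // 2 else 0 for x in range(2 ** n)]

print(blr_test(linear, n), blr_test(far, n))   # typically: True False
```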
In this four-week lecture sequence, researchers will see the details of such constructions. The contents of the course will be split into two self-contained modules. The first two weeks will cover the fundamental definitions, highlight some of the significant parameters, and then prove the "PCP Theorem", which shows how to construct PCPs whose size is only polynomially larger than that of a classical proof while allowing verification that examines only a constant number of bits of the proof. The second two weeks will take a closer look at some of the parameters of interest in PCPs and will be devoted to results optimizing these parameters. The second module will assume knowledge of some of the basic definitions from the first, and will use the statement of the PCP Theorem. (A short handout summarizing these will be made available shortly.)
Note: The second module will be taught jointly with Luca Trevisan, July 17-29
The well-known P vs NP conjecture roughly states that literally thousands of naturally occurring computational problems, the so-called NP-hard problems, do not admit fast algorithms. This conjecture is now widely considered to be one of the outstanding open problems of Mathematics and was included in the list of the Millennium Prize Problems. The majority of these NP-hard problems are optimization problems, in which one must compute the minimum (or maximum) of a function defined on the (huge) search space associated with each input instance. In the absence of algorithms that compute these optima exactly, a natural question is whether it is possible to approximate them quickly (i.e., by means of polynomial-time algorithms). This is the starting point of the rich theory of approximation algorithms for NP-hard problems. The general aim of this theory is to classify discrete optimization problems with respect to approximability: for a given problem, one would like either to exhibit a polynomial-time approximation algorithm or to show that none exists (or at least to prove a relative impossibility result of the kind: if this problem were approximable within such and such bounds, then P = NP). For many years the state of this theory was rather unsatisfactory, characterized as it was by the almost complete lack of results on the impossibility front and by a collection of tricks on the algorithmic front, with very little in the way of unifying frameworks.
All this changed quite dramatically in the early '90s. Thanks to the PCP Theorem, which students will study in the course on Probabilistically Checkable Proofs, it is now possible to obtain scores of impossibility results in a unified way. On the algorithmic side, the introduction of LP-based methods (LP relaxations, randomized rounding, primal-dual algorithms, etc.) now allows the systematic derivation of polynomial-time approximation algorithms for a large variety of problems. This course will be an introduction to these ideas and techniques.
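As a minimal sketch of the LP-relaxation approach just mentioned (an illustration, not an excerpt from the course), the following code computes a 2-approximate vertex cover: it solves the linear programming relaxation with scipy's linprog and rounds up every variable with value at least 1/2. The example graph is invented for the demonstration.

```python
# Sketch: LP relaxation + threshold rounding for Vertex Cover (2-approximation).
# Assumes scipy is available; the tiny example graph is made up for illustration.
from scipy.optimize import linprog

def lp_vertex_cover(num_vertices, edges):
    """Minimize sum_v x_v subject to x_u + x_v >= 1 for every edge (u, v),
    0 <= x_v <= 1, then round: keep every vertex with x_v >= 1/2.
    Rounding at most doubles the LP optimum, hence the factor-2 guarantee."""
    c = [1.0] * num_vertices                 # objective: minimize cover size
    A_ub, b_ub = [], []
    for u, v in edges:                       # x_u + x_v >= 1  <=>  -x_u - x_v <= -1
        row = [0.0] * num_vertices
        row[u] = row[v] = -1.0
        A_ub.append(row)
        b_ub.append(-1.0)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, 1.0)] * num_vertices)
    return [v for v in range(num_vertices) if res.x[v] >= 0.5]

# A 5-cycle: the LP optimum is 2.5, so the rounded cover has size at most 5
# (the true optimum is 3), within the factor-2 guarantee.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(lp_vertex_cover(5, edges))
```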
In computer science, there are some important problems for which randomized algorithms are simpler and faster than the best known deterministic ones. Similarly, in combinatorics, there are some objects (e.g., expander graphs, Ramsey graphs, error-correcting codes, and many others) whose existence is easy to prove using the probabilistic method but for which we only have difficult and sub-optimal explicit constructions.
Perhaps surprisingly, work in the past 20+ years on "hardness versus randomness" suggests that every probabilistic algorithm may be simulated deterministically with comparable efficiency. In particular, this is true under certain plausible complexity-theoretic assumptions, such as the assumption that random integers are hard to factor (though much weaker assumptions suffice). Under the same assumptions, every object whose existence can be proved with the probabilistic method has an efficient construction (provided that one can efficiently recognize such an object when given one).
These predictions have been confirmed by the recent deterministic algorithm for testing primality in polynomial time and by recent progress on combinatorial constructions. While the primality-testing algorithm is somewhat independent of the work on "hardness versus randomness," there is a deep connection between such work and some recent combinatorial constructions.
In this two-week course we will begin by reviewing some examples of randomized algorithms (checking polynomial identities, random walks on graphs) and of probabilistic constructions (Ramsey graphs and expander graphs), and we will see a first instantiation of the hardness-versus-randomness approach: the Blum-Micali-Yao construction of pseudorandom generators from "one-way" permutations. We will then see the Nisan-Wigderson construction of a pseudorandom generator under considerably weaker complexity assumptions, and then show how to further weaken those assumptions by proving a connection between the average-case complexity and the worst-case complexity of certain problems. Moving on to combinatorial constructions, we will define randomness extractors, discuss their relation to expander graphs, and give a construction of randomness extractors based on the Nisan-Wigderson pseudorandom generator. Time permitting, we will see the zig-zag product approach to the construction of expanders and how it is used back in complexity theory to give constructions of pseudorandom walks on graphs, with various applications.
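For a concrete taste of the first topic above, here is a minimal sketch (assuming polynomials given as black-box Python functions, with names invented for the example) of randomized polynomial identity checking in the Schwartz-Zippel spirit: evaluate both polynomials at a random point modulo a large prime; two distinct low-degree polynomials can agree on only a tiny fraction of points, so a difference is detected with high probability, while equal polynomials are always accepted.

```python
import random

P = 2 ** 61 - 1   # a large prime modulus; degree << P keeps the error tiny

def probably_equal(f, g, num_vars, trials=20):
    """Schwartz-Zippel style identity test: evaluate both black-box
    polynomials at random points mod P.  Distinct polynomials of total
    degree d agree on at most a d/P fraction of points, so each trial
    exposes a difference with probability at least 1 - d/P."""
    for _ in range(trials):
        point = [random.randrange(P) for _ in range(num_vars)]
        if f(*point) % P != g(*point) % P:
            return False          # a witness of inequality: certainly different
    return True                   # probably identical

# Example: (x + y)^2 == x^2 + 2xy + y^2, but (x + y)^2 != x^2 + y^2.
lhs = lambda x, y: (x + y) ** 2
rhs_good = lambda x, y: x * x + 2 * x * y + y * y
rhs_bad = lambda x, y: x * x + y * y
print(probably_equal(lhs, rhs_good, 2), probably_equal(lhs, rhs_bad, 2))  # True False
```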
Scientific organization: BICI (Bertinoro International Center for Informatics)