List of Talks (with abstracts)

  1. Alon Rosen (Interdisciplinary Center, Israel)
    The Learning with Rounding Problem: Reductions and Applications [pdf]

    Abstract: In this talk I will survey a recently introduced cryptographic problem called Learning with Rounding (LWR). I will show reductions from and to the more well-established Learning with Errors (LWE) problem, and demonstrate the applicability of LWR to the construction of efficient Pseudorandom Functions and other cryptographic primitives.
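
    As a point of reference (a minimal sketch, not from the talk; the parameters n, q, p are placeholders and must be chosen for security), a single LWR sample replaces the random noise of LWE with deterministic rounding of the inner product from Z_q down to Z_p:

    import random

    def lwr_sample(s, q, p):
        """Return one LWR sample (a, b) for a secret vector s over Z_q, with p < q."""
        n = len(s)
        a = [random.randrange(q) for _ in range(n)]       # uniformly random a in Z_q^n
        inner = sum(ai * si for ai, si in zip(a, s)) % q  # <a, s> mod q
        b = round(p * inner / q) % p                      # deterministic rounding into Z_p
        return a, b

    Unlike an LWE sample, no fresh random error is added; the deterministic rounding error plays that role, which is part of what makes LWR convenient for building pseudorandom functions.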

    Based on joint works with Abhishek Banerjee and Chris Peikert, and with Andrej Bogdanov, Chiwang Chow, and Siyao Guo.


  2. Arkadev Chattopadhyay (TIFR)
    Correlation bounds for constant-depth boolean circuits (Part I)

    Abstract: Proving correlation upper bounds against simple models of computation is a very natural and important research theme in computational complexity. What is fascinating is the following: say our goal is to prove that a function f is not deterministically efficiently computable in a given model. In many cases the only known way of proving that is by proving the seemingly much stronger result that efficient computation does not *even* correlate well with f.

    We will explore this theme mostly in the context of constant-depth circuits, where some interesting recent work has come out. We will do this in two parts. In the first part, we survey the landscape of known correlation bounds and some of their applications. We also outline some of the many open questions in the area. In the second part, we explain some of these arguments and techniques.
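
    To fix notation (one standard convention, given as a sketch; normalizations in the talks may differ), the correlation of a candidate circuit g with a target f under the uniform distribution is |Pr[f(x) = g(x)] - Pr[f(x) != g(x)]|, which can be computed exhaustively for small n:

    import itertools

    def correlation(f, g, n):
        """|Pr[f(x)=g(x)] - Pr[f(x)!=g(x)]| over uniform x in {0,1}^n (exhaustive)."""
        total = sum(1 if f(x) == g(x) else -1 for x in itertools.product((0, 1), repeat=n))
        return abs(total) / 2 ** n

    parity = lambda x: sum(x) % 2
    print(correlation(parity, lambda x: x[0], 8))   # 0.0: a single variable has zero correlation with parity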

    [Joint session with Srikanth Srinivasan]


  3. Arnab Bhattacharyya (IISc)
    Every locally characterized affine-invariant property is testable

    Abstract: I will give an overview of our recent work giving a nearly tight characterization of one-sided testability for affine-invariant properties. For a finite field F, an affine-invariant property is a property of functions on F^n that is closed under taking affine transformations of the domain. Examples of locally characterized properties are low-degree polynomials and Fourier-sparse functions. We show that if P is locally characterized (meaning that there is a constant-sized witness for non-membership in the property), then there is a test that, given an input function f, makes a constant number of queries to f, always accepts if f satisfies P, and rejects with positive probability if the distance between f and P is nonzero.

    In the talk, I will concentrate on the techniques used to prove the result, instead of the details of the proof itself, since we believe the techniques should be of independent interest. For this purpose, I'll spend much of the talk proving the following result: for any fixed finite field F, if P(X_1, ..., X_n) is an n-variate polynomial with coefficients over F, then P is not the square of a degree 5 polynomial if and only if there exists a subspace H of dimension k such that the restriction of P to the subspace H is not the square of a degree 5 polynomial. Here, k is a constant independent of n (though our proof does not yield any explicit upper bound for it).
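
    The test behind such results has a simple shape. Here is a hedged sketch of the query pattern only (not the paper's algorithm or analysis), assuming a hypothetical membership check satisfies_P_on_k_vars supplied by the local characterization of P: restrict f to a random k-dimensional affine subspace of F_q^n and accept iff the restriction lies in P.

    import itertools
    import random

    def subspace_restriction_test(f, n, k, q, satisfies_P_on_k_vars):
        """One-sided test sketch: restrict f to a random affine subspace and check P there."""
        # Random affine map A(y) = My + b from F_q^k to F_q^n; for simplicity we do not
        # insist that M have full rank (it does with high probability when k << n).
        M = [[random.randrange(q) for _ in range(k)] for _ in range(n)]
        b = [random.randrange(q) for _ in range(n)]
        def restricted(y):
            x = tuple((sum(M[i][j] * y[j] for j in range(k)) + b[i]) % q for i in range(n))
            return f(x)
        # q^k queries to f: a constant number once k and q are fixed.
        table = {y: restricted(y) for y in itertools.product(range(q), repeat=k)}
        return satisfies_P_on_k_vars(table)

    If f satisfies P, then so does every affine restriction of f (by affine invariance), so the test always accepts; the content of the result is that restrictions of functions far from P are rejected with positive probability.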

    Joint work with Eldar Fischer, Hamed Hatami, Pooya Hatami, and Shachar Lovett.


  4. Madhu Sudan (Microsoft Research, New England)
    Invariance in Property Testing [part I/pdf, part II/pdf]

    Abstract: The goal of Property Testing is to design highly efficient algorithms that sample very small portions of massive data to detect if the data satisfies some global property. As ambitious as this goal may sound, research in the past two decades has shown that a wide variety of properties, looking for statistical, combinatorial, graph-theoretic, or algebraic structure in data, can be tested surprisingly efficiently. In this talk we will explain how the ``invariance'' of the property has played a role, mostly implicitly and recently explicitly, in the success of property testing.

    A property is said to be invariant under a permutation pi if permuting the data points by this permutation leaves the property unchanged. The set of permutations that keep a property invariant forms a group under composition, and many of the above different themes in property testing can be classified based on the invariant group of the property.

    In the first part of the talk we survey some of the work done in testing linear properties over domains that are vector spaces over some field, where the property is invariant under the group of invertible affine transformations. In particular, we show how this abstracts the most common algebraic error-correcting codes and how the most basic testing result applies to the abstract class. We also give some new properties in this general class that outperform the classically studied codes in terms of locality.

    In the second part of the talk, we revisit the most central affine-invariant property, namely: Given a function f:F_q^n to F_q, where F_q denotes the finite field on q elements, test if f is a polynomial of degree at most d. Different solutions to this basic question, for different ranges of parameters n,q,d, have had immense applications in complexity theory. In this talk we focus on the case where q is a constant and d is slowly growing. We consider tha natural ``affine-invariance'' based test: Verify that f restrict to t-dimensional subspaces is a polynomial of degree at most d (where t is some small integer depending only on d and q). We show essentially optimal analysis of this test; and mention some recently discovered applications.

    First part is based on works of/with: Tali Kaufman, Shachar Lovett, Eli Ben-Sasson, Elena Grigorescu, Alan Guo, Swastik Kopparty, Ghid Maatouk, and Amir Shpilka.

    Second part is based on works with: Arnab Bhattacharyya, Elad Haramaty, Swastik Kopparty, Noga Ron-Zewi, Grant Schoenebeck, Amir Shpilka, and David Zuckerman.


  5. Mohit Singh (Microsoft Research, Redmond)
    Maximum Entropy Distributions and the Traveling Salesman Problem

    Abstract: In this talk, I will survey the use of maximum entropy distributions for approximating the traveling salesman problem. The algorithms arising from this approach are simple and build on classical algorithms for the problem. The analysis builds on a variety of ideas such as negative correlation properties of random spanning trees, the structure of near-minimum cuts of a graph, and the polyhedral theory of T-join and flow polytopes.
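
    For concreteness, here is the convex program at the heart of this approach (a sketch of a formulation from the literature the talk surveys; $x$ stands for a point in the spanning-tree polytope obtained from an LP relaxation): the maximum entropy distribution over spanning trees $T$ with prescribed edge marginals solves

    $\max_{p \ge 0} \; -\sum_{T} p_T \log p_T \quad \text{subject to} \quad \sum_{T \ni e} p_T = x_e \ \text{for every edge } e, \qquad \sum_{T} p_T = 1.$

    The optimizer has product form, $p_T \propto \prod_{e \in T} \lambda_e$, which is what makes it possible to sample from it and to invoke the negative correlation properties of random spanning trees mentioned above.

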
  6. Naveen Garg (IIT Delhi)
    Nearly maximum flows in nearly linear time in undirected graphs

    Abstract: I will be talking about a recent paper by Jonah Sherman. He introduces a new approach to the maximum flow problem in undirected, capacitated graphs using $\alpha$-\emph{congestion-approximators}: easy-to-compute functions that approximate the congestion required to route single-commodity demands in a graph to within a factor of $\alpha$. The algorithm maintains an arbitrary flow that may have some residual excess and deficits, while taking steps to minimize a potential function measuring the congestion of the current flow plus an over-estimate of the congestion required to route the residual demand. Since the residual term over-estimates, the descent process gradually moves the contribution to our potential function from the residual term to the congestion term, eventually achieving a flow routing the desired demands with nearly minimal congestion after $\tilde{O}(\alpha\epsilon^{-2}\log^2 n)$ iterations.
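
    Schematically (a paraphrase of the description above, not Sherman's exact definition, which uses a smooth soft-max in place of the maximum), the potential being minimized has the form $\Phi(f) = \|C^{-1}f\|_\infty + \lambda \cdot R(b - Bf)$, where $B$ is the vertex-edge incidence matrix, $C$ the diagonal matrix of edge capacities, $b$ the demand vector, $R$ the $\alpha$-congestion-approximator applied to the residual demand, and $\lambda$ a scaling chosen so that the second term over-estimates the congestion needed to route $b - Bf$.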

    The talk will be self-contained.


  7. Nikhil Srivastava (Microsoft Research, India)
    Interlacing Families I : Bipartite Ramanujan Graphs of Every Degree [pdf]

    Abstract: Expander graphs are very sparse graphs which are nonetheless very well-connected, in the sense that their adjacency matrices have large spectral gap. There is a limit to how large this gap can be for a d-regular graph, and graphs which achieve the limit are called Ramanujan graphs. A beautiful number-theoretic construction of Lubotzky-Phillips-Sarnak and Margulis shows that infinite families of Ramanujan graphs exist for every d=p+1 where p is prime, leaving open the question of whether they exist for other degrees.

    We prove that there exist infinite families of bipartite Ramanujan graphs of every degree bigger than 2. We do this by proving a variant of a conjecture of Bilu and Linial about the existence of good 2-lifts of every graph. Our proof exploits a new technique for demonstrating the existence of useful combinatorial objects that we call the ``Method of Interlacing Polynomials''. The proofs are elementary and rely on simple facts from the theory of real stable polynomials. In particular, they do not rely on number theory, and the talk should be accessible to a broad audience.
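
    A toy illustration (brute force over tiny graphs, chosen here purely for illustration; it is not the method of interlacing polynomials) of the object whose existence is being proved: a signing of the edges of a d-regular bipartite graph whose signed adjacency matrix has largest eigenvalue at most 2*sqrt(d-1). By a result of Bilu and Linial, the new eigenvalues of the corresponding 2-lift are exactly the eigenvalues of the signed matrix, and for bipartite graphs that spectrum is symmetric, so such a signing yields a Ramanujan 2-lift.

    import itertools
    import numpy as np

    def find_good_signing(A, d):
        """Exhaustively search for an edge signing of A with largest signed eigenvalue <= 2*sqrt(d-1)."""
        n = A.shape[0]
        edges = [(i, j) for i in range(n) for j in range(i + 1, n) if A[i, j]]
        bound = 2 * np.sqrt(d - 1) + 1e-9
        for signs in itertools.product((1, -1), repeat=len(edges)):
            As = np.zeros((n, n))
            for s, (i, j) in zip(signs, edges):
                As[i, j] = As[j, i] = s
            if np.linalg.eigvalsh(As)[-1] <= bound:   # eigvalsh returns eigenvalues in ascending order
                return signs
        return None

    print(find_good_signing(np.block([[np.zeros((3, 3)), np.ones((3, 3))],
                                      [np.ones((3, 3)), np.zeros((3, 3))]]), 3))  # K_{3,3}, d = 3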

    Joint work with Adam Marcus and Dan Spielman.

    Interlacing Families II : Mixed Characteristic Polynomials and The Kadison-Singer Problem [pdf]

    Abstract: We further develop the method of interlacing families introduced in the first talk and use it to prove a spectral discrepancy theorem for matrices. Namely, we show that given any m x n matrix B in which each row has bounded 'influence', it is possible to partition the rows of B into two submatrices B_1 and B_2 such that each submatrix spectrally approximates the original matrix. This theorem is powerful because it is possible to encode lots of diverse objects as matrices, and being able to split matrices into well-behaved pieces implies that these other objects can also be split into well-behaved pieces. In particular, the theorem
    (1) implies that it is possible to partition any well-connected undirected graph into sparse subgraphs which each approximate its cuts.
    (2) gives a quantitative strengthening of the well-known uncertainty principle, which says that a signal cannot be localized both in the time domain and the frequency domain.
    (3) settles the 1959 Kadison-Singer conjecture, which was a central question in operator theory motivated by issues in the mathematical foundations of quantum mechanics.

    The proof goes through an analysis of the largest roots of a family of polynomials that we call the "mixed characteristic polynomials" of a collection of matrices.
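
    To make the notion of spectral approximation concrete, here is a hedged numerical sanity check (an illustration, not part of the talk), assuming for simplicity that B^T B has full rank: a row subset of B approximates "half of B" if all eigenvalues of M^{-1/2} (B_i^T B_i) M^{-1/2} lie in [1/2 - eps, 1/2 + eps], where M = B^T B.

    import numpy as np

    def is_spectral_half(B, rows, eps):
        """Check (1/2 - eps) B^T B <= B_i^T B_i <= (1/2 + eps) B^T B in the PSD order."""
        M = B.T @ B
        w, V = np.linalg.eigh(M)
        M_inv_half = V @ np.diag(w ** -0.5) @ V.T      # M^{-1/2}; assumes M is invertible
        Bi = B[rows]
        evals = np.linalg.eigvalsh(M_inv_half @ (Bi.T @ Bi) @ M_inv_half)
        return 0.5 - eps <= evals.min() and evals.max() <= 0.5 + eps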

    Joint work with Adam Marcus and Dan Spielman.


  8. Nisheeth Vishnoi (Microsoft Research, India)
    Approximating Fundamental Functions and Their Algorithmic Applications I & II

    Abstract: These talks will be about uniform approximations to real functions such as exp(-x), 1/x, and x^k and how one can combine such approximations with powerful tools such as Spielman-Teng Laplacian solvers and Conjugate-Gradient type methods to design fast algorithms for problems such as simulating random walks, graph partitioning and semi-definite programming.

    The first talk will introduce concepts and results from approximation theory and the second one will talk about applications to algorithm design.
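
    As a small illustration of the first part (a sketch, taking exp(-x) on an interval [0, b] as the target; the precise degree bounds used in the talks are in the reference below), one can interpolate at Chebyshev nodes and watch the uniform error drop rapidly with the degree:

    import numpy as np

    def cheb_uniform_error(func, a, b, deg, grid=10000):
        """Sup-norm error on [a, b] of the degree-`deg` Chebyshev interpolant of `func`."""
        nodes = np.cos((2 * np.arange(deg + 1) + 1) * np.pi / (2 * (deg + 1)))  # nodes in (-1, 1)
        x_nodes = 0.5 * (b - a) * (nodes + 1) + a                               # mapped to [a, b]
        coeffs = np.polynomial.chebyshev.chebfit(nodes, func(x_nodes), deg)
        xs = np.linspace(a, b, grid)
        ts = 2 * (xs - a) / (b - a) - 1
        return np.max(np.abs(np.polynomial.chebyshev.chebval(ts, coeffs) - func(xs)))

    for deg in (4, 8, 16):
        print(deg, cheb_uniform_error(lambda x: np.exp(-x), 0.0, 20.0, deg))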

    Reference: N. Vishnoi, S. Sachdeva, Approximation Theory and the Design of Fast Algorithms [pdf]


  9. Pandu Rangan (IIT Madras)
    Provably secure ID based signature scheme [pdf]

    Abstract: This talk aims to introduce provable security through signature schemes. Although several signature schemes exist in the literature, designing a scheme with a tight reduction to the underlying hard problem remains a challenging problem. We will point out why the existing schemes cannot be used or extended for our purposes and why we need a novel approach. We conclude the talk with some interesting open problems and research directions. The talk is presented in a tutorial style, in an almost self-contained manner. This talk is based on an award-winning paper by my students Sharmila Deva Selvi and Sree Vivek.


  10. Pranab Sen (TIFR)
    Norm embeddings and quantum computation

    Abstract: Norm embeddings already have many applications in (classical) computer science. In the last few years, some novel connections to quantum computation have also emerged. We shall see how Dvoretzky's theorem, a fundamental result in norm embeddings which states that a sphere in some norm in a high-dimensional space contains an approximate \ell_2-sphere along a subspace of large dimension, can be used to show that quantum channel capacity can be superadditive. That is, we shall see that there exist quantum channels through which more than twice the amount of information can be transmitted using two copies. We shall next see how to efficiently implement a quantum Johnson-Lindenstrauss transform, that is, an orthogonal projection onto a low-dimensional subspace that scales the \ell_2-lengths of a fixed set of vectors by almost the same amount. Using this efficient transform, we get a toy application to private information retrieval.
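
    For reference, here is the classical counterpart of the transform described above (a sketch; the quantum version in the talk additionally has to be implemented efficiently as a quantum circuit): a scaled random Gaussian map preserves the \ell_2-lengths of a fixed set of m vectors up to a (1 +/- eps) factor once the target dimension is on the order of eps^-2 log m.

    import numpy as np

    rng = np.random.default_rng(0)
    d, k, m = 2000, 200, 50
    X = rng.standard_normal((m, d))               # m fixed vectors in R^d
    G = rng.standard_normal((k, d)) / np.sqrt(k)  # scaled random Gaussian map to R^k
    ratios = np.linalg.norm(X @ G.T, axis=1) / np.linalg.norm(X, axis=1)
    print(ratios.min(), ratios.max())             # concentrated near 1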

    This talk will be in the nature of a survey talk. No familiarity with norm embeddings and only a brief familiarity with quantum computation will be assumed.

    Efficiently embedding \ell_2 in \ell_1 and quantum information locking

    Abstract: In this talk, we shall see another application to quantum computation of Dvoretzky's theorem, this time for a norm closely related to the \ell_1 norm. We will see how one can efficiently quantise a classical algorithm by Indyk for embedding \ell_2 in \ell_1. This quantisation gives us a new and powerful notion of uncertainty relation, called a metric uncertainty relation. Metric uncertainty relations allow us, amongst other things, to come up with the first efficient algorithms for information locking. Information locking is an inherently quantum cryptographic primitive whereby one can encrypt a uniformly distributed classical message into a quantum ciphertext with the help of a logarithmically small classical key, such that a bona fide party with access to the key can correctly decrypt the ciphertext, whereas an eavesdropper with limited quantum memory but unlimited classical computing power, who has no access to the key, gets almost no information about the message.

    This talk will be more technical than the survey talk on norm embeddings and quantum computation, but again, no familiarity with norm embeddings and only a brief familiarity with quantum computation will be assumed.


  11. Srikanth Srinivasan (IIT Bombay)
    Correlation bounds for constant-depth boolean circuits (Part II)

    Abstract: Proving correlation upper bounds against simple models of computation is a very natural and important research theme in computational complexity. What is fascinating is the following: say our goal is to prove that a function f is not deterministically efficiently computable in a given model. In many cases the only known way of proving that is by proving the seemingly much stronger result that efficient computation does not *even* correlate well with f.

    We will explore this theme mostly in the context of constant-depth circuits, where some interesting recent work has come out. We will do this in two parts. In the first part, we survey the landscape of known correlation bounds and some of their applications. We also outline some of the many open questions in the area. In the second part, we explain some of these arguments and techniques.

    [Joint session with Arkadev Chattopadhyay]


  12. Sunil Chandran (IISc)
    Representing a cubic graph as the intersection graph of axis-parallel boxes in three dimensions

    Abstract: We show that every graph of maximum degree 3 can be represented as the intersection graph of axis-parallel 3-dimensional boxes. That is, each vertex can be assigned a 3-dimensional box such that there is an edge between two vertices if and only if the corresponding boxes intersect. In fact, it is possible to get a touching representation, i.e., when two boxes intersect, we can make sure that they just touch.


  13. Vikram Sharma (IMSc)
    Robustness in Geometric Computation [pdf]

    Abstract: In this talk, I will give an overview of the Exact Geometric Computation (EGC) approach to handling non-robustness in computational geometry. This approach emphasizes guaranteeing the correct geometry even when numerical errors occur in computation. I will discuss some fundamental problems and tools in this approach. The approach, however, has the drawback that it is computationally quite expensive. To overcome this drawback, the concept of soft zero-tests has been proposed recently. In this setting, we allow some scope for error in our predicates. I will show some algorithms based on such predicates.
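
    The kind of non-robustness EGC guards against is easy to reproduce (a toy experiment, not from the talk): for nearly collinear points, the floating-point sign of the 2D orientation determinant can disagree with the exact sign of the very same rounded inputs, something exact rational arithmetic (at a computational cost) never does.

    import random
    from fractions import Fraction

    def orient2d(a, b, c):
        """Orientation determinant of (a, b, c): >0 left turn, <0 right turn, 0 collinear."""
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

    def sign(x):
        return (x > 0) - (x < 0)

    random.seed(0)
    bad = 0
    for _ in range(20000):
        a = (random.random(), random.random())
        d = (random.random(), random.random())
        t1, t2 = random.random(), random.random()
        b = (a[0] + t1 * d[0], a[1] + t1 * d[1])   # b and c are nearly collinear with a after rounding
        c = (a[0] + t2 * d[0], a[1] + t2 * d[1])
        exact = orient2d(*(tuple(map(Fraction, p)) for p in (a, b, c)))  # exact sign of the rounded inputs
        if sign(orient2d(a, b, c)) != sign(exact):
            bad += 1
    print(bad, "sign disagreements out of 20000 nearly collinear triples")

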
  14. Vipul Goyal (Microsoft Research, India)
    Emerging Encryption Systems for the Cloud: Survey and Challenges

    Abstract: Can we store data in the cloud in an encrypted format and still do meaningful operations on it? Ever since the emergence of cloud computing, this question has assumed central importance in the field of cryptography. The last few years have seen the development of an entirely new generation of cryptographic primitives to address these new challenges. I will give an overview of what these are and focus on one particular notion called functional encryption. I will start with attribute-based encryption (which is the precursor to functional encryption) and show how it can be used to enable access control on encrypted data. That is, it is possible to keep the entire database encrypted and give out keys in a nice way such that the parties can decrypt whatever they are authorized to (and nothing else). I will then discuss the notion of functional encryption, which is a much more general notion and even allows one to perform arbitrary computations over the encrypted data. Finally, I will present one number-theoretic construction of attribute-based encryption based on bilinear pairings and a variant of the discrete logarithm assumption.
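
    To fix the syntax being discussed (an interface-only schematic under the standard definitions; the names below are illustrative, and no security or concrete construction is implied):

    class FunctionalEncryption:
        """Schematic functional-encryption interface: decrypting with sk_f reveals f(x) and nothing more."""

        def setup(self, security_parameter):
            """Return a master public key mpk and a master secret key msk."""
            raise NotImplementedError

        def keygen(self, msk, f):
            """Return a secret key sk_f tied to the function f."""
            raise NotImplementedError

        def encrypt(self, mpk, x):
            """Return a ciphertext ct encrypting the plaintext x."""
            raise NotImplementedError

        def decrypt(self, sk_f, ct):
            """Return f(x) when ct encrypts x."""
            raise NotImplementedError

    Attribute-based encryption is then the special case in which x bundles a message with attributes and f is an access policy: decryption recovers the message exactly when the attributes satisfy the policy.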

Mysore Park Theory Workshop 2013