Mysore Park Workshops


2nd Annual Mysore Park Workshop in Theoretical Computer Science: Algorithms and Complexity (5-9 May, 2011)

Talks (with abstracts)

  1. V. Arvind (IMSC, Chennai)
    Noncommutative arithmetic circuit complexity and finite automata

    Abstract: In this talk we explain some results on noncommutative arithmetic circuits that are proved using properties of finite automata. Specifically, we define the Hadamard product of multivariate polynomials and use it to show the following results:
    1. a polynomial identity test for noncommutative algebraic branching programs over the rationals, which gives a tight complexity classification of the problem.
    2. if the noncommutative determinant polynomial has small noncommutative arithmetic circuits then so does the noncommutative permanent.
    (Joint work with Partha Mukhopadhyay, Srikanth Srinivasan, and Pushkar Joglekar.)
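    The Hadamard product referred to above multiplies two polynomials coefficient-wise, monomial by monomial. A minimal sketch (our own illustration, not from the talk), representing noncommutative polynomials as dicts keyed by monomials, i.e. ordered tuples of variables:

    ```python
    def hadamard_product(f, g):
        """Hadamard (coefficient-wise) product of two polynomials,
        represented as dicts mapping monomials (tuples of variables)
        to coefficients: (f o g)[m] = f[m] * g[m]."""
        return {m: c * g[m] for m, c in f.items() if m in g}

    # f = 2*xy + 3*yx, g = 5*xy  (noncommutative: the word xy differs from yx)
    f = {('x', 'y'): 2, ('y', 'x'): 3}
    g = {('x', 'y'): 5}
    print(hadamard_product(f, g))  # {('x', 'y'): 10}
    ```

    Monomials absent from either polynomial vanish in the product, which is what makes the Hadamard product a useful filtering tool.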
  2. Nikhil Bansal (IBM Research, Watson Center, New York)
    Tutorial: Primal Dual Method for Online Algorithms and the k-server Problem [slides (ppt)]

    Abstract: Recently, the primal dual method has emerged as a key tool for designing competitive online algorithms. In this talk we will describe the online primal dual method, discuss how it is applicable to several seemingly unrelated problems, and also describe a recent polylogarithmic competitive algorithm for the k-server problem inspired by the primal dual method.
  3. Nikhil Bansal (IBM Research, Watson Center)
    Approximation Algorithms for Discrepancy Problems [slides (ppt)]

    Abstract: Finding a minimum discrepancy coloring is a fundamental problem in combinatorics and is defined as follows: given a collection of sets S1,...,Sm, color the elements red and blue such that each set is colored as evenly as possible. We will describe the first polynomial time algorithm that finds low discrepancy colorings in various settings, such as when the set system is guaranteed to have low hereditary discrepancy. Our algorithm is based on a new rounding technique for SDPs, together with previously known non-constructive techniques. Time permitting, we will also describe some more refined applications of this approach, such as a constructive proof of Spencer's celebrated "six standard deviations suffice" result.
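    To make the definition concrete: identifying red with +1 and blue with -1, the discrepancy of a coloring is the worst imbalance over all sets, and a minimum discrepancy coloring minimizes this. A small illustrative computation (names are ours, not from the talk):

    ```python
    def discrepancy(sets, coloring):
        """Discrepancy of a +/-1 coloring: the maximum over all sets
        of the absolute imbalance between the two colors."""
        return max(abs(sum(coloring[e] for e in s)) for s in sets)

    sets = [{0, 1, 2, 3}, {1, 2}, {0, 3}]
    coloring = {0: +1, 1: -1, 2: +1, 3: -1}  # red = +1, blue = -1
    print(discrepancy(sets, coloring))  # 0: every set is split evenly
    ```

    Finding a coloring that achieves low discrepancy simultaneously for all sets is the hard part; the talk's algorithm constructs one where existence was previously only known non-constructively.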
  4. Sunil Chandran (IISc, Bangalore)
    Boxicity and Partial Order Dimension [slides (pdf)]

    Abstract: Let G be a simple, undirected, finite graph with vertex set V(G) and edge set E(G). The boxicity of G, box(G) is the minimum integer k such that G can be represented as the intersection graph of k-dimensional boxes. For a poset P with ground set S, the dimension of the poset, dim(P) is the minimum integer t such that P can be expressed as the intersection of t total orders.

    Let G_P be the underlying comparability graph of the poset P, i.e. S is the vertex set and two vertices are adjacent if and only if they are comparable in P. It is a well-known fact that posets with the same underlying comparability graph have the same dimension. The first result of this paper links the dimension of a poset to the boxicity of its underlying comparability graph. In particular, we show that for any poset P, box(G_P)/(\chi(G_P)-1) \le dim(P) \le 2 box(G_P), where \chi(G_P) is the chromatic number of G_P and \chi(G_P) \ne 1. It immediately follows that if P is a height-2 poset, then box(G_P) \le dim(P) \le 2 box(G_P), since the underlying comparability graph of a height-2 poset is a bipartite graph.

    The second result of the paper relates the boxicity of a graph G with a natural partial order associated with the extended double cover of G, denoted G_c. Note that G_c is a bipartite graph with partite sets A and B, which are copies of V(G), such that corresponding to every vertex u of V(G) there are two vertices u_A in A and u_B in B, and (u_A, v_B) is an edge in G_c if and only if either u = v or u is adjacent to v in G. Let P_c be the natural height-2 poset associated with G_c, obtained by making A the set of minimal elements and B the set of maximal elements. We show that box(G)/2 \le dim(P_c) \le 2 box(G) + 4.

  5. Manoj Gupta (IIT Delhi)
    Fully dynamic maximal matching in O(log n) update time [slides (pdf)]

    Abstract: We present an algorithm for maintaining a maximal matching in a graph under addition and deletion of edges. Our data structure is randomized and takes O(log n) expected amortized time per edge update, where n is the number of vertices in the graph. While there is a trivial O(n) algorithm for edge update, the previous best known result for this problem for a graph with n vertices and m edges is O((n+m)^0.7072), which is sub-linear only for a sparse graph. For the related problem of maximum matching, Onak and Rubinfeld designed a randomized data structure that achieves O(log^2 n) amortized time per update for maintaining a c-approximate maximum matching for some large constant c. In contrast, we can maintain a factor-two approximate maximum matching in O(log n) expected time per update as a direct corollary of the maximal matching scheme. This in turn also implies a factor-two approximate vertex cover maintenance scheme that takes O(log n) expected time per update.
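    For contrast, the trivial O(n)-per-update baseline mentioned above can be sketched as follows (our own sketch of the naive scheme, not the talk's randomized data structure): an insertion matches the new edge if both endpoints are free, and a deletion of a matched edge lets each freed endpoint scan its neighbours for a free partner.

    ```python
    class NaiveMaximalMatching:
        """Trivial maximal matching under edge updates, O(deg) = O(n) per
        update. mate[v] is v's matched partner, or None if v is free."""
        def __init__(self, n):
            self.adj = {v: set() for v in range(n)}
            self.mate = {v: None for v in range(n)}

        def insert(self, u, v):
            self.adj[u].add(v); self.adj[v].add(u)
            if self.mate[u] is None and self.mate[v] is None:
                self.mate[u], self.mate[v] = v, u

        def delete(self, u, v):
            self.adj[u].discard(v); self.adj[v].discard(u)
            if self.mate.get(u) == v:            # the deleted edge was matched
                self.mate[u] = self.mate[v] = None
                for w in (u, v):                 # try to re-match each endpoint
                    for x in self.adj[w]:
                        if self.mate[w] is None and self.mate[x] is None:
                            self.mate[w], self.mate[x] = x, w
    ```

    The invariant maintained is maximality: no edge has both endpoints free. The talk's contribution is achieving this with O(log n) expected amortized work instead of a full neighbourhood scan.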
  6. Ramesh Hariharan (Strand Life Sciences)
    Tutorial: Sparsifying graphs while maintaining cut sizes [slides (pdf)]

    Abstract: Benczur and Karger observed a while ago that sampling each edge with probability p and giving it weight 1/p (if chosen) can yield sparse graphs that preserve all cut sizes, provided p is chosen appropriately. This talk will describe various choices for p, in the process conveying intuition on why some values of p work while others do not.
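    The sampling scheme itself is one line; the whole difficulty lies in choosing p and proving concentration for all cuts simultaneously. A sketch of the sampling step only (our illustration, not from the talk):

    ```python
    import random

    def sparsify(edges, p):
        """Keep each edge independently with probability p, and give each
        kept edge weight 1/p. Every cut's weight is then preserved in
        expectation; choosing p well makes it concentrate for all cuts."""
        return [(u, v, 1.0 / p) for (u, v) in edges if random.random() < p]

    # with p = 1 nothing is dropped and every weight is exactly 1
    print(sparsify([(0, 1), (1, 2), (2, 3)], 1.0))
    ```

    Linearity of expectation gives the "in expectation" claim immediately; the talk's content is why appropriate values of p make the deviation small for every cut at once.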
  7. Rahul Jain (National University of Singapore)
    Tutorial: Communication complexity: Lower bound methods and the Direct Sum/Direct Product questions [slides (pptx)]

    Abstract: In this survey on communication complexity we will look at some lower bound methods: the rectangle/corruption bound, the smooth rectangle bound, the discrepancy bound, the smooth discrepancy bound and the partition bound, along with a few recent applications.

    In the second part we will survey results regarding the Direct Sum and Direct Product questions in various models of communication complexity. These questions essentially ask whether it is possible to compute k simultaneous instances of a function f more efficiently than by using k times the resources required for a single instance.

  8. Rahul Jain (National University of Singapore)
    New strong direct product results in communication complexity [slides (pptx)]

    Abstract: We show two new direct product results in two different models of communication complexity. Our first result is in the one-way public-coin model. Let f subset of X times Y times Z be a relation and eps be a constant. Let R^{1,pub}_eps(f) represent the communication complexity of f with worst case error eps in this model. We show that if only o(k R^{1,pub}_eps(f)) communication is provided for computing f^k (k independent copies of f) in this model, then the success probability is exponentially small in k. To our knowledge this is the first time a strong direct product result holding for all relations has been shown in any model of communication complexity. We show a new tight characterization of communication complexity in this model which strengthens the tight characterization shown in J., Klauck, Nayak, 2008. We use this new characterization to show our direct product result, and the characterization may also be of independent interest.

    Our second direct product result is in the model of two-way public-coin communication complexity. We show a direct product result for all relations in this model in terms of a new complexity measure that we define. Our new measure is a generalization, to non-product distributions, of the two-way product subdistribution bound of J., Klauck and Nayak, 2008. Our direct product result therefore generalizes their direct product result in terms of the two-way product subdistribution bound to non-product distributions. As an application of our new direct product result, we reproduce (via completely different arguments) the strong direct product result for the set-disjointness problem previously shown by Klauck, 2010. We do this by showing that our new complexity measure gives a tight lower bound of Omega(n) for the set-disjointness problem on n-bit inputs. In addition, we show that many previously known direct product results in this model are uniformly implied and often strengthened by our result.

  9. Ravi Kannan (Microsoft Research India)
    k-means converges

    Abstract: The k-means algorithm is very widely used. Generally, proofs of convergence (to the desired result) are known only under stochastic models of data. Here, we prove exponential convergence under a deterministic assumption we call "proximity". Proximity holds under all known stochastic models, so this subsumes earlier results. Proximity requires that each data point is closer by a certain delta to the center of its own cluster than to the center of any other cluster, in the projection to the space of centers. Our proof requires a good start, and we show that linear algebra provides us with such a start.

    Joint with Amit Kumar.
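    For reference, one iteration of the standard k-means (Lloyd) algorithm that the talk analyzes: assign each point to its nearest center, then move each center to the mean of its assigned points. A generic sketch (ours, not the paper's analysis):

    ```python
    def kmeans_step(points, centers):
        """One Lloyd iteration: nearest-center assignment, then re-center
        each cluster at the mean of its assigned points."""
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(p, centers[j])))
            clusters[i].append(p)
        # empty clusters keep their old center
        return [tuple(sum(c) / len(pts) for c in zip(*pts)) if pts else ctr
                for pts, ctr in zip(clusters, centers)]

    pts = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
    print(kmeans_step(pts, [(0.0, 0.0), (10.0, 10.0)]))
    # [(0.0, 0.5), (10.0, 10.5)]
    ```

    The talk's result says that under the proximity condition, iterating this step converges exponentially fast to the desired clustering, given a good (linear-algebraic) starting point.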

  10. Neeraj Kayal (Microsoft Research India)
    Tutorial: A survey of lower bounds in arithmetic complexity

    Abstract: Arithmetic Complexity addresses the following question: given a polynomial f, can it be computed efficiently? In particular, it is conjectured that the permanent of an n x n matrix cannot be computed efficiently. In this talk we will take the permanent as our running example. We will see lower bounds for computing the permanent in several restricted models of arithmetic computation.
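    As context for the running example: the permanent is defined like the determinant but without the alternating signs, and no efficient algorithm for it is known. A direct-from-definition sketch (ours, exponential time; even the best known algorithms, such as Ryser's formula, remain exponential):

    ```python
    from itertools import permutations

    def permanent(M):
        """Permanent from the definition: sum over all permutations of the
        product of selected entries, with no signs. O(n! * n) time."""
        n = len(M)
        total = 0
        for perm in permutations(range(n)):
            prod = 1
            for i in range(n):
                prod *= M[i][perm[i]]
            total += prod
        return total

    print(permanent([[1, 2], [3, 4]]))  # 1*4 + 2*3 = 10
    ```

    The determinant of the same matrix is 1*4 - 2*3 = -2; dropping the signs is precisely what is conjectured to destroy efficient computability.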
  11. Amit Kumar (IIT Delhi)
    Approximating Graphic TSP by Matchings [slides (ppt)]

    Abstract: I will present a recent result of Tobias Momke and Ola Svensson on approximating the traveling salesman problem on unweighted graphs. They give a 1.46-approximation for this problem. Their framework uses matchings in a new way: earlier approximation algorithms for this problem added a matching to a spanning tree to make it Eulerian, whereas they show how one can use matchings to delete edges as well.
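    The earlier approach alluded to above works because a spanning tree fails to be Eulerian exactly at its odd-degree vertices, and adding a perfect matching on those vertices fixes every parity at once. A sketch of that parity bookkeeping (our illustration of the classical step, not the new algorithm):

    ```python
    from collections import Counter

    def odd_degree_vertices(edges):
        """Vertices of odd degree in a (multi)graph. In the classical TSP
        approximation, a matching on exactly these vertices is added to a
        spanning tree so that all degrees become even, i.e. Eulerian."""
        deg = Counter()
        for u, v in edges:
            deg[u] += 1
            deg[v] += 1
        return sorted(v for v, d in deg.items() if d % 2 == 1)

    # a path 0-1-2-3: only the endpoints 0 and 3 have odd degree
    print(odd_degree_vertices([(0, 1), (1, 2), (2, 3)]))  # [0, 3]
    ```

    The number of odd-degree vertices is always even (handshake lemma), so such a matching exists; the novelty in the talk is using matchings to remove edges rather than only add them.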
  12. Daniel Lokshtanov (Univ. California, San Diego)
    An Introduction to Treewidth [slides (pptx)]

    Abstract: Treewidth is a graph parameter that has played a central role in graph theory and graph algorithms over the last 30 years. In this talk I will give a brief introduction to the concept and cover some of the basic properties of bounded treewidth graphs. Topics include fast algorithms for bounded treewidth graphs and balanced separator theorems for graphs of bounded treewidth.
  13. Daniel Lokshtanov (Univ. California, San Diego)
    Treewidth as a tool for Algorithm Design [slides (pptx)]

    Abstract: An area where treewidth has turned out to be particularly useful is the design of algorithms for hard problems on planar graphs. We prove a couple of decomposition theorems for planar graphs, and show how they can be used to design algorithms for a myriad of problems in different domains. In particular, we will use the decomposition theorems to give fast exact algorithms, parameterized algorithms and approximation schemes for a number of problems. We then show that our algorithms generalize to work not only on planar graphs, but also on more general or related graph classes.
  14. Nutan Limaye (IIT Bombay)
    Williams, NEXP is not contained in ACC0 [slides (pdf)]

    Abstract: In 2010, Ryan Williams proved an unconditional separation between NEXP and ACC0. In this talk we outline the key proof techniques involved in proving this result.
  15. C. Pandu Rangan (ISI Chennai and IIT Madras)
    Provably secure crypto systems - A new and exciting direction in theory research

    Abstract: Crypto systems such as encryption schemes are supposed to offer confidentiality, and signature schemes must be unforgeable. However, several such schemes designed in the past with only informal arguments for their security properties were later found to have major flaws. Thus, the question of how one can formally argue for a claimed security property is of fundamental importance. For example, how can one give a mathematical guarantee of the unforgeability of a signature scheme? How can one say for sure that an encryption scheme provides confidentiality and that no one else can obtain the hidden message? Towards this, one has to formalise the notion of security, the attack models, and the information available to the attacker. This talk will act as a curtain raiser and invite the theory community to explore the area called provable security in cryptology. No prerequisites are needed. I will present the material from the ground up and pose some interesting open problems as well.
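    One way such security notions are formalised is as a game between a challenger and an adversary. A sketch of the standard unforgeability game for signatures (our own illustration of the general idea, not material from the talk): the adversary may request signatures on messages of its choice, and wins by producing a valid signature on a message it never queried.

    ```python
    def unforgeability_game(keygen, sign, verify, adversary):
        """Unforgeability game sketch: the adversary gets the public key and
        a signing oracle, and must forge a signature on a fresh message.
        A scheme is secure if no efficient adversary wins non-negligibly."""
        pk, sk = keygen()
        queried = []
        def oracle(msg):
            queried.append(msg)
            return sign(sk, msg)
        msg, sig = adversary(pk, oracle)
        return msg not in queried and verify(pk, msg, sig)

    # A toy "scheme" whose signature is the message itself is trivially
    # forgeable: the adversary wins without a single oracle query.
    toy_keygen = lambda: (0, 0)
    toy_sign = lambda sk, m: m
    toy_verify = lambda pk, m, s: s == m
    forger = lambda pk, oracle: ("forged", "forged")
    print(unforgeability_game(toy_keygen, toy_sign, toy_verify, forger))  # True
    ```

    A security proof then argues that any adversary winning this game can be turned into an algorithm solving a problem believed to be hard, which is the reduction style the talk introduces.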
  16. Jaikumar Radhakrishnan (TIFR, Mumbai)
    Communication Complexity of the gap Hamming problem [slides (pdf)]

  17. Ramprasad Saptharishi (Chennai Mathematical Institute)
    Saxena-Seshadhri, Blackbox Identity Testing for Bounded Top Fanin Depth-3 Circuits: the field doesn't matter [slides (pdf)]

    Abstract: In this talk we look at a recent result of Saxena and Seshadhri (STOC 2011) that gives a polynomial-sized hitting set for bounded top fan-in depth-3 circuits over any field.

    All previous attempts at black-box identity tests were via "rank bounds" (introduced by Dvir and Shpilka). These approaches essentially converted the circuit to an "isomorphic" circuit over few variables (as many as the rank). Over the field of rationals, Kayal and Saraf showed that the rank of the circuit is bounded by a function of k alone (where k is the top fan-in), using a deep theorem on the geometry of points in R^n. This directly translated to a polynomial time blackbox algorithm for bounded fan-in depth-3 circuits. Saxena and Seshadhri subsequently improved the dependence on k, and also showed a rank bound of k^2 log d over finite fields (where d is the degree). However, this translated only to a quasipolynomial time blackbox algorithm due to the dependence on d. This was unavoidable, as there is a rank lower bound of k log d over finite fields (Kayal and Saxena 2007).

    In this paper, Saxena and Seshadhri deviate from this approach and preserve a "certificate of non-zeroness" rather than creating an isomorphic circuit. By looking carefully at the Kayal-Saxena test, they extract a "low-rank ideal" that certifies non-zeroness of the circuit. They then show how this ideal can be preserved by similar transformations, thus yielding the result.

Organizing committee