Arkadev Chattopadhyay

Tuesday, 22 Nov 2016, 16:00 to 17:00

A-201 (STCS Seminar Room)

Abstract

We develop new techniques for rounding packing integer programs using iterative randomized rounding, based on a novel application of multidimensional Brownian motion in $\mathbb{R}^n$. Let $\tilde{x} \in {[0,1]}^n$ be a fractional feasible solution of the packing constraints $A x \leq \mathbf{1}$, $A \in {\{0,1 \}}^{m\times n}$, that maximizes a linear objective function.

Our algorithm iteratively transforms $\tilde{x}$ into $\hat{x} \in {\{ 0,1\}}^{n}$ using a random walk, such that the expected values of the $\hat{x}_i$'s are consistent with Raghavan--Thompson rounding. In addition, it yields intermediate values $x'$ that can be used to bias the rounding towards a superior solution. The algorithm gradually sparsifies $A$ to $A' \in {\{0,1 \}}^{m\times n}$, where each row of $A'$ has at most $\log n$ non-zero coefficients and $A'\cdot x' \leq O(1)$. The reduced dependencies between the constraints of this sparser system can be exploited using the {\it Lov\'{a}sz Local Lemma}. Using the Moser--Tardos constructive version, $x'$ converges in polynomial time to $\hat{x}$, distributed over the unit hypercube ${\cal H}_n = {\{0,1 \}}^n$ in such a way that the expected value of any linear objective function over ${\cal H}_n$ equals its value at $\tilde{x}$.
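The core of the random-walk rounding step above can be illustrated with a minimal sketch: each fractional coordinate performs an unbiased walk on $[0,1]$ until it is absorbed at $0$ or $1$. Since each walk is (up to the step discretization) a martingale, the expected value of each rounded coordinate approximately equals its fractional value, matching the marginals of Raghavan--Thompson rounding. This is a hypothetical illustration only; it omits the Brownian-motion machinery, the sparsification of $A$, and the Lovász Local Lemma argument that the talk describes.

```python
import random

def random_walk_round(x, step=0.01):
    """Round each fractional coordinate in x to {0,1} via an unbiased
    random walk, absorbed at the boundary of [0,1].  The martingale
    property gives E[x_hat[i]] ~ x[i] (up to step discretization);
    correlations between constraints are ignored in this sketch."""
    x_hat = []
    for xi in x:
        while 0.0 < xi < 1.0:
            # take an unbiased +/- step, clipped to the unit interval
            xi += step if random.random() < 0.5 else -step
            xi = min(max(xi, 0.0), 1.0)
        x_hat.append(int(round(xi)))
    return x_hat
```

Coordinates already at $0$ or $1$ are left untouched, and averaging many independent roundings of a coordinate recovers its fractional value.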

We discuss applications of these techniques when $A$ is a random matrix, and also in the more general setting of a $k$-column-sparse matrix (joint work with Dhiraj Madan).

