Girish Varma

Aditya Nema

Friday, 21 Nov 2014, 14:00 to 15:30

D-405 (D-Block Seminar Room)

Abstract

In this talk, we will go through some of the results in the paper "Communication with Imperfectly Shared Randomness" by Canonne, Guruswami, Meka and Sudan.

The problem of compression with uncertain priors is the following: Alice has a distribution P on a universe [N] = {1,...,N}, and a message m ∈ [N] chosen according to the distribution P. Alice is allowed to send some bits to Bob, Bob should output m, and the goal is to minimize the expected number of bits that Alice sends Bob (over the random choice of m). If Bob knows the distribution P exactly, then this is the classical compression problem, solved for example by Huffman coding. In most forms of natural communication (e.g., think about the next email you are about to send), Alice and Bob are not perfectly aware of the underlying context of their exchange, but have reasonably good ideas about each other. One way to model this is to say that Bob has a distribution Q that is close to the distribution P that Alice is working with, but is not identical to P. Compressing information down to its entropy in the presence of such uncertainty (i.e., P ≠ Q) turns out to be possible if Alice and Bob share randomness that is independent of (P, Q, m), as shown by Juba et al. It is natural to ask: can the (presumed) savings in communication be achieved in the absence of perfect sharing of randomness?
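As a point of reference for the uncertain-priors problem, the classical case where Bob knows P exactly is solved by Huffman coding, mentioned above. The following is a minimal illustrative sketch of that baseline (not code from the paper): it builds a prefix-free code whose expected length is within one bit of the entropy H(P).

```python
import heapq


def huffman_code(P):
    """Build a Huffman code for a distribution P (dict: symbol -> probability).

    Returns a dict mapping each symbol to its binary codeword. The code is
    prefix-free, and its expected length is within 1 bit of the entropy H(P).
    """
    # Heap entries: (probability, tiebreak counter, {symbol: codeword-so-far}).
    # The counter prevents Python from comparing the dicts on probability ties.
    heap = [(p, i, {m: ""}) for i, (m, p) in enumerate(P.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        # Repeatedly merge the two least probable subtrees,
        # prefixing '0' to one side's codewords and '1' to the other's.
        p1, _, c1 = heapq.heappop(heap)
        p2, _, c2 = heapq.heappop(heap)
        merged = {m: "0" + c for m, c in c1.items()}
        merged.update({m: "1" + c for m, c in c2.items()})
        heapq.heappush(heap, (p1 + p2, count, merged))
        count += 1
    return heap[0][2]


# A dyadic distribution on [4]; here the expected code length
# exactly matches the entropy H(P) = 1.75 bits.
P = {1: 0.5, 2: 0.25, 3: 0.125, 4: 0.125}
code = huffman_code(P)
expected_length = sum(p * len(code[m]) for m, p in P.items())
```

The point of the talk's setting is precisely that this construction breaks down when Bob decodes with a nearby but different distribution Q: Alice's codewords are optimized for P, and without shared randomness the mismatch can make decoding fail outright.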

If time permits, we will also look at other problems where solutions with perfect sharing of randomness can be extended to the imperfect setting.

