Abstract:
In this thesis, we study the following problems:
- Adversarial hypothesis testing is a model for problems where the observed data is not independent and identically distributed according to a fixed distribution; the samples may instead come from distributions chosen arbitrarily by an adversary. We show that sequential tests can achieve strictly better performance than fixed-length tests in this setting.
- Arbitrarily Varying Channels (AVCs) model channels whose state can vary arbitrarily over time during transmission. We study the problem of distinguishing between two AVCs when the transmitter (i) is deterministic, (ii) may privately randomize, or (iii) shares randomness with the detector.
- In many practical hypothesis testing problems, the hypotheses may not exactly model the observed data. In such situations, we would like the test to output the hypothesis that is closer to the true distribution of the underlying data. It turns out that this is possible only when the two hypotheses are sufficiently separated. We give a lower bound on the optimal separation when closeness is measured in terms of the Hellinger distance (recalled below).
- Obtaining bounds on the expected generalization error of a machine learning algorithm is an important problem. We obtain a family of bounds based on the Rényi divergence (recalled below) that recovers some existing bounds as special cases. Moreover, for certain values of the Rényi parameter, our bounds can be tighter than the existing ones.
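For reference, the two divergences mentioned above admit the following standard definitions (a minimal sketch; the exact normalization conventions used in the body of the thesis may differ). For distributions $P$ and $Q$ with densities $p$ and $q$ with respect to a common dominating measure $\mu$,
\[
H(P, Q) = \left( \frac{1}{2} \int \left( \sqrt{p} - \sqrt{q} \right)^2 d\mu \right)^{1/2},
\qquad
D_\alpha(P \,\|\, Q) = \frac{1}{\alpha - 1} \log \int p^\alpha q^{1-\alpha} \, d\mu,
\quad \alpha \in (0,1) \cup (1,\infty),
\]
where $D_\alpha$ recovers the Kullback-Leibler divergence in the limit $\alpha \to 1$.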