Motivated by the increasing adoption of models that facilitate greater automation in risk management and decision-making, this talk presents a novel Importance Sampling (IS) scheme for estimating distribution tails for a rich class of objectives modelled with tools such as mixed-integer linear programs and deep neural networks. A key challenge with conventional efficient sampling approaches in these settings is the need to intricately tailor the sampler to the underlying probability distribution and the objective. The proposed black-box scheme overcomes this challenge by automating the selection of an effective IS density through a transformation that implicitly learns and replicates the concentration properties observed in less rare samples. Despite its simple and scalable implementation, this self-structuring IS scheme achieves asymptotically optimal variance reduction across a spectrum of multivariate distributions with both light and heavy tails.
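For intuition, the following is a minimal sketch of the generic IS idea the scheme builds on: estimate a tail probability by sampling from a proposal density under which the rare event is typical, then reweight by the likelihood ratio. The exponentially tilted Gaussian example below is a textbook special case chosen for illustration, not the self-structuring scheme of the talk; the threshold `u` and sample size are arbitrary.

```python
import math

import numpy as np

rng = np.random.default_rng(0)

def is_tail_estimate(u: float, n: int = 100_000) -> float:
    """Estimate P(X > u) for X ~ N(0, 1) by importance sampling.

    Proposal: N(u, 1) (exponential tilting), so exceedances of u
    are typical under the sampling density rather than rare.
    """
    y = rng.normal(loc=u, scale=1.0, size=n)   # draw from the proposal N(u, 1)
    log_w = -u * y + 0.5 * u**2                # log of N(0,1)/N(u,1) density ratio
    return float(np.mean(np.exp(log_w) * (y > u)))

u = 4.0
est = is_tail_estimate(u)
exact = 0.5 * math.erfc(u / math.sqrt(2.0))    # closed form for comparison
```

A naive Monte Carlo estimate at this threshold would need on the order of millions of samples to see even a handful of exceedances; the tilted estimator attains small relative error with far fewer. The talk's scheme automates the choice of such a proposal without hand-deriving the tilt for each distribution and objective.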
This approach is guided by a large deviations principle that brings out, in considerable generality, the self-similarity of optimal IS distributions. Beyond helping certify variance reduction, the large deviations principle serves as a tool for readily deriving new tail-risk asymptotics and algorithms in settings such as distribution networks.