It is well known that modern neural networks are vulnerable to adversarial examples. To mitigate this problem, researchers have turned to adversarial training algorithms, which use adversarial examples as training data. However, although some of these methods can drive the robust training error to near zero, the robust generalization error remains high for all existing adversarial training algorithms. My talk is based on the paper "Why Robust Generalization in Deep Learning is Difficult: Perspective of Expressive Power" by Binghui Li, Jikai Jin, Han Zhong, John E. Hopcroft, and Liwei Wang. In this paper, the authors provide a theoretical understanding of this phenomenon from the perspective of the expressive power of deep neural networks.
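To make the setting concrete, here is a minimal sketch (not from the paper) of adversarial training on a toy logistic-regression model, using an FGSM-style perturbation. The perturbation budget `eps`, the learning rate, and the toy data are arbitrary choices for illustration only.

```python
import numpy as np

# Toy data: two Gaussian blobs in 2D, labels in {0, 1}.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

w = np.zeros(2)
b = 0.0
eps = 0.3   # L_inf perturbation budget (assumed value)
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(200):
    # FGSM-style step: move each input in the direction that increases
    # the logistic loss; for this model, dLoss/dx = (p - y) * w.
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w[None, :]
    X_adv = X + eps * np.sign(grad_x)
    # Train on the adversarial examples instead of the clean ones.
    p_adv = sigmoid(X_adv @ w + b)
    w -= lr * (X_adv.T @ (p_adv - y)) / len(y)
    b -= lr * np.mean(p_adv - y)

# Robust training accuracy: accuracy on FGSM-perturbed training points.
p = sigmoid(X @ w + b)
grad_x = (p - y)[:, None] * w[None, :]
X_adv = X + eps * np.sign(grad_x)
robust_acc = np.mean((sigmoid(X_adv @ w + b) > 0.5) == y)
print("robust training accuracy:", round(robust_acc, 2))
```

On a toy problem like this, the robust training error can indeed be driven near zero; the phenomenon the paper studies is that, for deep networks on real data, robust *generalization* error stays high even when robust training error is small.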