
Which Training Methods for GANs do actually Converge?

2018

Conference Paper

Recent work has shown local convergence of GAN training for absolutely continuous data and generator distributions. In this paper, we show that the requirement of absolute continuity is necessary: we describe a simple yet prototypical counterexample showing that in the more realistic case of distributions that are not absolutely continuous, unregularized GAN training is not always convergent. Furthermore, we discuss regularization strategies that were recently proposed to stabilize GAN training. Our analysis shows that GAN training with instance noise or zero-centered gradient penalties converges. On the other hand, we show that Wasserstein-GANs and WGAN-GP with a finite number of discriminator updates per generator update do not always converge to the equilibrium point. We discuss these results, leading us to a new explanation for the stability problems of GAN training. Based on our analysis, we extend our convergence results to more general GANs and prove local convergence for simplified gradient penalties even if the generator and data distributions lie on lower dimensional manifolds. We find these penalties to work well in practice and use them to learn high-resolution generative image models for a variety of datasets with little hyperparameter tuning.
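
The simplified zero-centered gradient penalties discussed in the abstract penalize the squared gradient norm of the discriminator on real samples (R1) or on generated samples (R2). As a rough illustration, a minimal PyTorch sketch of an R1-style penalty is given below; the function name, default weight gamma, and discriminator interface are illustrative assumptions and not taken from the authors' released code.

import torch

def r1_penalty(discriminator, real_x, gamma=10.0):
    # Zero-centered gradient penalty on real data (R1-style regularizer).
    # Assumes `discriminator` maps a batch of inputs to unnormalized logits.
    real_x = real_x.detach().requires_grad_(True)
    logits = discriminator(real_x)
    # Gradient of the discriminator output w.r.t. the real inputs
    grads = torch.autograd.grad(outputs=logits.sum(), inputs=real_x,
                                create_graph=True)[0]
    # Penalize the squared gradient norm, pushing it toward zero at the data
    return 0.5 * gamma * grads.pow(2).reshape(grads.size(0), -1).sum(1).mean()

In training, this term would simply be added to the discriminator loss for each batch of real samples, e.g. loss_d = bce_loss + r1_penalty(D, x_real).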

Author(s): Lars Mescheder and Andreas Geiger and Sebastian Nowozin
Book Title: International Conference on Machine Learning (ICML)
Year: 2018

Department(s): Autonomous Vision
Research Project(s): Convergence and Stability of GAN training
Bibtex Type: Conference Paper (conference)
Paper Type: Conference

Event Place: Stockholm

Links: code
Attachments: pdf, poster

BibTeX

@conference{MeschederICML2018,
  title = {Which Training Methods for GANs do actually Converge?},
  author = {Mescheder, Lars and Geiger, Andreas and Nowozin, Sebastian},
  booktitle = {International Conference on Machine Learning (ICML)},
  year = {2018}
}