GANs (3): Challenges in GANs

My prepared slides for a review of GANs.

Classification of Problems and Related Papers:

First of all, read the GANs tutorial from NIPS 2016.

1. Mode Collapse

The goal of learning the full data distribution is not achieved; only part of the dataset is captured. A VAE can learn a representation of the whole dataset but tends not to be sharp on individual samples, while GANs can produce sharp images but run into mode collapse more easily.
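To make "capturing only part of the dataset" concrete, here is a minimal sketch (not a full GAN; `collapsed_sampler`, `mode_coverage`, and the 8-Gaussian setup are illustrative assumptions) that quantifies mode coverage on the classic ring-of-Gaussians toy task:

```python
# A minimal sketch: measuring mode collapse on a toy 8-Gaussian mixture.
# `collapsed_sampler` is a hypothetical generator that has latched onto a
# single mode; the coverage check shows how few true modes it reaches.
import numpy as np

rng = np.random.default_rng(0)

# True data distribution: 8 Gaussians on a circle (a common GAN toy task).
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
centers = np.stack([np.cos(angles), np.sin(angles)], axis=1) * 2.0

def collapsed_sampler(n):
    """Stand-in for a mode-collapsed generator: samples only one mode."""
    return centers[0] + 0.05 * rng.standard_normal((n, 2))

def mode_coverage(samples, centers, radius=0.3):
    """Count how many mixture modes receive at least one nearby sample."""
    dists = np.linalg.norm(samples[:, None, :] - centers[None, :, :], axis=2)
    hit = dists.min(axis=0) < radius  # per mode: any sample within radius?
    return int(hit.sum())

samples = collapsed_sampler(1000)
print(f"modes covered: {mode_coverage(samples, centers)} / {len(centers)}")
```

A healthy generator should cover all 8 modes; a collapsed one like the stand-in above covers only one, which is exactly the failure mode the papers below target.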

Unrolled Generative Adversarial Networks; (2017ICLR) **

Improved Techniques for Training GANs; (2016) ***

On Distinguishability Criteria for Estimating Generative Models; (2014) **

Towards Principled Methods for Training GANs; (2017ICLR) ***

Mode Regularized Generative Adversarial Networks; (2016) **

Improved Training of Generative Adversarial Networks using Representative Features; (2018ICML) **

2. Non-convergence

It is hard to reach a Nash equilibrium in this setting. A simple example is a min-max two-player game solved by simultaneous gradient descent, sketched below.
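As a minimal sketch of that example (assuming the standard toy game f(x, y) = xy, whose unique Nash equilibrium is (0, 0)), simultaneous gradient descent/ascent fails to converge:

```python
# Simultaneous gradient descent/ascent on the zero-sum game f(x, y) = x * y.
# The unique Nash equilibrium is (0, 0), yet the simultaneous updates
# spiral away from it instead of converging.
lr = 0.1
x, y = 1.0, 1.0
for step in range(1, 101):
    grad_x = y  # df/dx: the x-player does gradient descent on f
    grad_y = x  # df/dy: the y-player does gradient ascent on f
    x, y = x - lr * grad_x, y + lr * grad_y  # simultaneous update
    if step % 25 == 0:
        norm = (x * x + y * y) ** 0.5
        print(f"step {step:3d}: x={x:+.3f}, y={y:+.3f}, norm={norm:.3f}")
# The norm grows every iteration: the update matrix has eigenvalues
# 1 +/- i*lr with modulus > 1, so the iterates are unstable around
# the equilibrium, mirroring GAN non-convergence.
```

In continuous time the dynamics circle the equilibrium forever; the explicit discretization pushes the iterates outward, which is one motivation for the modified update rules studied in "The Numerics of GANs".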

On Distinguishability Criteria for Estimating Generative Models; (2014) **

Improved Techniques for Training GANs; (2016) ***

f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization; (2016) ***

Stabilizing Training of Generative Adversarial Networks through Regularization; (2017) ** 

The Numerics of GANs; (2017NIPS) **

Gradient descent GAN optimization is locally stable; (2017NIPS) ** 

Which Training Methods for GANs do actually Converge?; (2018ICML, very new) ***

Improved Training of Generative Adversarial Networks using Representative Features; (2018ICML) **

3. Vanishing Gradient

When the discriminator is trained to optimality, D(x) = 1 for x \sim p_{data} and D(x) = 0 for x \sim p_{model}. The generator's loss then saturates and its gradients vanish. This challenge therefore poses a dilemma (a short derivation follows the list):

  • Discriminator \to bad behavior: the loss is a poor training signal, and the generator cannot be driven toward the data distribution p_{data}.
  • Discriminator \to good behavior: the gradient of the loss goes to zero, and learning is slow or fails.
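A minimal worked version of the second horn, assuming the original minimax objective V(D, G); the optimal-discriminator form is from the original GAN paper, and the disjoint-support argument is the one made precise in "Towards Principled Methods":

```latex
% Optimal discriminator for the minimax GAN objective V(D, G):
D^*(x) = \frac{p_{data}(x)}{p_{data}(x) + p_{model}(x)}

% Substituting D^* back into V gives a Jensen-Shannon divergence:
V(D^*, G) = 2\,\mathrm{JSD}\!\left(p_{data} \,\|\, p_{model}\right) - 2\log 2

% When the supports of p_{data} and p_{model} are (nearly) disjoint,
% JSD saturates at its maximum \log 2, so V is locally constant in G
% and \nabla_G V \approx 0: the generator receives no useful gradient.
```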
Towards Principled Methods for Training GANs; (2017ICLR) ***

Wasserstein GAN; (2017) ****

Improved Training of Wasserstein GANs; (2017NIPS) ***

4. Generalizability and Existence of Equilibrium

Generalization and Equilibrium in Generative Adversarial Nets (GANs); (2017) **

5. Evaluation methods for GANs

Improved Techniques for Training GANs; (2016) ***

Pros and Cons of GAN Evaluation Measures; (2018) *

Geometry Score: A Method For Comparing Generative Adversarial Networks; (2018ICML)