GANs (2): Summary of GAN theory papers (from my Evernote)

Each entry below gives the date, the paper, the problems it tackles, the solutions it proposes, and an importance rating (* to ****).
2016/06: Improved Techniques for Training GANs (****)
Problems:
1. Overtraining of the discriminator
2. Generator mode collapse
3. Gradient descent does not converge
4. Vulnerability to adversarial examples
Solutions (feature matching is sketched below):
1. Feature matching
2. Mini-batch discrimination
3. Historical averaging
4. One-sided label smoothing
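A minimal PyTorch sketch of feature matching, assuming a hypothetical `D.features` method that exposes intermediate discriminator activations (names are illustrative, not from the paper's code):

```python
import torch

def feature_matching_loss(D, real, fake):
    # Match the mean intermediate discriminator features of real and
    # generated batches instead of maximizing the discriminator output.
    f_real = D.features(real).mean(dim=0).detach()  # fixed target statistics
    f_fake = D.features(fake).mean(dim=0)
    return torch.mean((f_real - f_fake) ** 2)       # squared-error match
```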
2016/06: f-GAN (**)
Problem: single-step alternating updates do not converge to the saddle point.
Solution: a variational formulation of f-divergences, generalizing the GAN objective to any f-divergence (sketched below).
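A PyTorch sketch of the f-GAN variational objective for one concrete choice, the KL divergence, whose convex conjugate is f*(t) = exp(t - 1); function and variable names are illustrative:

```python
import torch

def fgan_kl_bound(T, real, fake):
    # Variational lower bound on KL(P || Q): E_P[T(x)] - E_Q[exp(T(x) - 1)].
    # The critic T maximizes this bound; the generator minimizes it.
    return T(real).mean() - torch.exp(T(fake) - 1).mean()
```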
2016/11: Unrolled GAN (***)
Problems:
1. Mode collapse
2. Oscillations between G and D
Solution: the generator's update takes future updates of D into account by unrolling the discriminator's optimization (sketched below).
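A simplified PyTorch sketch of unrolling. For brevity it does not backpropagate through the unrolled optimizer steps, so it drops the second-order terms the full method keeps; `d_loss_fn` and `g_loss_fn` are assumed helpers:

```python
import copy
import torch

def unrolled_generator_loss(G, D, z, real, d_loss_fn, g_loss_fn, k=5, lr=1e-3):
    # Look ahead: train a throwaway copy of D for k steps, then evaluate G
    # against that future discriminator instead of the current one.
    D_future = copy.deepcopy(D)
    opt = torch.optim.SGD(D_future.parameters(), lr=lr)
    for _ in range(k):
        opt.zero_grad()
        d_loss_fn(D_future, real, G(z).detach()).backward()
        opt.step()
    return g_loss_fn(D_future, G(z))
```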
2017/01: Towards Principled Methods for Training GANs (**)
Problem: when the distributions lie on low-dimensional manifolds, the optimal discriminator gives the generator zero gradients.
Solution: adding Gaussian noise to the discriminator's inputs so the two distributions overlap (sketched below).
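A minimal sketch of the noise fix (often called instance noise); the fixed `sigma` is an assumption, and in practice it is usually annealed:

```python
import torch

def with_instance_noise(real, fake, sigma=0.1):
    # Perturb both real and generated samples before they reach D, so their
    # distributions overlap and the optimal D no longer saturates.
    return (real + sigma * torch.randn_like(real),
            fake + sigma * torch.randn_like(fake))
```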
2017/03: W-GAN (****)
Problem: learning a distribution is hard when it actually lies in a lower-dimensional space.
Solution: define a new distance (the Wasserstein-1 distance) that stays meaningful in that setting, and optimize it (sketched below).
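A minimal PyTorch sketch of the WGAN critic step, using the paper's weight-clipping constant 0.01; the optimizer and naming are illustrative:

```python
import torch

def wgan_critic_step(D, opt, real, fake, clip=0.01):
    # The critic maximizes E[D(real)] - E[D(fake)], i.e. minimizes its negation.
    opt.zero_grad()
    loss = D(fake).mean() - D(real).mean()
    loss.backward()
    opt.step()
    for p in D.parameters():
        p.data.clamp_(-clip, clip)  # crude Lipschitz enforcement by clipping
    return loss.item()
```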
2017/05: Improved W-GAN (***)
Problem: W-GAN enforces the Lipschitz constraint by weight clipping, which hurts optimization.
Solution: introduce a gradient penalty term instead (sketched below).
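A minimal PyTorch sketch of the WGAN-GP penalty: push the critic's gradient norm toward 1 at random interpolates between real and fake samples; `lam = 10` is the paper's default:

```python
import torch

def gradient_penalty(D, real, fake, lam=10.0):
    eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).detach().requires_grad_(True)
    grads = torch.autograd.grad(D(x_hat).sum(), x_hat, create_graph=True)[0]
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return lam * ((grad_norm - 1) ** 2).mean()
```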
2017/05: On the Convergence and Stability of GANs (**)
Problem: gradient descent on the GAN objective is unstable and leads to mode collapse; W-GAN and LS-GAN can be read as regularizing the discriminator in the domain space.
Solution: a penalty term on sharp discriminator gradients around the real data (sketched below).
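A sketch of that penalty (the DRAGAN variant): unlike WGAN-GP, it constrains the gradient norm only in a noisy neighbourhood of the real data. The exact noise scale below is a common implementation choice, not verified against the paper:

```python
import torch

def dragan_penalty(D, real, lam=10.0):
    # Perturb real samples, then penalize deviation of D's gradient norm from 1.
    x_hat = (real + 0.5 * real.std() * torch.rand_like(real)).requires_grad_(True)
    grads = torch.autograd.grad(D(x_hat).sum(), x_hat, create_graph=True)[0]
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return lam * ((grad_norm - 1) ** 2).mean()
```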
2017/11: Stabilizing Training of GANs through Regularization (**)
Problems:
1. Density and dimensional misspecification (the data and model supports need not overlap)
2. High-dimensional noise adds variance to parameter estimation
Solutions:
1. Convolving both distributions with noise
2. A noise-induced analytic regularizer that replaces the sampled noise (sketched below)
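A sketch of the induced regularizer for the JS-GAN case, following my reading of the paper: a gradient penalty on the discriminator logits, weighted by the squared classification error on each side. The `gamma` value and the flat-logit assumption are mine:

```python
import torch

def js_noise_regularizer(D_logits, real, fake, gamma=2.0):
    # D_logits is assumed to return a flat (batch,) tensor of logits psi(x).
    def logits_and_grad_sq(x):
        x = x.detach().requires_grad_(True)
        psi = D_logits(x)
        g = torch.autograd.grad(psi.sum(), x, create_graph=True)[0]
        return psi, g.flatten(1).pow(2).sum(dim=1)

    psi_r, g_r = logits_and_grad_sq(real)
    psi_f, g_f = logits_and_grad_sq(fake)
    reg_real = ((1 - torch.sigmoid(psi_r)) ** 2 * g_r).mean()
    reg_fake = (torch.sigmoid(psi_f) ** 2 * g_f).mean()
    return (gamma / 2) * (reg_real + reg_fake)
```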
2017/11: Gradient Descent GAN Optimization is Locally Stable (***)
Problem: GAN optimization is not a convex-concave game, which brings slow convergence and mode collapse.
Contributions:
1. Shows that even though GAN optimization is not a convex-concave game, it is locally asymptotically stable under proper conditions.
2. An additional regularization term for gradient descent that enlarges the stable region (sketched below).
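A sketch of that regularizer as I understand it: the generator additionally descends the squared norm of the discriminator's parameter gradients; `eta` and the call signature are assumptions:

```python
import torch

def stabilized_generator_loss(g_loss, d_loss, D, eta=0.1):
    # Penalize generator moves that inflate the discriminator's gradients.
    d_grads = torch.autograd.grad(d_loss, list(D.parameters()),
                                  create_graph=True)
    return g_loss + eta * sum(g.pow(2).sum() for g in d_grads)
```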
2017/12: The Numerics of GANs (**)
Problem: simultaneous gradient updates for the two-player game fail when the eigenvalues of the Jacobian of the gradient vector field have zero real part or a large imaginary part.
Solution: consensus optimization, an alternative method for finding the Nash equilibrium; it introduces the norm of the joint gradient as a shared regularizer (sketched below).
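A minimal PyTorch sketch of one consensus-optimization step; `gamma` and the use of plain gradient-based optimizers are assumptions:

```python
import torch

def consensus_step(d_loss, g_loss, D, G, d_opt, g_opt, gamma=10.0):
    d_params, g_params = list(D.parameters()), list(G.parameters())
    gd = torch.autograd.grad(d_loss, d_params, create_graph=True)
    gg = torch.autograd.grad(g_loss, g_params, create_graph=True)
    # L = 0.5 * ||v||^2 for the joint gradient vector field v = (gd, gg);
    # both players also descend gamma * grad(L).
    L = 0.5 * sum(g.pow(2).sum() for g in list(gd) + list(gg))
    reg_d = torch.autograd.grad(L, d_params, retain_graph=True)
    reg_g = torch.autograd.grad(L, g_params)
    for p, g, r in zip(d_params, gd, reg_d):
        p.grad = (g + gamma * r).detach()
    for p, g, r in zip(g_params, gg, reg_g):
        p.grad = (g + gamma * r).detach()
    d_opt.step()
    g_opt.step()
```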
2018/05: Which Training Methods for GANs do actually Converge? (****)
Problem: unregularized GAN training is not always convergent.
Contributions:
1. Shows that GAN training with instance noise or zero-centered gradient penalties converges.
2. Proposes simplified zero-centered gradient penalties and proves local convergence (the R1 variant is sketched below).
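A minimal PyTorch sketch of the simplified R1 penalty: a zero-centered penalty on the discriminator's gradient norm, evaluated on real data only; `gamma = 10` is a common choice, not the paper's only setting:

```python
import torch

def r1_penalty(D, real, gamma=10.0):
    real = real.detach().requires_grad_(True)
    grads = torch.autograd.grad(D(real).sum(), real, create_graph=True)[0]
    return (gamma / 2) * grads.flatten(1).pow(2).sum(dim=1).mean()
```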
2018/05: Improved Training of Generative Adversarial Networks using Representative Features (**)
Problem: original GANs cannot have both image diversity and visual quality.
Solution: a pre-trained autoencoder extracts representative features from the whole dataset, so the learned distribution as a whole is preserved; these features are combined with the GAN discriminator (sketched below).
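A hypothetical sketch of that combination: the discriminator's decision uses its own adversarial features concatenated with frozen encoder features; all wiring and names here are assumptions:

```python
import torch
import torch.nn as nn

class RepFeatureDiscriminator(nn.Module):
    def __init__(self, encoder, backbone, enc_dim, adv_dim):
        super().__init__()
        self.encoder = encoder.eval()  # pre-trained autoencoder encoder
        for p in self.encoder.parameters():
            p.requires_grad_(False)    # keep the representative features fixed
        self.backbone = backbone       # trainable adversarial feature extractor
        self.head = nn.Linear(enc_dim + adv_dim, 1)

    def forward(self, x):
        with torch.no_grad():
            rep = self.encoder(x).flatten(1)
        adv = self.backbone(x).flatten(1)
        return self.head(torch.cat([rep, adv], dim=1))
```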
2018/05: On the Limitations of First-Order Approximation in GAN Dynamics (***)
Problem: GAN learning dynamics are not yet well understood.
Contributions:
1. Studies GAN dynamics that exhibit vanishing gradients, mode collapse, and diverging or oscillatory behaviour.
2. Shows that training with an optimal discriminator provably converges, while first-order approximations of the discriminator steps lead to unstable dynamics and mode collapse.