| Date | GANs paper | Problems they have | Solutions | Importance |
| --- | --- | --- | --- | --- |
| 2016/06 | Improved Techniques for Training GANs | | | **** |
| 2016/06 | f-GAN | No convergence to a saddle point with single-step updates | f-divergences; a variational approach to estimating f-divergences | ** |
| 2016/11 | Unrolled GAN | | G's updates take D's future updates into account | *** |
| 2017/01 | Towards Principled Methods for Training GANs | The optimal D yields zero gradients when the distributions lie on low-dimensional manifolds | Add Gaussian noise | ** |
| 2017/03 | WGAN | Learning a distribution is hard when it actually lies in a lower-dimensional space | Define a new distance (Wasserstein) as the objective to optimize | **** |
| 2017/05 | Improved WGAN (WGAN-GP) | Enforcing the Lipschitz constraint by weight clipping is problematic | Introduce a gradient-penalty term | *** |
| 2017/05 | On the Convergence and Stability of GANs | Gradient descent is unstable, leading to mode collapse; WGAN and LSGAN regularize in the domain space | Penalty term on sharp gradients | ** |
| 2017/11 | Stabilizing Training of GANs through Regularization | | | ** |
| 2017/11 | Gradient Descent GAN Optimization is Locally Stable | GAN optimization is not a convex-concave game, so it suffers from slow convergence and mode collapse | | *** |
| 2017/12 | The Numerics of GANs | Simultaneous gradient updates for the two-player problem fail when the Jacobian's eigenvalues have zero real part or large imaginary part | Consensus optimization: an alternative method for finding the Nash equilibrium; introduce the norm of the gradients as a penalty | ** |
| 2018/05 | Which Training Methods for GANs do actually Converge? | Unregularized GAN training is not always convergent | | **** |
| 2018/05 | Improved Training of Generative Adversarial Networks using Representative Features | Original GANs cannot achieve both image diversity and visual quality | A pretrained autoencoder extracts features from the whole dataset, keeping the whole learned distribution; combined with GANs | ** |
| 2018/05 | On the Limitations of First-Order Approximation in GAN Dynamics | Learning dynamics are not yet well understood | | *** |
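Unrolled GAN's idea of letting G look ahead through D's updates can be sketched on a toy bilinear game min_x max_y L(x, y) = x·y, a common stand-in for the GAN saddle-point problem (the game, the step sizes, and the function names here are illustrative assumptions, not from the paper):

```python
import numpy as np

def unrolled_step(x, y, k=5, eta_g=0.1, eta_d=0.1):
    """One update on min_x max_y L(x, y) = x*y with an unrolled discriminator.

    The generator differentiates through k unrolled ascent steps of the
    discriminator: after k steps, y_k(x) = y + k*eta_d*x, so the surrogate
    loss x*y_k(x) has gradient y + 2*k*eta_d*x with respect to x.
    """
    x_new = x - eta_g * (y + 2 * k * eta_d * x)
    y_new = y + eta_d * x  # the actual discriminator still takes one step
    return x_new, y_new
```

With k = 0 this reduces to plain simultaneous gradient steps, whose iterates spiral outward on this game; with k > 0 the extra look-ahead term damps the rotation and the iterates spiral in toward the equilibrium (0, 0).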
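The contrast between WGAN's weight clipping and WGAN-GP's gradient penalty can be sketched with a linear critic f(x) = w·x, whose input gradient is exactly w at every point, so the penalty needs no sampling on interpolates (the linear critic and all constants are illustrative assumptions):

```python
import numpy as np

def clip_weights(w, c=0.01):
    # original WGAN: enforce Lipschitz-ness by clamping each weight to [-c, c]
    return np.clip(w, -c, c)

def gradient_penalty(w, lam=10.0):
    # WGAN-GP: penalize (||grad_x f|| - 1)^2; for a linear critic
    # f(x) = w @ x the input gradient is w everywhere
    return lam * (np.linalg.norm(w) - 1.0) ** 2

def penalty_step(w, lr=0.01, lam=10.0):
    # gradient of the penalty w.r.t. w is 2*lam*(||w|| - 1) * w / ||w||
    n = np.linalg.norm(w)
    return w - lr * 2.0 * lam * (n - 1.0) * w / n
```

Iterating `penalty_step` drives ||w|| toward 1 (a unit Lipschitz constant for this critic), whereas clipping crushes every weight to ±c regardless of what the critic needs.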
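Consensus optimization from The Numerics of GANs can likewise be sketched on a toy bilinear game min_x max_y x·y: alongside the simultaneous gradient field v = (-y, x), both players also descend the squared gradient norm ½||v||² = ½(x² + y²) (the game and the step sizes eta, gamma are illustrative assumptions):

```python
import numpy as np

def simultaneous_step(x, y, eta=0.1):
    # plain simultaneous descent/ascent on L(x, y) = x*y;
    # the iterates rotate around (0, 0) and their norm grows every step
    return x - eta * y, y + eta * x

def consensus_step(x, y, eta=0.1, gamma=1.0):
    # consensus optimization: follow v minus gamma * grad(0.5 * ||v||^2),
    # where v = (-y, x) and 0.5 * ||v||^2 = 0.5 * (x**2 + y**2)
    vx, vy = -y, x
    return x + eta * (vx - gamma * x), y + eta * (vy - gamma * y)
```

The added gradient-norm term turns the purely rotational eigenvalues into contracting ones, so consensus iterates converge to the equilibrium while the plain simultaneous iterates diverge.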