A new weight normalizing method to enhance retraining of Generative Adversarial Networks

Authors

  • Sarbaree Mishra, Program Manager at Molina Healthcare Inc., USA

Keywords:

Weight Normalization, Generative Adversarial Networks, GAN Training, Convergence

Abstract

Generative Adversarial Networks (GANs) have emerged as a powerful paradigm for generating realistic data across several disciplines, but their training remains difficult to control because of mode collapse and instability. This paper presents a novel weight normalization approach intended to improve convergence rates and overall model performance, thereby making GAN training more tractable. Comprehensive experiments show that the proposed weight normalization substantially reduces variance in generated samples, improving output fidelity and yielding a more consistent training process. We provide a detailed analysis of the effects of weight normalization on both the generator and discriminator networks, emphasizing its effectiveness in addressing common difficulties in GAN training. Our findings indicate that the approach improves the quality of generated samples and accelerates training, easing the deployment of GANs in practical applications. This work points to a promising direction for further generative modeling research and refines GAN architectures and training methods. By offering a fresh perspective on weight normalization, we hope to inspire further development in the field and broaden the applications of GANs across many domains.
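
The paper's specific normalization scheme is developed in the full text; as background for the idea the abstract describes, the sketch below shows the standard weight-normalization reparameterization (w = g · v / ||v||) applied to the generator and discriminator of a small GAN. It is illustrative only: the toy architectures, layer sizes, and the use of PyTorch's torch.nn.utils.weight_norm wrapper are assumptions made for demonstration, not the authors' implementation.

# Illustrative sketch only: standard weight normalization (w = g * v / ||v||)
# applied to small GAN generator/discriminator networks. This is NOT the
# paper's exact method; the architectures and the weight_norm wrapper are
# assumptions for demonstration purposes.
import torch
import torch.nn as nn
from torch.nn.utils import weight_norm


def wn_linear(in_features, out_features):
    # Reparameterize the layer's weight as w = g * v / ||v||,
    # decoupling its direction (v) from its magnitude (g).
    return weight_norm(nn.Linear(in_features, out_features))


class Generator(nn.Module):
    def __init__(self, latent_dim=64, data_dim=2):
        super().__init__()
        self.net = nn.Sequential(
            wn_linear(latent_dim, 128), nn.ReLU(),
            wn_linear(128, 128), nn.ReLU(),
            wn_linear(128, data_dim),
        )

    def forward(self, z):
        return self.net(z)


class Discriminator(nn.Module):
    def __init__(self, data_dim=2):
        super().__init__()
        self.net = nn.Sequential(
            wn_linear(data_dim, 128), nn.LeakyReLU(0.2),
            wn_linear(128, 128), nn.LeakyReLU(0.2),
            wn_linear(128, 1),  # raw logit; pair with BCEWithLogitsLoss
        )

    def forward(self, x):
        return self.net(x)


# Because the reparameterization only changes how each layer's weight is
# expressed, the normalized layers drop into a standard GAN training loop
# unchanged.
G, D = Generator(), Discriminator()
z = torch.randn(16, 64)
fake = G(z)
logits = D(fake)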

Published

25-02-2019

How to Cite

[1] Sarbaree Mishra, “A new weight normalizing method to enhance retraining of Generative Adversarial Networks”, Distrib. Learn. Broad Appl. Sci. Res., vol. 5, pp. 1–20, Feb. 2019, Accessed: Mar. 14, 2025. [Online]. Available: https://dlbasr.org/index.php/publication/article/view/67