Basic Theory of Generative Adversarial Networks

Final Words
 • (3) Activation function: sigmoid, different input
Test (1) shows that lower-order variable inputs work better in GAN training, while higher-order inputs lead to unstable training results. Test (2) shows that a ReLU-based generator and discriminator train less stably than their Leaky-ReLU counterparts; in an actual test run with the input x^3, the training stopped entirely. Finally, test (3) shows that sigmoid is not a suitable activation function for the generator or the discriminator.
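
To make the activation-function findings concrete, the following is a minimal sketch of the stable configuration, assuming a PyTorch-style implementation; the module layout and layer sizes are illustrative, not the tutorial's actual code. Hidden layers use Leaky-ReLU, which trained most stably in these tests, and sigmoid appears only on the discriminator's final output rather than as a hidden activation.

<code python>
import torch.nn as nn

# Illustrative sizes (assumed, not from the tutorial): a small latent
# vector mapped to one-dimensional data samples.
latent_dim, data_dim, hidden = 64, 1, 128

# Generator: Leaky-ReLU hidden activations, the configuration that
# trained most stably in these tests (plain ReLU was less stable).
generator = nn.Sequential(
    nn.Linear(latent_dim, hidden),
    nn.LeakyReLU(0.2),
    nn.Linear(hidden, data_dim),
)

# Discriminator: Leaky-ReLU hidden layer; sigmoid is used only to
# squash the final output to a real/fake probability, not as a
# hidden activation (which test (3) found unsuitable).
discriminator = nn.Sequential(
    nn.Linear(data_dim, hidden),
    nn.LeakyReLU(0.2),
    nn.Linear(hidden, 1),
    nn.Sigmoid(),
)
</code>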
  
In this work, an educational tutorial on generative adversarial networks (GANs) is presented. Installation of the hardware and software environment is summarized, and the GAN code is given in Python. Three tests are implemented with different inputs and activation functions. The results demonstrate that Leaky-ReLU is the best activation function for GANs, although training becomes unstable with higher-order variable inputs. Future work is a Deep Convolutional Generative Adversarial Networks (DCGAN) tutorial that resolves the instability of the original GAN shown by the experimental results. [2]