basic_theory_of_generative_adversarial_networks
==== Final Words ====
The Python GAN code has been tested with several changes. First, the generator and discriminator networks were tested with different activation functions. Second, the input in the train-data Python code was modified to supply system inputs of different variable order. All tests are described below.
• (1) Activation function: Leaky ReLU

• (2) Activation function: Rectified Linear Unit (ReLU), different input

• (3) Activation function: Sigmoid, different input

You will find that lower-order variable inputs work better in GAN training; higher-order inputs lead to unstable training results.
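As a minimal sketch (not the original tutorial code; the function names and the leaky slope `alpha` are common conventions assumed here), the three activation functions compared in the tests above can be written in NumPy as:

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # Leaky ReLU: lets a small fraction (alpha) of negative inputs
    # through, which helps avoid "dead" units during GAN training.
    return np.where(x > 0, x, alpha * x)

def relu(x):
    # Standard ReLU: clips all negative inputs to zero.
    return np.maximum(0.0, x)

def sigmoid(x):
    # Sigmoid: squashes inputs to (0, 1); prone to vanishing
    # gradients when inputs are large in magnitude.
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(leaky_relu(x))  # negative inputs scaled by alpha
print(relu(x))        # negative inputs clipped to zero
print(sigmoid(x))     # every output lies in (0, 1)
```

Swapping one of these functions into the generator and discriminator layers is all that distinguishes the three test configurations.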
In this work, a generative adversarial networks (GANs) educational tutorial is presented. Hardware and software environment installation is summarized. The GAN code is given in Python. Three tests are implemented with different inputs and activation functions. The results demonstrate that Leaky ReLU is the best activation function for this GAN; however, training is unstable with higher-order variable inputs. Planned future work is a Deep Convolutional Generative Adversarial Networks (DCGAN) tutorial that addresses the instability of the original GAN observed in the experiment results. [2]
basic_theory_of_generative_adversarial_networks.txt · Last modified: 2018/11/15 21:01 by dongbinkim