====== Basic Theory of Generative Adversarial Networks ======

 +
**Author:** Dongbin Kim  Email: [email protected]
 +\\
**Date:** Last modified on 11/15/2018
 +\\
**Keywords:** Machine Learning
 +\\
 +
 +
 +{{:dbgan:1.jpg?200|}}
 +\\
Generative Adversarial Networks (GANs) were proposed as a new deep-learning framework for estimating generative models via an adversarial process, in which two models are trained simultaneously: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than from G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game [1].
 +
They gave an interesting example to aid understanding. The generative model can be thought of as analogous to a team of counterfeiters, trying to produce fake currency and use it without detection, while the discriminative model is analogous to the police, trying to detect the counterfeit currency. Competition in this game drives both teams to improve their methods until the counterfeits are indistinguishable from the genuine articles.
The entire system can be trained with backpropagation in the case where G and D are defined by multilayer perceptrons. Their work demonstrated the potential of the framework through qualitative and quantitative evaluation of the generated samples.
Since its publication in 2014, the GAN has been called one of the coolest ideas in deep learning in the last 20 years, and there are many applications of GANs for text, image, and video generation. Furthermore, GANs have been advanced to improve performance, for example the Unrolled GAN (U-GAN) and the Deep Convolutional GAN (DC-GAN) [2]-[4].
In particular, DC-GAN not only addresses weaknesses of the original GAN but also shows improved results in image processing, and it is considered the de facto standard GAN nowadays. With this improved performance, NVIDIA has recently demonstrated the generation of fake celebrity images [5].
Thus, GAN theory has become popular in deep learning (DL), and the demand for studying GANs has increased as well. For those who have been studying deep learning, it is quick and easy to learn and implement GANs. However, for those who have just started or have no knowledge of deep learning, it is harder to find where to start and what to study. Despite the wave of information on the internet, the author found that good study resources for GANs are hard to find due to incorrect information and code packages.
Therefore, in this tutorial, the author presents an educational tutorial on the original Generative Adversarial Networks.
 +
 +\\
 +===== Motivation and Audience =====
 +
This tutorial's motivation is to learn basic GAN theory via Python. It assumes the reader has the following background and interests:
 +
<fc blue>  
  * Knows how to code in Python
\\
  * Perhaps also knows statistics
\\
  * Perhaps additional background needed may include mathematics
\\
  * This tutorial may also attract readers who want an introduction to Generative Adversarial Networks
</fc>
 +\\
 +The rest of this tutorial is presented as follows:
  * Parts List and Sources
  * Construction (installation environment)
  * Theory and Programming Description (Python coding)
  * Final Words
 +
 +==== Parts List and Sources ====
 +
  * Computer or laptop
  * Ubuntu 16.04
  * Python 3.5
  * Anaconda package
  * PyTorch (non-CUDA installation)
  * TensorFlow
 +
 +==== Construction ====
To learn Generative Adversarial Networks (GANs), there are requirements in hardware and software. The hardware strongly affects the training time: the weaker the hardware, the longer the training. However, the hardware requirements are not strict thanks to the fast development of technology; the author's computer is an 8-year-old LG laptop, and there was no issue. Therefore this section focuses on the software environment installation. The following must be installed to run GANs.\\
 +• Linux Ubuntu 16.04
 +• Python 3.5 version
 +• Anaconda installer
 +• Pytorch
 +• Tensorflow
 +\\
First of all, Linux Ubuntu 16.04 should be installed. Everything can also run on Windows, but Linux is highly recommended due to its compatibility with the other required software. The subsections below describe the installation process for each piece of software [6].
 +
 +\\
 +\\
 +**Step 1 - Python**
 +\\
 +\\
Python is one of the most powerful programming languages for deep learning, and all of the GAN code here runs in Python. For this GAN tutorial we use Python version 3.5, so it is important to know Python programming; tutorials are available at https://www.tutorialspoint.com/python/. Luckily, Python 3.5 ships with Linux Ubuntu 16.04. To check the Python version, open a terminal and type the following.
 +\\
• $python3 --version
\\
• Python 3.5.X (result)
\\
If that output appears, Python 3.5 is confirmed.
 +\\
 +\\
 +**Step 2 - Additional Python Package**
 +\\
 +\\
Before the big packages are installed, there are Python packages that need to be installed first. See below.
\\
• NumPy: the fundamental package for scientific computing with Python. It has a powerful N-dimensional array object, sophisticated (broadcasting) functions, tools for integrating C/C++ and Fortran code, and useful linear algebra, Fourier transform, and random number capabilities [7]
\\
• NumPy install command: $pip install numpy
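As a quick sanity check (a minimal sketch; it only assumes the installation above succeeded), the following can be run in a Python 3.5 interpreter:
\\
<code python>
import numpy as np

# print the installed NumPy version and a small random array to confirm the install
print(np.__version__)
print(np.random.random_sample((3,)))
</code>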
 +\\
 +\\
 +**Step 3 - Anaconda**
 +\\
 +\\
 +{{:dbgan:3.jpg?200|}}
Anaconda is considered one of the most popular Python data science platforms [8]. It keeps the installation of Python and PyTorch packages for deep learning organized, streamlines data science workflows from data ingest through deployment, and integrates data sources so the most value can be extracted from the data. Thus, it is important to install the Anaconda program. Check the following link.
 +\\
 +• http://docs.anaconda.com/anaconda/install/linux/
 +\\
When the Anaconda installation is finished, Anaconda is set up for a Python 3.7 environment by default. As Python 3.5 is compatible with all of the other software packages, it is important to change the Anaconda Python version from 3.7 to 3.5. To switch the Python version to 3.5, type the following in the terminal.
 +\\
 +• $conda install python=3.5
 +\\
Then the Anaconda command 'conda' will switch the Python version to 3.5. When the switch is done, move on to the PyTorch installation.
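Alternatively (a suggestion not from the original guide; the environment name gan below is only an example), a dedicated conda environment can keep the base installation untouched:
\\
• $conda create -n gan python=3.5
\\
• $source activate gan
\\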
 +\\
 +\\
 +
 +**Step 4 - PyTorch**
 +\\
 +\\
 +{{:dbgan:4.jpg?200|}}{{:dbgan:5.jpg?200|}}  {{:dbgan:6.jpg?200|}}
PyTorch is an open source deep learning platform that provides a seamless path from research prototyping to production deployment [9]. Every PyTorch package can be used from the Python language: once installed, the user can import any of its packages in Python. Tool and library information is given on its website, pytorch.org. For installation, go to the pytorch.org website.
 +
In the figure above, the preference selection is suggested on the PyTorch website: choose Linux, Python 3.5, and Conda, and mark 'None' in the CUDA selection. CUDA is a software package, provided by NVIDIA, that uses the graphics processing unit (GPU) to speed up computation. The user is not required to use CUDA at this point because it is not necessary for understanding this educational tutorial, and this work focuses on making the tutorial run on any computer. When the PyTorch installation is finished, the test code shown on the website should be run; if a randomly initialized tensor matrix is obtained, the installation is confirmed.
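A minimal verification sketch (similar to the snippet suggested on pytorch.org; the printed values differ on every run):
\\
<code python>
import torch

# create a randomly initialized 5x3 tensor; printing it confirms the installation
x = torch.rand(5, 3)
print(x)
</code>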
 +\\
 +\\
**Step 5 - TensorFlow**
 +\\
 +\\
 +{{:dbgan:7.jpg?200|}}
TensorFlow is an open source software library for high-performance numerical computation [10]. Its flexible architecture allows easy deployment of computation across a variety of platforms (CPUs, GPUs, TPUs), from desktops to clusters of servers to mobile and edge devices. Originally developed by researchers and engineers from the Google Brain team within Google's AI organization, it comes with strong support for machine learning and deep learning, and its flexible numerical computation core is used across many other scientific domains. It is used together with PyTorch in this tutorial. To install it, follow the link below.
 +\\
 +• https://www.tensorflow.org/install/pip
 +\\
The installation is done with pip packages from Python. There are four available packages, and 'tensorflow' should be chosen; because the GPU is not used, tensorflow-gpu should not be installed. Once again, this work aims to make GANs runnable on any laptop. When the installation is done, move on to the Python coding.
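A quick check (a minimal sketch for the TensorFlow 1.x API used throughout this tutorial) that the installation works:
\\
<code python>
import tensorflow as tf

# print the installed version and run a trivial graph to confirm the install
print(tf.__version__)
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))
</code>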
 +
 +
 +==== Theory and Programming Description ====
 +
This section addresses the theory of the original generative adversarial network (GAN) and the Python code for training it.
 +\\
 +{{:dbgan:dbgan:0.png?400|}}
 +\\
The Generative Adversarial Networks model is most straightforward to apply when both models are multilayer perceptrons. To learn the generator's distribution p_g over data x, a prior on input noise variables p_z(z) is defined. A mapping to data space G(z; θ_g) is then represented, where G is a differentiable function represented by a multilayer perceptron with parameters θ_g. A second multilayer perceptron D(x; θ_d) that outputs a single scalar is also defined; D(x) represents the probability that x came from the data rather than from p_g. D is trained to maximize the probability of assigning the correct label to both training examples and samples from G, while G is simultaneously trained to minimize log(1 - D(G(z))).
 +\\
 +{{:dbgan:dbgan:9.jpg?500|}}
 +\\
In other words, D and G play a two-player minimax game with the value function V(D, G) shown above [1]. The figure describes an example of the GAN framework. The framework pits two adversaries against each other in a game. Each player is represented by a differentiable function controlled by a set of parameters; typically these functions are implemented as deep neural networks. The game plays out in two scenarios. In the first scenario, training examples x are randomly sampled from the training set and used as input for the first player, the discriminator, represented by the function D. The goal of the discriminator is to output the probability that its input is real rather than fake, under the assumption that half of the inputs it is ever shown are real and half are fake; in this first scenario, the goal of the discriminator is for D(x) to be near 1. In the second scenario, inputs z to the generator are randomly sampled from the model's prior over the latent variables, and the discriminator receives the input G(z), a fake sample created by the generator. In this scenario both players participate: the discriminator strives to make D(G(z)) approach 0, while the generator strives to make the same quantity approach 1. If both models have sufficient capacity, the Nash equilibrium of this game corresponds to G(z) being drawn from the same distribution as the training data, and D(x) = 1/2 for all x.
Therefore, the desirable equilibrium point for the GAN defined above is that the generator models the real data and the discriminator outputs a probability of 0.5, since the generated data is the same as the real data; that is, the discriminator cannot tell whether new data coming from the generator is real or fake, and assigns equal probability to both.
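For reference, the value function from [1] can be written as (a standard restatement of the equation shown in the image above):
\\
<code latex>
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
</code>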
 + 
 +
This section has three parts. First, a simple data distribution is generated for the simple GAN model. Second, the generator (G) and discriminator (D) networks are written in Python. Lastly, the data distribution and the networks are used for adversarial training. The objective of this implementation is to learn a new function that generates data from the same distribution as the training data; the expectation is that the trained generator network will produce data that follows the data distribution.
\\
1) Training data generation: For clarity, the dataset is generated as a quadratic function of numpy-generated random samples. The Python code is below:
 +\\
<code python>
import numpy as np

def get_y(x):
    # quadratic target function: y = 10 + x^2
    return 10 + x*x

def sample_data(n=10000, scale=100):
    # draw n x-values uniformly from [-scale/2, scale/2) and pair each with y = get_y(x)
    data = []
    x = scale*(np.random.random_sample((n,))-0.5)
    for i in range(n):
        yi = get_y(x[i])
        data.append([x[i], yi])
    return np.array(data)
</code>
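As a quick sanity check (a minimal sketch; assumes matplotlib is installed and sample_data from the code above is in scope), the samples can be plotted:
\\
<code python>
import matplotlib.pyplot as plt

# draw 1000 samples and scatter-plot them to visualize the quadratic distribution
data = sample_data(n=1000)
plt.scatter(data[:, 0], data[:, 1])
plt.show()
</code>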
 +\\
 +\\
 +The data distribution is depicted as below
 +\\
 +{{:dbgan:dbgan:10.png?200|}}
 +\\
2) GAN Python code build: First, five Python packages/modules are imported: tensorflow, numpy, seaborn, matplotlib.pyplot, and the training_data module created in the previous section. The statement from training_data import * means that the code imports everything defined inside that module.
 +\\
<code python>
 +import tensorflow as tf
 +import numpy as np
 +from training_data import *
 +import seaborn as sb
 +import matplotlib.pyplot as plt
 +sb.set()
 +</code>
 +\\
 +\\
Second, the noise data z is sampled randomly from a uniform distribution between -1 and 1:
 +\\
<code python>
 +def noisesample_Z(m, n):
 +    return np.random.uniform(-1., 1., size=[m, n])
 +</code>
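For illustration (a minimal usage sketch; the variable name Z_example is only illustrative), calling the function with m=4 and n=2 returns a 4x2 array of values drawn uniformly from [-1, 1):
\\
<code python>
# four noise vectors, each with two components
Z_example = noisesample_Z(4, 2)
print(Z_example.shape)   # prints (4, 2)
print(Z_example)
</code>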
 +\\
Then, the generator (G) and discriminator (D) networks are implemented using TensorFlow layers. The generator (G) network is implemented with the following function:
 +\\
<code python>
def generator(Z, hsize=[16, 16], reuse=False):
    # two hidden layers with Leaky-ReLU activations, then a 2-D output layer
    with tf.variable_scope("GAN/Generator", reuse=reuse):
        hiddenlayer1 = tf.layers.dense(Z, hsize[0], activation=tf.nn.leaky_relu)
        hiddenlayer2 = tf.layers.dense(hiddenlayer1, hsize[1], activation=tf.nn.leaky_relu)
        out = tf.layers.dense(hiddenlayer2, 2)

    return out
</code>
 +\\
This function takes in the placeholder for random samples (Z), an array hsize giving the number of units in the two hidden layers, and a reuse flag used for reusing the same layers. Using these inputs it creates a fully connected neural network with two hidden layers of 16 nodes each. The Leaky Rectified Linear Unit (Leaky-ReLU) is used as the activation function due to its stability. The output of this function is a 2-dimensional vector, which corresponds to the dimensionality of the real dataset that we are trying to learn.
The discriminator (D) network is implemented using the following function:
 +\\
<code python>
def discriminator(X, hsize=[16, 16], reuse=False):
    # three hidden layers; the third has only 2 units so the learned features
    # can be visualized in a 2-D plane, followed by a 1-unit logit output
    with tf.variable_scope("GAN/Discriminator", reuse=reuse):
        hiddenlayer1 = tf.layers.dense(X, hsize[0], activation=tf.nn.leaky_relu)
        hiddenlayer2 = tf.layers.dense(hiddenlayer1, hsize[1], activation=tf.nn.leaky_relu)
        hiddenlayer3 = tf.layers.dense(hiddenlayer2, 2)
        out = tf.layers.dense(hiddenlayer3, 1)

    return out, hiddenlayer3
</code>
 +\\
This function takes an input placeholder for samples from the vector space of the real dataset; the samples can be real samples or fake samples produced by the generator network. Similar to the generator network above, it also takes hsize and reuse inputs. Three hidden layers are used for the discriminator, the first two with 16 nodes each. The size of the third hidden layer is fixed to 2 so that the transformed feature space can be visualized in a 2-D plane, as explained in a later section. Lastly, the outputs of this function are a logit prediction for the given X and the output of the last hidden layer, which is the feature transformation learned by the discriminator for X. The logit function is the inverse of the sigmoid function and represents the logarithm of the odds (the ratio of the probability of a variable being 1 to that of it being 0).
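As a reminder (standard definitions, not specific to this tutorial), the sigmoid function and its inverse, the logit, are:
\\
<code latex>
\sigma(x) = \frac{1}{1 + e^{-x}}, \qquad
\mathrm{logit}(p) = \log\frac{p}{1 - p} = \sigma^{-1}(p)
</code>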
**Adversarial training:** For the purpose of adversarial training, the following placeholders X and Z are defined for real samples and random noise samples respectively:
 +\\
<code python>
 +X = tf.placeholder(tf.float32,[None,2])
 +Z = tf.placeholder(tf.float32,[None,2])
 +</code>
 +\\
In each placeholder, tf.float32 is the element type of the tensor, and [None, 2] fixes the feature dimension to 2 while leaving the batch size variable. Next, the graph that generates samples from the generator network and feeds both real and generated samples to the discriminator network needs to be created. This is done using the functions and placeholders defined above.
 +\\
<code python>
 +G_sample = generator(Z)
 +real_logits, real_rep = discriminator(X)
 +fake_logits, generated_rep = discriminator(G_sample,reuse=True)
 +</code>
 +\\
Using the logits for the generated data and the real data, we define the loss functions for the generator and discriminator networks as follows:
 +\\
<code python>
 +discriminator_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=real_logits,labels=tf.ones_like(real_logits)) + tf.nn.sigmoid_cross_entropy_with_logits(logits=fake_logits,labels=tf.zeros_like(fake_logits)))
 +generator_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=fake_logits,labels=tf.ones_like(fake_logits)))
 +
 +</code>
 +\\
These losses are sigmoid cross-entropy based losses, a commonly used loss for discrete classification. The loss takes as input the logit given by the discriminator network and the true label for each sample, and then calculates the error for each sample. TensorFlow's implementation is used because it is numerically more stable than computing the cross entropy directly. These losses implement the value function defined earlier: for D to maximize V(D, G), D(x) should be one and D(G(z)) should be zero, so the discriminator loss is defined with tf.reduce_mean over both terms; conversely, D(G(z)) should be one for G to maximize V(D, G), so the generator loss is also defined with tf.reduce_mean.
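Written out (a standard restatement of what the code above implements), the two losses are:
\\
<code latex>
L_D = -\,\mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big] - \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big],
\qquad
L_G = -\,\mathbb{E}_{z \sim p_z}\big[\log D(G(z))\big]
</code>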
Next, the tf.get_collection function is used to fetch the variables of the generator and discriminator networks, using the scope of the layers defined in the generator and discriminator functions; with the scope, we fetch the weights/variables of the given network only. Then the optimizers for the two networks are defined with the loss functions above. The RMSProp optimizer with a learning rate of 0.001 is used for both networks.
 +\\
<code python>
 +discriminator_variables = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES,scope="GAN/Discriminator")
 +generator_variables = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES,scope="GAN/Generator")
 +
 +disc_step = tf.train.RMSPropOptimizer(learning_rate=0.001).minimize(discriminator_loss,var_list = discriminator_variables)
 +gen_step = tf.train.RMSPropOptimizer(learning_rate=0.001).minimize(generator_loss,var_list = generator_variables)
 +
 +</code>
 +\\
 +{{:dbgan:dbgan:11.jpg?200|}}
Finally, the training loop is built following the algorithm in [1], shown in the figure above. The difference is that this code uses RMSProp optimizers rather than plain stochastic gradient descent for the parameter updates. Both networks are trained for the required number of steps:
 +\\
<code python>
# batch_size, ndiscriminator_steps, ngenerator_steps and sess are defined in the full listing below
for i in range(10001):
    X_batch = sample_data(n=batch_size)
    Z_batch = noisesample_Z(batch_size, 2)

    # update the discriminator, then the generator, for the chosen number of steps
    for _ in range(ndiscriminator_steps):
        _, discriminatorloss = sess.run([disc_step, discriminator_loss], feed_dict={X: X_batch, Z: Z_batch})

    for _ in range(ngenerator_steps):
        _, generatorloss = sess.run([gen_step, generator_loss], feed_dict={Z: Z_batch})

</code>
 +\\
 +The above code can be modified to include more complex training procedures such as running multiple steps of the discriminator and/or generator update, fetching the features of the real and generated samples and plotting the generated samples. Please refer to the code repository for such modifications.
The final code is presented below, with additional lines added to produce the visual results.
 +\\
**Full Code**
<code python>
 +import tensorflow as tf
 +import numpy as np
 +from training_data import *
 +import seaborn as sb
 +import matplotlib.pyplot as plt
 +sb.set()
 +
 +def noisesample_Z(m, n):
 +    return np.random.uniform(-1., 1., size=[m, n])
 +
 +def generator(Z,hsize=[16, 16],reuse=False):
 +    with tf.variable_scope("GAN/Generator",reuse=reuse):
 +        hiddenlayer1 = tf.layers.dense(Z,hsize[0],activation=tf.nn.leaky_relu)
 +        hiddenlayer2 = tf.layers.dense(hiddenlayer1,hsize[1],activation=tf.nn.leaky_relu)
 +        out = tf.layers.dense(hiddenlayer2,2)
 +
 +    return out
 +
 +def discriminator(X,hsize=[16, 16],reuse=False):
 +    with tf.variable_scope("GAN/Discriminator",reuse=reuse):
 +        hiddenlayer1 = tf.layers.dense(X,hsize[0],activation=tf.nn.leaky_relu)
 +        hiddenlayer2 = tf.layers.dense(hiddenlayer1,hsize[1],activation=tf.nn.leaky_relu)
 +        hiddenlayer3 = tf.layers.dense(hiddenlayer2,2)
 +        out = tf.layers.dense(hiddenlayer3,1)
 +
 +    return out, hiddenlayer3
 +
 +
 +X = tf.placeholder(tf.float32,[None,2])
 +Z = tf.placeholder(tf.float32,[None,2])
 +
 +G_sample = generator(Z)
 +real_logits, real_rep = discriminator(X)
 +fake_logits, generated_rep = discriminator(G_sample,reuse=True)
 +
 +discriminator_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=real_logits,labels=tf.ones_like(real_logits)) + tf.nn.sigmoid_cross_entropy_with_logits(logits=fake_logits,labels=tf.zeros_like(fake_logits)))
 +generator_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=fake_logits,labels=tf.ones_like(fake_logits)))
 +
 +discriminator_variables = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES,scope="GAN/Discriminator")
 +generator_variables = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES,scope="GAN/Generator")
 +
 +disc_step = tf.train.RMSPropOptimizer(learning_rate=0.001).minimize(discriminator_loss,var_list = discriminator_variables)
 +gen_step = tf.train.RMSPropOptimizer(learning_rate=0.001).minimize(generator_loss,var_list = generator_variables)
 +
 +
# To start the training, a session is defined below. The batch size is the number of data samples processed together.
 +
 +sess = tf.Session()
 +tf.global_variables_initializer().run(session=sess)
 +
 +batch_size = 256
 +ndiscriminator_steps = 10
 +ngenerator_steps = 10
 +
 +x_plot = sample_data(n=batch_size)
 +
 +f = open('loss_logs.csv','w')
 +f.write('Iteration,Discriminator Loss,Generator Loss\n')
 +
 +for i in range(10001):
 +    X_batch = sample_data(n=batch_size)
 +    Z_batch = noisesample_Z(batch_size, 2)
 +
 +    for _ in range(ndiscriminator_steps):
 +        _, discriminatorloss = sess.run([disc_step, discriminator_loss], feed_dict={X: X_batch, Z: Z_batch})
 +
 +    for _ in range(ngenerator_steps):
 +        _, generatorloss = sess.run([gen_step, generator_loss], feed_dict={Z: Z_batch})
 +
# The loss values and the plot figures are saved by the following code
 +
 +    print ("Iterations: %d\t Discriminator loss: %.4f\t Generator loss: %.4f" %(i,discriminatorloss,generatorloss))
 +    if i%10 == 0:
 +        f.write("%d,%f,%f\n"%(i,discriminatorloss,generatorloss))
 +
 +    if i%1000 == 0:
 +        plt.figure()
 +        g_plot = sess.run(G_sample, feed_dict={Z: Z_batch})
 +        xax = plt.scatter(x_plot[:,0], x_plot[:,1])
 +        gax = plt.scatter(g_plot[:,0],g_plot[:,1])
 +
 +        plt.legend((xax,gax), ("Real Data","Generated Data"))
 +        plt.title('Samples at Iteration %d'%i)
 +        plt.tight_layout()
 +        plt.savefig('../plots/iterations/iteration_%d.png'%i)
 +        plt.close()
 +
 +f.close()
 +
 +</code>
 +
 +==== Final Words ====
The Python GAN code has been tested with several changes. First, the generator and discriminator networks are tested with different activation functions. Second, the input in the training_data code is modified to give inputs of different polynomial order. All tests are described below.
 +\\
• (1) Activation function: Leaky Rectified Linear Unit (Leaky-ReLU), different input
\\
• (2) Activation function: Rectified Linear Unit (ReLU), different input
\\
• (3) Activation function: Sigmoid, different input
 +\\
You will find that lower-order inputs work better for GAN training, while higher-order inputs lead to unstable training results. The tests also show that a ReLU-based generator and discriminator are more unstable than Leaky-ReLU ones; in an actual test run with an x^3 input, training stopped. Furthermore, the results show that sigmoid is not suitable for the generator and discriminator.
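For example, to repeat test (2), only the activation argument of the hidden layers needs to change (a minimal sketch; the helper name generator_relu is only illustrative, and the discriminator is modified the same way):
\\
<code python>
def generator_relu(Z, hsize=[16, 16], reuse=False):
    # same architecture as before, but with ReLU activations instead of Leaky-ReLU
    with tf.variable_scope("GAN/Generator", reuse=reuse):
        h1 = tf.layers.dense(Z, hsize[0], activation=tf.nn.relu)
        h2 = tf.layers.dense(h1, hsize[1], activation=tf.nn.relu)
        out = tf.layers.dense(h2, 2)
    return out
</code>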
 +
In this work, an educational tutorial on generative adversarial networks (GANs) is presented. The hardware and software environment installation is summarized, the GAN code is given in Python, and three tests are run with different inputs and activation functions. The results demonstrate that Leaky-ReLU is the best activation function for this GAN, although training becomes unstable with higher-order inputs. The expected future work is a tutorial on Deep Convolutional Generative Adversarial Networks, which, as the experimental results suggest, address the instability of the original GAN [2].
 +
 +
 +
 +====Reference====
 +
[1] I. J. Goodfellow, et al., "Generative Adversarial Networks", in Neural Information Processing Systems, 2014
\\
[2] A. Radford, et al., "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks", in International Conference on Learning Representations, 2016
\\
[3] L. Metz, et al., "Unrolled Generative Adversarial Networks", in International Conference on Learning Representations, 2017
\\
[4] D. J. Im, et al., "Generating Images with Recurrent Adversarial Networks", arXiv preprint (cs.LG), 2016
\\
[5] T. Karras, et al., "Progressive Growing of GANs for Improved Quality, Stability, and Variation", in International Conference on Learning Representations, 2018
\\
[6] Linux Ubuntu, available: https://www.ubuntu.com/
\\
[7] NumPy, available: http://www.numpy.org/
\\
[8] Anaconda, available: https://www.anaconda.com/
\\
[9] PyTorch, available: https://pytorch.org/
\\
[10] TensorFlow, available: https://www.tensorflow.org/?hl=en
\\
[11] I. Goodfellow, "Generative Adversarial Networks (GANs)", Neural Information Processing Systems Tutorial, 2016
 +
 +
 +
 +
  