Best practices for GANs

February 18, 2021 by systems

Arnav Kartikeya

Generative Adversarial Networks (GANs) are known to require capable hardware and long training times. Because of this, certain optimizations and conventions have emerged to reduce training time and improve results. In this blog I will list a few of these tips for building better GANs faster. Most of them come from books and videos such as Jason Brownlee’s Generative Adversarial Networks with Python.

The first tip concerns the latent space, which defines the input dimensions for the generator part of the GAN. A latent point is typically a vector of random numbers of a fixed dimension. Drawing these numbers from a Gaussian distribution (mean 0, standard deviation 1) rather than a uniform distribution is recommended by many GAN architectures, such as DCGAN.
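
A minimal sketch of that sampling, assuming NumPy (the helper name sample_latent_points and the specific dimensions are illustrative, not from the original post):

```python
import numpy as np

def sample_latent_points(latent_dim, n_samples):
    # Draw from a standard Gaussian: mean 0, standard deviation 1.
    # One row per sample, one column per latent dimension.
    return np.random.randn(n_samples, latent_dim)

# e.g. a batch of 64 latent vectors of dimension 100, a common DCGAN-style choice
z = sample_latent_points(100, 64)
```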

The next tip is to scale the real input images and the generator’s output images to the same pixel range. The discriminator is in charge of determining whether the generator’s output is real; it is first trained on inputs from known real data and later on the generator’s output. Having both datasets scaled to the same pixel range is a common best practice for GANs. This range is typically [-1, 1], which matches the tanh activation commonly used on the generator’s output layer.
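
A short sketch of that scaling, assuming 8-bit images with pixel values in [0, 255]:

```python
import numpy as np

def scale_images(images):
    # Map 8-bit pixel values from [0, 255] to floats in [-1, 1],
    # the output range of a tanh-activated generator.
    images = images.astype('float32')
    return (images - 127.5) / 127.5
```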

When training a GAN, a combined model is created by merging the discriminator and generator and training it through Keras, while the discriminator is also trained directly on real and fake examples. Those examples can either be fed as a mix of real and fake data in one batch, or as purely real and purely fake batches. The latter is the better approach: keeping the data separated into real and fake batches and calling Keras’s train_on_batch on each will improve the GAN’s performance.
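
A sketch of one such discriminator update follows; discriminator, generator, and real_images are assumed to be a compiled Keras discriminator, a generator model, and a NumPy array of training images, and sample_latent_points is the sampler sketched earlier:

```python
import numpy as np

half_batch = 64
latent_dim = 100

# Train on a batch of purely real images, labelled 1...
idx = np.random.randint(0, real_images.shape[0], half_batch)
X_real, y_real = real_images[idx], np.ones((half_batch, 1))
d_loss_real = discriminator.train_on_batch(X_real, y_real)

# ...then on a batch of purely fake images, labelled 0.
z = sample_latent_points(latent_dim, half_batch)
X_fake, y_fake = generator.predict(z), np.zeros((half_batch, 1))
d_loss_fake = discriminator.train_on_batch(X_fake, y_fake)
```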

Lastly, the optimizer commonly used for GANs is Adam with a learning rate of 0.0002 and a beta_1 of 0.5. Adam is an optimization algorithm built on stochastic gradient descent that adapts the learning rate for each weight, helping the network update its weights more accurately and efficiently.
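
In Keras this configuration is a one-liner; the compile call below reuses the hypothetical discriminator model from the previous sketch:

```python
from tensorflow.keras.optimizers import Adam

# Adam with the settings recommended above: learning rate 0.0002, beta_1 0.5.
opt = Adam(learning_rate=0.0002, beta_1=0.5)
discriminator.compile(loss='binary_crossentropy', optimizer=opt)
```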

These are just some tips I have compiled from reading about and building GAN architectures. For more detail and information, books such as the previously mentioned Generative Adversarial Networks with Python offer further resources.

Filed Under: Machine Learning
