In this blog, I keep track of new trends in GAN research.
- NVIDIA introduces “Training Generative Adversarial Networks with Limited Data” [1].
A common question is: "how much data is enough to train a GAN model?" Typically, many thousands of images. The problem is that training a GAN with too little data usually causes the discriminator to overfit, which in turn makes training diverge.
In summary, they control the augmentation strength p by initializing it to zero and adjusting its value once every four mini-batches based on a chosen overfitting heuristic.
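The adaptive control of p can be sketched as a simple feedback loop. This is a minimal illustration, not NVIDIA's implementation: the target value 0.6 and the 4-mini-batch interval follow the paper, while the batch size, the 500k-image adjustment span, and the heuristic r_t = E[sign(D(real))] are assumed defaults for this sketch.

```python
import numpy as np

TARGET = 0.6        # target value for the overfitting heuristic (per the paper)
ADA_INTERVAL = 4    # adjust p once every four mini-batches (per the paper)
BATCH = 64          # assumed batch size for this sketch
# Step size chosen so p can go from 0 to 1 over ~500k real images (assumption)
ADJUST = BATCH * ADA_INTERVAL / (500 * 1000)

def update_p(p, d_real_outputs):
    """One controller step.

    r_t = E[sign(D(real))] estimates discriminator overfitting:
    if r_t exceeds the target, augmentation strength p is raised,
    otherwise it is lowered; p stays clamped to [0, 1].
    """
    r_t = float(np.mean(np.sign(d_real_outputs)))
    p += ADJUST if r_t > TARGET else -ADJUST
    return float(np.clip(p, 0.0, 1.0))
```

For example, if the discriminator consistently scores real images positively (a sign of overfitting), `update_p` nudges p upward by a small fixed amount; p then controls the probability that each augmentation is applied to discriminator inputs.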
In this work, they propose an adaptive discriminator augmentation (ADA) mechanism for limited-data regimes. Combined with their model, good results are now possible with only a few thousand training images.