I was wondering what happens when you extrapolate out of the latent space distribution (noise vector) for a Generative adversarial network (GAN). Can anybody explain this?
1 Answer
In general, generative adversarial networks (GANs) are designed to generate new data that resembles the training data, not to extrapolate to completely new data. That said, in some cases a GAN may still produce data that is close to what you want, even if it is not exactly what you want.
When you sample a noise vector outside the latent space distribution (i.e., extrapolate), you push the generator outside the region it was trained on, so the generated data falls outside the training data distribution. Such outputs are usually less realistic than samples drawn from within the distribution. The results are unpredictable and often contain artefacts or are simply unusable: the generator's behaviour is only well-defined where the latent distribution had support during training, so extrapolating beyond it is not well-defined either.
Additionally, a higher-dimensional latent space allows for more variance in the generated data, and makes it easier to produce outputs that were not represented in the original training distribution.
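To make "outside the latent distribution" concrete, here is a minimal numpy sketch. The generator is a randomly initialized stand-in (an assumption for illustration, not a trained GAN): the point is that standard-normal latent vectors in high dimensions concentrate on a shell of radius about sqrt(d), so scaling them up produces inputs the generator essentially never sees during training, and the outputs degenerate (here, they saturate at the tanh limits).

```python
import numpy as np

# Stand-in "generator": a single random tanh layer mapping a 100-dim
# latent vector to a 784-dim output. Hypothetical toy setup, not a
# trained model -- it only illustrates in- vs out-of-distribution inputs.
rng = np.random.default_rng(0)
latent_dim, out_dim = 100, 784
W = rng.normal(scale=1.0 / np.sqrt(latent_dim), size=(latent_dim, out_dim))

def generator(z):
    return np.tanh(z @ W)

# In-distribution: z ~ N(0, I). In high dimensions these samples
# concentrate on a shell of radius ~ sqrt(latent_dim) = 10.
z_in = rng.standard_normal((1000, latent_dim))
typical_norm = np.linalg.norm(z_in, axis=1).mean()

# Extrapolation: scale z far beyond that shell, giving latent vectors
# far outside the training distribution.
z_out = 10.0 * rng.standard_normal((1000, latent_dim))
extrapolated_norm = np.linalg.norm(z_out, axis=1).mean()

print(f"typical |z| ~ {typical_norm:.1f}")        # near sqrt(100) = 10
print(f"scaled  |z| ~ {extrapolated_norm:.1f}")   # roughly 10x larger

# One way artefacts show up in this toy: extreme inputs drive the tanh
# outputs to saturation, collapsing them toward the limits +/-1.
sat_in = np.mean(np.abs(generator(z_in)) > 0.99)
sat_out = np.mean(np.abs(generator(z_out)) > 0.99)
print(f"saturated fraction, in-dist: {sat_in:.2f}")
print(f"saturated fraction, scaled:  {sat_out:.2f}")
```

In practice this is why techniques like the truncation trick do the opposite of extrapolation: they resample or clip latent vectors to stay well inside the training distribution, trading diversity for sample quality.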
