Questions tagged [wasserstein-gan]

For questions related to the Wasserstein GAN, introduced in "Wasserstein Generative Adversarial Networks" (2017, PMLR) by Martin Arjovsky et al.

16 questions
5 votes · 0 answers

Wasserstein GAN: Is the Implementation of the Critic Loss Correct?

The WGAN paper concretely proposes Algorithm 1 (cf. page 8). Now, they also state what their loss for the critic and the generator is. When implementing the critic loss (lines 5 and 6 of Algorithm 1), they maximize over the parameters $w$ (instead of…
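The critic objective this question refers to can be sketched numerically. The following is a hypothetical numpy illustration with a toy linear critic (`critic`, `w`, and the batch shapes are made up for the example; this is not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear critic f_w(x) = w . x, a stand-in for the real network
# (hypothetical example for illustration only).
w = rng.normal(size=4)

def critic(x):
    return x @ w

real = rng.normal(loc=1.0, size=(64, 4))   # batch of "real" samples
fake = rng.normal(loc=-1.0, size=(64, 4))  # batch of generator outputs

# Lines 5-6 of Algorithm 1: the critic maximizes
#   mean f_w(real) - mean f_w(fake)
# over w, which gradient-descent frameworks implement by minimizing
# the negation:
critic_loss = critic(fake).mean() - critic(real).mean()

# The generator in turn minimizes -mean f_w(fake), i.e. it maximizes
# the critic's score on its own samples.
generator_loss = -critic(fake).mean()
```

The sign flip (maximize vs. minimize the negation) is exactly the point of confusion the question raises.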
3 votes · 1 answer

What is the reason for mode collapse in GAN as opposed to WGAN?

In this article I am reading: $D_{KL}$ gives us infinity when two distributions are disjoint. The value of $D_{JS}$ has a sudden jump and is not differentiable at $\theta=0$. Only the Wasserstein metric provides a smooth measure, which is super helpful for a…
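The disjoint-support claim is easy to check numerically. A toy numpy sketch (my own illustration, not from the article) with all mass at $x=0$ versus all mass at $x=\theta$:

```python
import numpy as np

theta = 0.7

# Two disjoint 1-D distributions: all mass at x = 0 vs. all mass at
# x = theta (toy check of the article's claim).
p = np.zeros(1000)
q = np.full(1000, theta)

# KL(p || q) is infinite here: q puts zero probability where p has mass,
# so KL gives no usable training signal for disjoint supports.

# For equal-size 1-D samples, the 1-Wasserstein distance is the mean
# absolute difference of the sorted samples:
w1 = np.mean(np.abs(np.sort(p) - np.sort(q)))
# w1 equals |theta|, so it shrinks smoothly as theta -> 0, unlike KL/JS.
```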
2 votes · 0 answers

GAN: Why does a perfect discriminator mean no gradient for the generator?

In the training of a Generative Adversarial Network (GAN) system, a perfect discriminator (D) is one which outputs 1 ("true image") for all images of the training dataset and 0 ("false image") for all images created by the generator (G). I've read…
2 votes · 1 answer

What is being optimized with WGAN loss? Is the generator maximizing or minimizing the critic value?

I am kind of new to the field of GANs and decided to develop a WGAN. All of the information online seems to be kind of contradicting itself. The more I read, the more I become confused, so I'm hoping y'all can clarify my misunderstanding with WGAN…
2 votes · 1 answer

Why do we use a linear interpolation of fake and real data to penalize the gradient of the discriminator in WGAN-GP?

I'm trying to better frame/summarize the formulations and motivations behind Wasserstein GAN with gradient penalty, based on my understanding. For the basic GAN we are trying to optimize the following quantity: $$\min_\theta \max_\phi \mathbb{E}_{x…
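The interpolation step the title asks about can be sketched in a few lines of numpy (a hypothetical illustration; the batch shapes and variable names are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

real = rng.normal(size=(32, 8))  # batch of real samples
fake = rng.normal(size=(32, 8))  # batch of generated samples

# WGAN-GP draws one epsilon ~ U[0, 1] per example and penalizes the
# critic's gradient norm at points on the straight line between each
# real/fake pair:
eps = rng.uniform(size=(32, 1))          # broadcasts over features
x_hat = eps * real + (1.0 - eps) * fake  # convex combination

# Each row of x_hat stays inside the component-wise bounds of its
# real/fake pair, since it is a convex combination of the two.
```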
2 votes · 1 answer

Classifying generated samples with Wasserstein-GAN as real or fake

I'm quite new to GANs and I am trying to use a Wasserstein GAN as an augmentation technique. I found this article https://www.sciencedirect.com/science/article/pii/S2095809918301127, and would like to replicate their method of evaluating the GAN.…
2 votes · 1 answer

Aren't scores in the Wasserstein GAN probabilities?

I am quite new to GANs and I am reading about WGAN vs. DCGAN. Relating to the Wasserstein GAN (WGAN), I read here: Instead of using a discriminator to classify or predict the probability of generated images as being real or fake, the WGAN changes or…
1 vote · 0 answers

How can I estimate the minimum number of training samples needed to get interesting results with WGAN?

Let's say we have a WGAN where the generator and critic have 8 layers and 5 million parameters each. I know that the greater the number of training samples the better, but is there a way to know the minimum number of training examples needed? Does…
1 vote · 0 answers

WGAN-GP Loss formalization

I have to write the formalization of the loss function of my network, built following the WGAN-GP model. The discriminator takes 3 consecutive images as input (such as 3 consecutive frames of a video) and must evaluate if the intermediate image is a…
1 vote · 0 answers

Under what conditions can one find the optimal critic in WGAN?

The Kantorovich-Rubinstein duality for the optimal transport problem implies that the Wasserstein distance between two distributions $\mu_1$ and $\mu_2$ can be computed as (equation 2 in section 3 in the WGAN paper) $$W(\mu_1,\mu_2)=\underset{f\in…
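The excerpt's equation is cut off. For context, the Kantorovich-Rubinstein duality it refers to (equation 2 in section 3 of the WGAN paper), written in the excerpt's notation and assuming $\mathrm{Lip}_1$ denotes the set of 1-Lipschitz functions, reads:

$$W(\mu_1,\mu_2)=\underset{f\in \mathrm{Lip}_1}{\sup}\ \mathbb{E}_{x\sim\mu_1}[f(x)]-\mathbb{E}_{x\sim\mu_2}[f(x)]$$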
1 vote · 0 answers

Wasserstein GAN with non-negative weights in the critic

I want to train a WGAN where the convolution layers in the critic are only allowed to have non-negative weights (for a technical reason). The biases, nonetheless, can take both +/- values. There is no constraint on the generator weights. I did a toy…
0 votes · 0 answers

Why would one still use a traditional GAN architecture or WGAN architecture instead of a WGAN-GP architecture?

I've been diving into the literature of GANs, and quite early on, I was pretty convinced that WGAN-GPs were the way to go. The WGAN-GP architecture is, as far as I know, theoretically and empirically superior to both the traditional GAN architecture…
0 votes · 2 answers

Is Relativistic GAN better than WGAN-GP?

I am currently reading the ESRGAN paper and I noticed that they used a Relativistic GAN for training the discriminator. So, is it because the Relativistic GAN leads to better results than WGAN-GP?
0 votes · 0 answers

Possible improvements to WGAN-GP output images

I am mapping rather complex data into what essentially amounts to a greyscale image, to take better advantage of GANs for generative purposes. Here is an example of some real data: All real data is of the same shape (108 across x 12 high), with an…
0 votes · 1 answer

How to calculate the gradient penalty proposed in "Improved Training of Wasserstein GANs"?

The research paper titled Improved Training of Wasserstein GANs proposed a gradient penalty in order to avoid undesired behavior caused by weight clipping of the discriminator: We now propose an alternative way to enforce the Lipschitz constraint. A…
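The penalty term the question asks about can be checked with a toy linear critic, whose input-gradient is constant and therefore verifiable by hand (a hypothetical numpy sketch; a real implementation would obtain the gradient from autograd rather than analytically):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear critic f(x) = w . x, so grad_x f(x) = w everywhere
# (illustrative stand-in, not the paper's code).
w = rng.normal(size=8)

real = rng.normal(size=(32, 8))
fake = rng.normal(size=(32, 8))

# Sample interpolates x_hat on the segments between real/fake pairs
eps = rng.uniform(size=(32, 1))
x_hat = eps * real + (1.0 - eps) * fake

# For the linear critic, the gradient at every x_hat is just w;
# in practice this comes from automatic differentiation.
grads = np.tile(w, (32, 1))

# Gradient penalty from the paper: E[(||grad||_2 - 1)^2]
grad_norms = np.linalg.norm(grads, axis=1)
penalty = np.mean((grad_norms - 1.0) ** 2)

# Added to the critic loss with coefficient lambda = 10
# (the paper's suggested value).
lam = 10.0
critic_loss = (fake @ w).mean() - (real @ w).mean() + lam * penalty
```

Because the gradient is `w` at every interpolate, the penalty here collapses to $(\lVert w\rVert_2 - 1)^2$, which makes the two-sided "norm equal to 1" target easy to see.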