
Is it possible to use a VAE to reconstruct an image starting from an initial image instead of using K.random_normal, as shown in the “sampling” function of this example?

I have used a sample image with the VAE encoder to get z_mean and z_logvar.
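For reference, the `sampling` function I'm referring to looks roughly like this (paraphrasing the Keras VAE example):

```python
from keras import backend as K

def sampling(args):
    # Reparameterization trick: z = mu + sigma * epsilon,
    # with epsilon drawn from a standard normal.
    z_mean, z_log_var = args
    batch = K.shape(z_mean)[0]
    dim = K.int_shape(z_mean)[1]
    epsilon = K.random_normal(shape=(batch, dim))
    return z_mean + K.exp(0.5 * z_log_var) * epsilon
```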

I have also been given 1000 pixels of the image; the rest of the image is blank.

Now, I want to reconstruct the sample image using the decoder, under the constraint that those 1000 given pixels remain unchanged. The remaining pixels should be reconstructed so that they are as close to the initial sample image as possible. In other words, my starting point for the decoder is a mostly blank image with some pixels that must not change.

How can I modify the decoder to generate an image under this constraint? Is it possible? Are there variants of the VAE that would make this possible, i.e. that can infer the latent variables starting from an initial point?

John Watts

2 Answers


The thing is, the decoder samples from the latent $\mu$ and $\sigma$, so you can't sample from a raw image directly. But if you feed a random image into the encoder of a trained VAE and optimize that image to match some sample image (via the reconstruction loss), your random input image will converge to the target sample.

This will work when the following constraints on the VAE are satisfied (a minimal sketch follows the list):

  1. The target sample comes from the distribution the VAE was trained on.

  2. The parameters of the VAE are frozen after training.

  3. The input image is treated as a set of optimizable parameters, so the reconstruction loss can be backpropagated all the way to the input pixels.
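A minimal sketch of the idea, written in TensorFlow 2 style (hypothetical names: `vae` is the trained model, `target` is the sample image; the repo linked in the comments below has a complete example):

```python
import tensorflow as tf

# Sketch only. Assumes `vae` is a trained Keras VAE mapping images to
# reconstructions, and `target` is the sample image to converge to
# (both hypothetical names, defined elsewhere).
vae.trainable = False  # constraint 2: the VAE's weights stay frozen

# Constraint 3: the input image itself is the set of optimizable parameters.
x = tf.Variable(tf.random.uniform((1, 28, 28, 1)))
optimizer = tf.keras.optimizers.Adam(learning_rate=0.05)

for step in range(500):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(vae(x) - target))  # reconstruction loss
    # Gradients flow through the frozen VAE back to the input pixels.
    grads = tape.gradient(loss, [x])
    optimizer.apply_gradients(zip(grads, [x]))
    x.assign(tf.clip_by_value(x, 0.0, 1.0))  # keep pixel values in [0, 1]
```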

Ari K
  • Is it possible to leave the 1000 or so input image pixels unchanged? Is there a code sample I can look at? Thanks for the response. I am happy this is at least possible. – John Watts May 04 '18 at 02:51
  • Np! With this specific architecture example it's not possible to leave the input pixels unchanged, since they are the values that will eventually converge to some output target. Maybe I'm misunderstanding the question, though. `main.py` in [this](https://github.com/arikanev/VAE_exps) repo is sample code that trains a VAE on MNIST, then freezes the encoder+decoder weights, then generates a random input that is updated to converge to a target from the MNIST distribution. – Ari K May 04 '18 at 17:37

You could use a VAE as the previous answer describes, though it will not work well in practice. I think a denoising autoencoder (DAE) is better suited to your problem, because during training its input is corrupted stochastically, so it must learn to guess the distribution of the missing information, i.e. to reconstruct the clean original input.

One could argue that a VAE is better than a DAE at modeling $p(x)$, because the VAE introduces randomness at the latent layer, whereas a DAE-like algorithm keeps injecting noise starting from the input layer. But suppose your data is concentrated on the 1-D curved manifold in the figure below: all a VAE can do is pick some random latent value $z$ and output $p(x \mid z)$ (which is Gaussian, by the way), while a DAE would learn to map a corrupted data point $\tilde{x}$ back to the original data point $x$.

[Figure: data points concentrated on a 1-D curved manifold, with a DAE mapping corrupted points $\tilde{x}$ back onto the manifold.]
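For concreteness, here is a minimal DAE training sketch in Keras (assumptions: a hypothetical `x_train` array of flattened 28x28 images scaled to [0, 1], e.g. MNIST; layer sizes are illustrative):

```python
import numpy as np
from keras.layers import Input, Dense
from keras.models import Model

# Simple dense denoising autoencoder.
inp = Input(shape=(784,))
hidden = Dense(128, activation='relu')(inp)
out = Dense(784, activation='sigmoid')(hidden)
dae = Model(inp, out)
dae.compile(optimizer='adam', loss='binary_crossentropy')

# Corruption matched to your setting: blank out most pixels at random and
# train the network to reconstruct the clean original. (A fixed corrupted
# copy is used here for brevity; resampling the mask each epoch is closer
# to a true DAE.)
mask = (np.random.uniform(size=x_train.shape) < 0.2).astype('float32')
dae.fit(x_train * mask, x_train, epochs=10, batch_size=128)
```

At inference time you would feed your mostly blank image (with the 1000 given pixels filled in) and take the network's output for the missing pixels.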

Fadi Bakoura