Questions tagged [autoencoders]

For questions about autoencoders, a type of artificial neural network used to learn efficient codings of data in an unsupervised manner.

Autoencoder (Wikipedia)

Autoencoders (Stanford)

Stacked Autoencoders (Stanford)

138 questions
12 votes, 4 answers

What are the purposes of autoencoders?

Autoencoders are neural networks that learn a compressed representation of the input in order to later reconstruct it, so they can be used for dimensionality reduction. They are composed of an encoder and a decoder (which can be separate neural…
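As an illustrative sketch of that encoder/decoder structure (a toy of my own, not from any answer here: plain NumPy, one linear layer each way, arbitrary dimensions), an autoencoder can be trained with gradient descent on the reconstruction MSE:

```python
import numpy as np

# Toy data: 200 samples with 8 features (dimensions are arbitrary).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))

# Encoder maps 8 -> 3 (the compressed "code"); decoder maps 3 -> 8.
W_enc = rng.normal(scale=0.1, size=(8, 3))
W_dec = rng.normal(scale=0.1, size=(3, 8))

lr = 0.05
for _ in range(1500):
    Z = X @ W_enc                      # encode: compressed representation
    X_hat = Z @ W_dec                  # decode: reconstruction
    err = X_hat - X                    # gradient of 0.5 * squared error
    W_dec -= lr * Z.T @ err / len(X)
    W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)

loss = np.mean((X @ W_enc @ W_dec - X) ** 2)
```

After training, `X @ W_enc` gives a 3-dimensional code for each 8-dimensional sample. A purely linear autoencoder like this learns the same subspace as PCA, which is why autoencoders are often described as a nonlinear generalization of PCA.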
10 votes, 1 answer

Loss jumps abruptly when I decay the learning rate with Adam optimizer in PyTorch

I'm training an autoencoder network with the Adam optimizer (with amsgrad=True) and MSE loss for a single-channel audio source separation task. Whenever I decay the learning rate by a factor, the network loss jumps abruptly and then decreases until the…
10 votes, 3 answers

What is the difference between encoders and auto-encoders?

How are the layers in an encoder connected across the network for normal encoders and auto-encoders? In general, what is the difference between encoders and auto-encoders?
m2rik • 333 • 1 • 9
10 votes, 2 answers

Can autoencoders be used for supervised learning?

Can autoencoders be used for supervised learning without adding an output layer? Can we simply feed it with a concatenated input-output vector for training, and reconstruct the output part from the input part when doing inference? The output part…
rcpinto • 2,089 • 1 • 16 • 31
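The trick this question asks about can be sketched with a toy linear autoencoder (everything below is hypothetical: the data, the linear target, and all dimensions): train on concatenated [x, y] vectors, then at inference feed [x, 0] and read off the reconstructed output slot.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))                     # inputs
y = X @ np.array([[0.2], [0.0], [0.0], [0.0]])    # toy target: a small linear function of X
XY = np.concatenate([X, y], axis=1)               # concatenated input-output vectors (4 + 1 = 5)

# Code dimension 4 matches the intrinsic dimension of the concatenated data.
W_enc = rng.normal(scale=0.01, size=(5, 4))
W_dec = rng.normal(scale=0.01, size=(4, 5))

lr = 0.1
for _ in range(2000):
    Z = XY @ W_enc
    R = Z @ W_dec
    err = R - XY
    W_dec -= lr * Z.T @ err / len(XY)
    W_enc -= lr * XY.T @ (err @ W_dec.T) / len(XY)

# Inference: zero the unknown output slot and reconstruct it from the input part.
probe = np.concatenate([X, np.zeros((300, 1))], axis=1)
y_hat = (probe @ W_enc @ W_dec)[:, -1:]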
9 votes, 3 answers

Why is the variational autoencoder's output blurred, while the GAN's output is crisp and has sharp edges?

I have observed in several papers that the variational autoencoder's output is blurred, while the GAN's output is crisp and has sharp edges. Can someone give some intuition for why that is the case? I have thought about it a lot but couldn't find any logic.
9 votes, 2 answers

Is plain autoencoder a generative model?

I am wondering how a plain autoencoder can be a generative model; a variant of it might be, but how can a plain autoencoder be generative? I know that the VAE, which is a variant of the autoencoder, is generative, as it generates a distribution for…
Nervous Hero • 145 • 4
8 votes, 1 answer

Does it make sense to use batch normalization in deep (stacked) or sparse auto-encoders?

Does it make sense to use batch normalization in deep (stacked) or sparse auto-encoders? I cannot find any resources for that. Is it safe to assume that, since it works for other DNNs, it will also make sense to use it and will offer benefits on…
Glrs • 231 • 3 • 8
8 votes, 1 answer

Are there transformer-based architectures that can produce fixed-length vector encodings given arbitrary-length text documents?

BERT encodes a piece of text such that each token (usually a word) in the input text maps to a vector in the encoding of the text. However, this makes the length of the encoding vary as a function of the input length of the text, which makes it more…
7 votes, 1 answer

Why don't VAEs suffer mode collapse?

Mode collapse is a common problem faced by GANs. I am curious why VAEs don't suffer from mode collapse.
6 votes, 2 answers

Why don't we use auto-encoders instead of GANs?

I have watched Stanford's lectures on artificial intelligence, and I currently have one question: why don't we use autoencoders instead of GANs? Basically, what a GAN does is receive a random vector and generate a new sample from it. So, if we…
6 votes, 4 answers

Is it possible for a neural network to be used to compress data?

When training a neural network, we often run into the issue of overfitting. However, is it possible to put overfitting to use? Basically, my idea is that, instead of storing a large dataset in a database, you could just train a neural network on the entire…
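The idea in this question, deliberately overfitting so that the weights memorize the data, can be sketched with a toy example (hypothetical throughout): a single linear layer trained to map a one-hot record index to the record itself.

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(size=(10, 6))     # "dataset" to memorize: 10 records of 6 values
idx = np.eye(10)                    # one-hot index for each record

W = np.zeros((10, 6))               # a single linear layer, trained to overfit
lr = 0.5
for _ in range(200):
    err = idx @ W - data            # reconstruction error per record
    W -= lr * idx.T @ err / len(idx)

record_3 = idx[3] @ W               # "query" record 3 back out of the weights
```

Note that the weight matrix here holds 10 × 6 numbers, exactly as many as the dataset, so memorization alone buys no compression; a network only compresses to the extent that the data has structure it can exploit.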
5 votes, 1 answer

How can genetic programming be used in the context of auto-encoders?

I am trying to understand how genetic programming can be used in the context of auto-encoders. Currently, I am going through two papers: Training Feedforward Neural Networks Using Genetic Algorithms (a classic one) and Training Deep Autoencoder via…
5 votes, 1 answer

How to add a dense layer after a 2d convolutional layer in a convolutional autoencoder?

I am trying to implement a convolutional autoencoder with a dense layer at the bottleneck to do some dimensional reduction. I have seen two approaches for this, which aren't particularly scalable. The first was to introduce 2 dense layers (one at…
5 votes, 1 answer

Autoencoder produces repeated artifacts after convergence

As an experiment, I have tried using an autoencoder to encode height data from the Alps; however, the decoded image is very pixellated after training for several hours, as shown in the image below. This repeating pattern is larger than the final kernel…