
In the Auto-Encoding Variational Bayes paper, the closed-form expression for the KL term is $$\frac{1}{2} \sum_{j=1}^{J} \bigl(1 + \log(\sigma_j^2) - \mu_j^2 - \sigma_j^2\bigr) \tag{10}$$
but the corresponding term in the lower bound is $$-D_{KL}\bigl(q(z|x)\,\|\,p(z)\bigr) \tag{7}$$ with a minus sign in front. Why is that?
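For reference, here is a small numeric check I put together (my own sketch using PyTorch's `torch.distributions`, not code from the paper); it compares the sum in Eq. (10) against a directly computed KL divergence between $q = \mathcal{N}(\mu, \sigma^2)$ and the standard normal prior $p = \mathcal{N}(0, I)$:

```python
# Sketch only: numerically compare the expression in Eq. (10) with -D_KL(q || p)
# for a diagonal Gaussian q = N(mu, sigma^2) and prior p = N(0, I).
import torch
from torch.distributions import Normal, kl_divergence

torch.manual_seed(0)
mu = torch.randn(4)          # arbitrary means
sigma = torch.rand(4) + 0.1  # arbitrary positive standard deviations

# Right-hand side of Eq. (10): 1/2 * sum(1 + log(sigma^2) - mu^2 - sigma^2)
eq10 = 0.5 * torch.sum(1 + torch.log(sigma**2) - mu**2 - sigma**2)

# KL divergence between q = N(mu, sigma^2) and p = N(0, I), summed over dimensions
kl = kl_divergence(Normal(mu, sigma), Normal(torch.zeros(4), torch.ones(4))).sum()

print(eq10.item(), (-kl).item())
```

The two printed values should match up to floating-point error.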

  • Does this answer your question? [How is this Pytorch expression equivalent to the KL divergence?](https://ai.stackexchange.com/questions/26366/how-is-this-pytorch-expression-equivalent-to-the-kl-divergence) In particular, see my answer there ;) – nbro Jul 17 '23 at 15:22
  • https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence#Definition As you can see, there is a very simple way to get rid of the minus sign. – Alberto Jul 17 '23 at 17:13
  • I see, I understand now. Thank you. – diffusion stable Jul 30 '23 at 07:20

0 Answers