Let's assume the probability distributions are Gaussian (or normal) distributions. In other words, in Bayes' rule
\begin{align}
p(z|x)=\frac{p(x|z)p(z)}{p(x)}
\tag{1}\label{1}
\end{align}
the posterior $p(z|x)$, the likelihood $p(x|z)$, the prior $p(z)$ and the evidence (or marginal likelihood) $p(x)$ are all Gaussian distributions. This assumption is consistent because jointly Gaussian random variables are closed under conditioning and marginalization.
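As a concrete instance of this closure property, suppose (a simple illustrative choice) that the prior is $p(z) = \mathcal{N}(z \mid \mu_0, \sigma_0^2)$ and the likelihood is $p(x \mid z) = \mathcal{N}(x \mid z, \sigma^2)$, i.e. the likelihood mean is $z$ itself. Then the evidence is $p(x) = \mathcal{N}(x \mid \mu_0, \sigma_0^2 + \sigma^2)$, and the posterior is again Gaussian
$$
p(z \mid x) = \mathcal{N}\left(z \;\middle|\; \sigma_{\text{post}}^2 \left( \frac{\mu_0}{\sigma_0^2} + \frac{x}{\sigma^2} \right),\; \sigma_{\text{post}}^2 \right),
\qquad
\sigma_{\text{post}}^2 = \left( \frac{1}{\sigma_0^2} + \frac{1}{\sigma^2} \right)^{-1}.
$$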
For simplicity, let's further assume that they are univariate Gaussian distributions. Given that the Gaussian distribution is a continuous probability distribution, it has an associated probability density function (rather than a probability mass function, which is associated with discrete probability distributions, such as the Bernoulli distribution). The probability density function of the Gaussian distribution is
\begin{align}
f(x \mid \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2} } e^{ -\frac{(x-\mu)^2}{2\sigma^2} } \tag{2}\label{2}
\end{align}
where $\mu$ and $\sigma^2$ are respectively the mean and variance of the Gaussian distribution and $x$ is a variable (just like the variable $x$ in any mathematical function $f(x)$). So, given a concrete value of $x$, for example, $x=1$, then $f(x=1 \mid \mu, \sigma^2)$ is a so-called density value (rather than a probability, which is what a probability mass function returns, given an input). For example, let's assume that the mean is $\mu=0$ and the variance is $\sigma^2 = 1$, then, for $x=1$, the density is
$$
f(1 \mid 0, 1) = \frac{1}{\sqrt{2\pi} } e^{ -\frac{1}{2} }
$$
So, to obtain this concrete density value, I've just substituted the values of $x$, $\mu$ and $\sigma^2$ into equation \ref{2}.
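If you want to verify this numerically, here's a minimal Python sketch (variable names are just illustrative) that evaluates equation \ref{2} directly and checks the result against scipy's Gaussian density:

```python
import numpy as np
from scipy.stats import norm

x, mu, var = 1.0, 0.0, 1.0

# Equation (2) written out directly.
density = np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

# The same value from scipy's Gaussian pdf (note: scale is the standard deviation).
print(density, norm.pdf(x, loc=mu, scale=np.sqrt(var)))  # both ~0.24197
```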
To calculate the posterior $p(z|x)$ in equation \ref{1}, you just need to replace the likelihood $p(x|z)$, the prior $p(z)$ and the evidence $p(x)$ with Gaussian probability densities of the form shown in equation \ref{2}, which gives
\begin{align}
p(z|x)=\frac{f_{X\mid Z}(x \mid z, \mu_{X\mid Z}, \sigma^2_{X\mid Z}) \, f_{Z}(z \mid \mu_{Z}, \sigma^2_{Z})}{f_{X}(x \mid \mu_{X}, \sigma^2_{X})}
\tag{3}\label{3}
\end{align}
I've explicitly added a subscript to the means and variances of each probability density because, for example, the mean of the density $f_{X\mid Z}$ might be different from the mean of $f_{Z}$ or $f_{X}$. So, to get the actual density value (a real number) that represents $p(z|x)$, you just need to expand $f_{X\mid Z}$, $f_{Z}$ and $f_{X}$ using the definition of the Gaussian density in \ref{2}, each with its own mean and variance. I'll let you do this, given that it's really just a matter of picking concrete values for the means and variances and doing some algebra; the sketch below works through one such choice.
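Here's a Python sketch with one hypothetical choice of values: prior $z \sim \mathcal{N}(0, 1)$ and likelihood $x \mid z \sim \mathcal{N}(z, 0.5^2)$, so that the evidence is $\mathcal{N}(\mu_0, \sigma_0^2 + \sigma^2)$, as derived above. It evaluates equation \ref{3} and cross-checks the result against the closed-form posterior:

```python
from scipy.stats import norm

# Hypothetical concrete values (illustrative only).
mu0, sigma0 = 0.0, 1.0   # prior mean and standard deviation
sigma = 0.5              # likelihood standard deviation
x, z = 1.0, 0.8          # observed value and the point at which we query the posterior

# Evidence obtained by marginalizing z out of the joint: x ~ N(mu0, sigma0^2 + sigma^2).
evidence_sd = (sigma0**2 + sigma**2) ** 0.5

# Equation (3): likelihood * prior / evidence, each factor an instance of equation (2).
posterior = (norm.pdf(x, loc=z, scale=sigma)
             * norm.pdf(z, loc=mu0, scale=sigma0)
             / norm.pdf(x, loc=mu0, scale=evidence_sd))

# Cross-check against the closed-form Gaussian posterior.
post_var = 1.0 / (1.0 / sigma0**2 + 1.0 / sigma**2)
post_mean = post_var * (mu0 / sigma0**2 + x / sigma**2)
print(posterior, norm.pdf(z, loc=post_mean, scale=post_var**0.5))  # both ~0.892
```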
If you assume that the posterior, the likelihood, the prior or the evidence follows a different distribution, you do the same thing, but with the probability density (or mass) function of that distribution.
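For example (a sketch with an arbitrarily chosen Laplace prior in place of the Gaussian one), the evidence usually no longer has a simple closed form, so you can approximate it by numerically integrating the numerator of Bayes' rule over $z$:

```python
import numpy as np
from scipy.stats import norm, laplace
from scipy.integrate import trapezoid

x = 1.0
z_grid = np.linspace(-10.0, 10.0, 10001)

# Numerator of Bayes' rule: Gaussian likelihood times Laplace prior.
numerator = norm.pdf(x, loc=z_grid, scale=0.5) * laplace.pdf(z_grid, loc=0.0, scale=1.0)

# Evidence p(x) approximated by integrating the numerator over z.
evidence = trapezoid(numerator, z_grid)

posterior = numerator / evidence  # posterior density evaluated on the grid
```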
In the context of the variational auto-encoder, you will be learning the mean and variance of the distribution, so they will not be fixed: they are the parameters you want to find. However, this does not change the way you apply Bayes' rule.
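As a minimal sketch of that last point (using PyTorch, with illustrative names): treat $\mu$ and $\log \sigma^2$ as learnable tensors, exactly as a VAE's encoder would output them, and differentiate the log-density with respect to them:

```python
import torch

# Learnable parameters, as a VAE encoder would produce them.
mu = torch.zeros(1, requires_grad=True)
log_var = torch.zeros(1, requires_grad=True)  # log-variance, for numerical stability

x = torch.tensor([1.0])
dist = torch.distributions.Normal(mu, torch.exp(0.5 * log_var))

log_density = dist.log_prob(x).sum()  # log of equation (2), evaluated at x
log_density.backward()                # gradients w.r.t. mu and log_var
print(mu.grad, log_var.grad)          # these drive the learning of mu and log_var
```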