I'm working in a regression setting, predicting a scalar value $y$ from an input $\textbf{x} \in \mathbb{R}^D$, and I'm interested in detecting when my model is fed something that lies outside the (unknown) training distribution $p(\textbf{x})$. For simplicity, assume I'm using a simple neural network $f_\theta:\mathbb{R}^D \rightarrow \mathbb{R}$ to predict a single scalar property value, trained on an initial dataset $\mathcal{D} = \{(\textbf{x}_i \, , y_i)\}_{i=1}^N$; that is, my task is specifically regression.
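For concreteness, here is a minimal sketch of the setup I have in mind (the architecture and names are just placeholders, not my actual model):

```python
import torch
import torch.nn as nn

# Hypothetical minimal regressor f_theta: R^D -> R
class Regressor(nn.Module):
    def __init__(self, d_in: int, d_hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, d_hidden),
            nn.ReLU(),
            nn.Linear(d_hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, D) -> one scalar prediction per sample
        return self.net(x).squeeze(-1)
```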
What I'd like to achieve is the following: when feeding my neural net a new input $\tilde{\textbf{x}}$, I could somehow retrieve a confidence score telling me whether $\tilde{\textbf{x}}$ lies outside the range of instances observed in the training dataset.
One way of doing that would of course be to estimate the density of the training data, $p_\theta(\textbf{x})$, and check whether the new input $\tilde{\textbf{x}}$ falls in a low-likelihood region of $p_\theta$. This approach has been used for images (https://arxiv.org/pdf/1912.03263.pdf), but generative models are hard to train.
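To illustrate what I mean by the density-based route, here's a hedged sketch using a simple kernel density estimator in place of a deep generative model (the data, bandwidth, and percentile threshold are arbitrary placeholders, and KDE scales poorly with $D$):

```python
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 5))       # stand-in for the training inputs

# Fit a density estimate p_theta(x) on the training inputs
kde = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(X_train)

# Flag inputs below, say, the 1st percentile of training log-likelihoods
log_p_train = kde.score_samples(X_train)
threshold = np.percentile(log_p_train, 1)

x_new = rng.normal(loc=5.0, size=(1, 5))   # a point far from the training mass
is_ood = kde.score_samples(x_new)[0] < threshold
print(is_ood)                              # likely True: low likelihood under p_theta
```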
Instead, I was looking at recently proposed papers that use energy scores to detect out-of-distribution samples (paper1, paper2), but the examples seem to refer specifically to classification settings.
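For reference, my understanding of the energy score in those classification papers is that it is computed from the $K$ class logits as $E(\textbf{x}) = -T \log \sum_{k=1}^{K} e^{f_k(\textbf{x})/T}$, which is exactly what seems to have no obvious analogue when the network outputs a single scalar:

```python
import torch

def energy_score(logits: torch.Tensor, T: float = 1.0) -> torch.Tensor:
    # E(x) = -T * logsumexp(f_k(x)/T) over the K class logits;
    # lower energy <=> more in-distribution in the classification setting
    return -T * torch.logsumexp(logits / T, dim=-1)

# With a single scalar output this collapses to -f(x), i.e. just the
# negated regression prediction -- hence my question.
logits = torch.randn(4, 10)   # hypothetical batch of 10-class logits
print(energy_score(logits))
```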
As I'm not too familiar with energy-based models: is there a way such frameworks could be applied to regression settings?