I am new to Deep Learning.
Suppose we have a neural network with one input layer, one hidden layer, and one output layer. Let's refer to the weights from the input layer to the hidden layer as $W$ and the weights from the hidden layer to the output layer as $V$. Suppose we have initialized $W$ and $V$, run a forward pass through the network, and then updated $V$ via backpropagation.
When estimating the ideal weights $W$ via gradient descent, do we hold $V$ constant (since we have already computed its update), or do we let $V$ keep updating along with $W$?
In the code, which I am writing from scratch, should the update for $V$ go inside the same for loop that runs gradient descent on $W$? Or do we simply reuse the same $V$ for every iteration of gradient descent?
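To make the question concrete, here is a minimal sketch of the kind of training loop I have in mind (NumPy only; the toy data, the tanh activation, and the learning rate `lr` are just placeholders I made up, not anything specific I am committed to):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # toy inputs
y = rng.normal(size=(100, 1))          # toy targets

W = rng.normal(size=(3, 4)) * 0.1      # input -> hidden weights
V = rng.normal(size=(4, 1)) * 0.1      # hidden -> output weights
lr = 0.01

for step in range(1000):
    # forward pass
    h = np.tanh(X @ W)                 # hidden activations
    y_hat = h @ V                      # output (linear)

    # backward pass (mean squared error loss)
    d_out = 2 * (y_hat - y) / len(X)   # dLoss/dy_hat
    grad_V = h.T @ d_out               # gradient w.r.t. V
    d_hidden = (d_out @ V.T) * (1 - h ** 2)   # backprop through tanh
    grad_W = X.T @ d_hidden            # gradient w.r.t. W

    # This is the part I am unsure about: do both of these updates
    # happen on every iteration, or is V computed once and then held
    # fixed while only W keeps being updated inside this loop?
    V -= lr * grad_V
    W -= lr * grad_W
```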