I'm constructing a neural network where the weight matrix of my first hidden layer (connected to the input) is fixed to the identity matrix, but the biases are trainable.
Is there a way to "freeze" the weights in a specific layer so they receive no updates during training, while still allowing the biases in that same layer to be updated?
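For concreteness, here is a minimal sketch of what I have in mind, assuming PyTorch (no framework is fixed yet, and `in_features` is just a placeholder for the real input width):

```python
import torch
import torch.nn as nn

in_features = 4  # placeholder; the real model's input width goes here

# Square layer so the weight matrix can be the identity.
layer = nn.Linear(in_features, in_features)

# Set the weights to the identity, then freeze them so the optimizer
# never updates them; the bias parameter stays trainable.
with torch.no_grad():
    layer.weight.copy_(torch.eye(in_features))
layer.weight.requires_grad = False

# Pass only the still-trainable parameters (here, just the bias)
# to the optimizer.
optimizer = torch.optim.SGD(
    (p for p in layer.parameters() if p.requires_grad), lr=0.1
)
```

Is per-parameter `requires_grad` like this the right approach, or is there a more idiomatic way to freeze only the weights of one layer?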