
I'm constructing a neural network where the weights of my first hidden layer (connected to the input) are all 1 (identity matrix), but the biases are variable.

Is there a way to "freeze" any updates/training to the weights in a specific layer, but continue to allow the biases in that specific layer to be updated?

Maybe this will help: https://discuss.pytorch.org/t/how-the-pytorch-freeze-network-in-some-layers-only-the-rest-of-the-training/7088 – GKozinski Aug 19 '22 at 20:08

1 Answer


You can do something like this:

# Freeze the weights of this layer; the bias keeps requires_grad=True
# (the default), so it continues to be updated during training.
model.linear1.weight.requires_grad = False
model.linear1.bias.requires_grad = True

for a PyTorch model where linear1 is defined as:

self.linear1 = nn.Linear(5, 5)

as in this code snippet
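For completeness, here is a minimal, self-contained sketch of the idea. The Net class, layer sizes, optimizer, and training step below are illustrative assumptions, not part of the original question: the first layer's weight is set to the identity and frozen, while its bias remains trainable.

# A minimal sketch (assumed model/optimizer, not from the original post):
# freeze linear1's weight but keep its bias trainable.
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear1 = nn.Linear(5, 5)
        self.linear2 = nn.Linear(5, 2)

    def forward(self, x):
        return self.linear2(torch.relu(self.linear1(x)))

model = Net()

# Set the first layer's weight to the identity and freeze it;
# the bias keeps requires_grad=True (the default) and is still trained.
with torch.no_grad():
    model.linear1.weight.copy_(torch.eye(5))
model.linear1.weight.requires_grad = False

# Only pass trainable parameters to the optimizer.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=0.1
)

# One training step to show the frozen weight does not change.
x = torch.randn(8, 5)
y = torch.randn(8, 2)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
optimizer.step()

print(torch.equal(model.linear1.weight, torch.eye(5)))  # True: weight unchanged
print(model.linear1.bias.grad is not None)              # True: bias received a gradient

Filtering the parameters passed to the optimizer is optional here, since parameters with requires_grad=False never receive gradients, but it makes the intent explicit.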
