PyTorch freeze part of the layers

Jimmy (xiaoke) Shen
5 min read · Jun 17, 2020

In PyTorch we can freeze a layer by setting the requires_grad attribute of its parameters to False. Freezing weights is helpful when we want to fine-tune a pretrained model while keeping some of its layers fixed.

Here I’d like to explore this process.
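In its simplest form, the freeze is just a flag flip on each parameter of the layer. A minimal sketch, using a standalone nn.Linear as a stand-in for a layer inside a larger model:

import torch.nn as nn

layer = nn.Linear(2, 4)          # stand-in for any layer we want to freeze
for param in layer.parameters():
    param.requires_grad = False  # no gradients will be computed for this layer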

Build a toy model

import torch.nn as nn
from torch.autograd import Variable
import torch.optim as optim


# A small three-layer network we will use to explore freezing.
class Net(nn.Module):

    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(2, 4)
        self.fc2 = nn.Linear(4, 3)
        self.out = nn.Linear(3, 1)
        self.out_act = nn.Sigmoid()

    def forward(self, inputs):
        a1 = self.fc1(inputs)
        a2 = self.fc2(a1)
        a3 = self.out(a2)
        y = self.out_act(a3)
        return y
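As a quick sanity check (the 5×2 input size here is arbitrary), we can push a random batch through the network and confirm the output shape:

import torch

net = Net()
x = torch.randn(5, 2)   # batch of 5 samples, 2 features each
print(net(x).shape)     # torch.Size([5, 1])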

Explore in the terminal step by step

Define the model

>>> import torch.nn as nn
>>> from torch.autograd import Variable
>>> import torch.optim as optim
>>> class Net(nn.Module):
...
...     def __init__(self):
...         super().__init__()
...         self.fc1 = nn.Linear(2, 4)
...         self.fc2 = nn.Linear(4, 3)
...         self.out = nn.Linear(3, 1)
...         self.out_act = nn.Sigmoid()
...
...     def forward(self, inputs):
...         a1 = self.fc1(inputs)
...         a2 = self.fc2(a1)
...         a3 = self.out(a2)
...         y = self.out_act(a3)
...         return y
...

Output the parameters

>>> net = Net()
>>> for name, para in net.named_parameters():
...     print(name, para.requires_grad)
...
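From here the natural next step is the freeze itself. Below is a minimal sketch of that step, assuming we want to keep fc1 fixed and train only fc2 and out; the choice of layer, the SGD optimizer, and the dummy data are only for illustration:

import torch
import torch.nn as nn
import torch.optim as optim

net = Net()

# Freeze fc1: its parameters will no longer receive gradients.
for param in net.fc1.parameters():
    param.requires_grad = False

# Confirm which parameters are still trainable.
for name, para in net.named_parameters():
    print(name, para.requires_grad)
# fc1.weight False
# fc1.bias False
# fc2.weight True
# fc2.bias True
# out.weight True
# out.bias True

# Pass only the still-trainable parameters to the optimizer.
optimizer = optim.SGD((p for p in net.parameters() if p.requires_grad), lr=0.1)

# One dummy training step: the frozen layer should stay untouched.
x, y = torch.randn(8, 2), torch.rand(8, 1)
frozen_before = net.fc1.weight.clone()
loss = nn.functional.binary_cross_entropy(net(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(torch.equal(net.fc1.weight, frozen_before))  # True: fc1 did not change

Filtering the parameters passed to the optimizer is not strictly required once requires_grad is False, but it keeps the optimizer state small and makes the intent explicit.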
