I am trying to create custom weights for a conv1d layer, as follows:
import torch
from torch import nn
conv = nn.Conv1d(1,1,kernel_size=2)
K = torch.Tensor([[[0.5, 0.5]]])
with torch.no_grad():
    conv.weight = K
But I get the error:
"File “D:\ProgramData\Miniconda3\envs\pytorchcuda102\lib\site-packages\torch\nn\modules\module.py”, line 611, in setattr
raise TypeError(“cannot assign ‘{}’ as parameter ‘{}’ "
TypeError: cannot assign ‘torch.FloatTensor’ as parameter ‘weight’ (torch.nn.Parameter or None expected)”
What am I doing wrong?
Answer 0 (score: 0)
You're close. Note that you don't need the with torch.no_grad() block, since no gradients are computed during the weight assignment. All you need to do is remove it and assign to conv.weight.data instead of conv.weight, so that you access the underlying parameter values directly. See the fixed code below:
import torch
from torch import nn
conv = nn.Conv1d(1,1,kernel_size=2)
K = torch.Tensor([[[0.5, 0.5]]])
conv.weight.data = K
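As a quick sanity check (my addition, not part of the original answer), you can run the layer on a small input and confirm the custom kernel is applied; note that the output still includes the randomly initialized bias term:

x = torch.tensor([[[1.0, 3.0, 5.0]]])  # shape (batch=1, channels=1, length=3)
print(conv(x))       # roughly [[2.0, 4.0]] shifted by conv.bias, i.e. a two-point moving average
print(conv.weight)   # confirms the kernel is now [[[0.5, 0.5]]]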
Answer 1 (score: 0)
Per the discussion here, update the code to use torch.nn.Parameter(), which essentially makes the weight recognizable as a parameter by the optimizer.
import torch
from torch import nn
conv = nn.Conv1d(1,1,kernel_size=2)
K = torch.tensor([[[0.5, 0.5]]])  # shape (1, 1, 2) matches (out_channels, in_channels, kernel_size)
conv.weight = nn.Parameter(K)     # wrap in nn.Parameter so it is registered as a learnable parameter
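To confirm that the wrapped tensor is now registered as a learnable parameter (a check I've added, not in the original answer), you can inspect conv.parameters(), which is exactly what an optimizer such as torch.optim.SGD iterates over:

print(type(conv.weight))                     # <class 'torch.nn.parameter.Parameter'>
print([p.shape for p in conv.parameters()])  # weight (1, 1, 2) and bias (1,)
optimizer = torch.optim.SGD(conv.parameters(), lr=0.1)  # the custom weight will now be updated during training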
For larger and more complex models, you can look at this toy example that uses PyTorch's state_dict() method.
import torch
import torch.nn as nn
import torchvision
net = torchvision.models.resnet18(pretrained=True)
pretrained_dict = net.state_dict()
conv_weights = pretrained_dict['conv1.weight']  # shape (64, 3, 7, 7)
new = torch.tensor((), dtype=torch.int32)
new = new.new_ones(conv_weights.shape)  # tensor of all ones with the same shape
pretrained_dict['conv1.weight'] = new
net.load_state_dict(pretrained_dict)
param = list(net.parameters())
print(param[0])
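As a short follow-up check (my addition, not from the answer), you can verify that the assignment took effect after load_state_dict: every element of the first convolution's weight should now be 1.

print(torch.all(net.state_dict()['conv1.weight'] == 1))  # tensor(True)
print(param[0].shape)                                     # torch.Size([64, 3, 7, 7])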