Here is my code:
import numpy as np
import torch
from torch import nn

l1 = nn.Conv2d(3, 2, kernel_size=3, stride=2).double() #Layer
l1wt = l1.weight.data #filter
inputs = np.random.rand(3, 3, 5, 5) #input
it = torch.from_numpy(inputs) #input tensor
output1 = l1(it) #output
output2 = torch.nn.functional.conv2d(it, l1wt, stride=2) #output
print(output1)
print(output2)
I expected output1 and output2 to be the same, but they are not. Am I doing something wrong? Do nn and nn.functional work differently?
Answer 0 (score: 1)
I think you forgot the bias.
import torch
from torch import nn

inp = torch.rand(3, 3, 5, 5)
a = nn.Conv2d(3, 2, 3, stride=2)
a(inp)
nn.functional.conv2d(inp, a.weight.data, bias=a.bias.data, stride=2)  # same stride as the layer
I get the same result this way.
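A minimal self-contained sketch, assuming the question's original setup (double precision, stride=2), that checks the two outputs match with torch.allclose:

import numpy as np
import torch
from torch import nn

l1 = nn.Conv2d(3, 2, kernel_size=3, stride=2).double()
it = torch.from_numpy(np.random.rand(3, 3, 5, 5))  # double-precision input, as in the question
out_module = l1(it)
out_functional = nn.functional.conv2d(it, l1.weight, bias=l1.bias, stride=2)  # pass weight, bias and stride
print(torch.allclose(out_module, out_functional))  # True: both compute the same convolution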
Answer 1 (score: 0)
As @Coolness mentioned, the functional version applies no bias unless you pass one (bias defaults to None).
Documentation references:
https://pytorch.org/docs/stable/nn.html#conv2d
https://pytorch.org/docs/stable/nn.functional.html#conv2d
import torch
from torch import nn
import numpy as np
# Bias Off
l1 = nn.Conv2d(3, 2, kernel_size=3, stride=1, bias=False).double() #Layer
l1wt = l1.weight.data #filter
inputs = np.random.rand(3, 3, 5, 5) #input
it = torch.from_numpy(inputs) #input tensor
it1 = it.clone() # copy of the input to verify it is not modified in place
output1 = l1(it) #output
output2 = torch.nn.functional.conv2d(it, l1wt, stride=1) #output
print(torch.equal(it, it1)) # True: the input tensor is unchanged
print(output1)
print(output2)
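Conversely, a minimal sketch (continuing from the snippet above) that keeps the default bias and passes it to the functional call explicitly, so the two versions again agree:

# Bias on (the default); hand the layer's bias to the functional API
l2 = nn.Conv2d(3, 2, kernel_size=3, stride=1).double()
output3 = l2(it)
output4 = torch.nn.functional.conv2d(it, l2.weight, bias=l2.bias, stride=1)
print(torch.allclose(output3, output4))  # True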