I am trying to convert my inputs and my deep-learning model to float16, since I am using a T4 GPU and they run faster on fp16. Here is part of the code: I first have my model, then I make a dummy data point, mainly to sort out the data conversion first (I also ran it on the whole batch and got the same error).
model = CRNN().to(device)
model = model.type(torch.cuda.HalfTensor)
data_recon = torch.from_numpy(data_recon)
data_truth = torch.from_numpy(data_truth)
dummy = data_recon[0:1,:,:,:,:] # Gets just one batch
dummy = dummy.to(device)
dummy = dummy.type(torch.cuda.HalfTensor)
model(dummy)
Here is the error I get:
> ---------------------------------------------------------------------------
RuntimeError Traceback (most recent call
> last) <ipython-input-27-1fe8ecc524aa> in <module>
> ----> 1 model(dummy)
>
> /opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py
> in __call__(self, *input, **kwargs)
> 491 result = self._slow_forward(*input, **kwargs)
> 492 else:
> --> 493 result = self.forward(*input, **kwargs)
> 494 for hook in self._forward_hooks.values():
> 495 hook_result = hook(self, input, result)
>
> <ipython-input-12-06f39f9304a1> in forward(self, inputs, test)
> 57
> 58 net['t%d_x0'%(i-1)] = net['t%d_x0'%(i-1)].view(times, batch, self.filter_size, width,
> height)
> ---> 59 net['t%d_x0'%i] = self.bcrnn(inputs, net['t%d_x0'%(i-1)], test)
> 60 net['t%d_x0'%i] = net['t%d_x0'%i].view(-1, self.filter_size, width, height)
> 61
>
> /opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py
> in __call__(self, *input, **kwargs)
> 491 result = self._slow_forward(*input, **kwargs)
> 492 else:
> --> 493 result = self.forward(*input, **kwargs)
> 494 for hook in self._forward_hooks.values():
> 495 hook_result = hook(self, input, result)
>
> <ipython-input-11-b687949e9ce5> in forward(self, inputs,
> input_iteration, test)
> 31 hidden = initial_hidden
> 32 for i in range(times):
> ---> 33 hidden = self.CRNN(inputs[i], input_iteration[i], hidden)
> 34 output_forward.append(hidden)
> 35 output_forward = torch.cat(output_forward)
>
> /opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py
> in __call__(self, *input, **kwargs)
> 491 result = self._slow_forward(*input, **kwargs)
> 492 else:
> --> 493 result = self.forward(*input, **kwargs)
> 494 for hook in self._forward_hooks.values():
> 495 hook_result = hook(self, input, result)
>
> <ipython-input-10-15c0b221226b> in forward(self, inputs,
> hidden_iteration, hidden)
> 23 def forward(self, inputs, hidden_iteration, hidden):
> 24 in_to_hid = self.i2h(inputs)
> ---> 25 hid_to_hid = self.h2h(hidden)
> 26 ih_to_ih = self.ih2ih(hidden_iteration)
> 27
>
> /opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py
> in __call__(self, *input, **kwargs)
> 491 result = self._slow_forward(*input, **kwargs)
> 492 else:
> --> 493 result = self.forward(*input, **kwargs)
> 494 for hook in self._forward_hooks.values():
> 495 hook_result = hook(self, input, result)
>
> /opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/conv.py in
> forward(self, input)
> 336 _pair(0), self.dilation, self.groups)
> 337 return F.conv2d(input, self.weight, self.bias, self.stride,
> --> 338 self.padding, self.dilation, self.groups)
> 339
> 340
>
> RuntimeError: Input type (torch.cuda.FloatTensor) and weight type
> (torch.cuda.HalfTensor) should be the same
Answer 0 (score: 1)
Look at your implementation of CRNN. My guess is that you are storing the "hidden" state tensor inside the model, but not as a "buffer", only as a regular tensor. Therefore, when the model is cast to float16 the hidden state remains float32 and causes this error.
Try storing the hidden state as a registered buffer in the module (see register_buffer for more information).
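A minimal sketch of the difference (hypothetical module and attribute names, not the actual CRNN code): a tensor stored as a plain attribute is not touched by `.half()`, while a tensor registered as a buffer is converted along with the module's parameters.

```python
import torch
import torch.nn as nn

class WithPlainTensor(nn.Module):
    def __init__(self):
        super().__init__()
        # Plain attribute: invisible to .half()/.to(), stays float32
        self.hidden = torch.zeros(1, 4)

class WithBuffer(nn.Module):
    def __init__(self):
        super().__init__()
        # Registered buffer: tracked by the module, converted by .half()/.to()
        self.register_buffer("hidden", torch.zeros(1, 4))

plain = WithPlainTensor().half()
buffered = WithBuffer().half()
print(plain.hidden.dtype)     # torch.float32 -- unchanged, causes the dtype mismatch
print(buffered.hidden.dtype)  # torch.float16 -- converted with the module
```

A buffer also gets saved in the `state_dict` and moved with `.to(device)`, which is usually what you want for persistent state like a hidden tensor.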
Alternatively, you can explicitly cast any member tensors in the module to float16 by overloading the model's .to() method.
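A sketch of that second option, under the assumption that the cell keeps its state as a plain attribute (the class and attribute names here are hypothetical): the overridden `.to()` forwards the conversion to the member tensor before delegating to the base implementation. Note this only covers `.to(...)` calls, not `.half()` or `.type(...)`.

```python
import torch
import torch.nn as nn

class Cell(nn.Module):
    def __init__(self):
        super().__init__()
        self.i2h = nn.Conv2d(2, 8, 3, padding=1)
        self.hidden = torch.zeros(1, 8, 4, 4)  # plain attribute, not a buffer

    def to(self, *args, **kwargs):
        # Explicitly convert the member tensor with the same arguments,
        # then let nn.Module.to handle parameters and buffers as usual.
        self.hidden = self.hidden.to(*args, **kwargs)
        return super().to(*args, **kwargs)

cell = Cell().to(torch.float16)
print(cell.hidden.dtype)  # torch.float16
```

Of the two options, `register_buffer` is the more idiomatic fix, since it works for every conversion method PyTorch provides instead of just the ones you remember to override.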