How do I run a model on the GPU in PyTorch?
I load the model like this:
def load_tocotron_2_model():
    model = Tacotron2(num_chars=n_chars, r=CONFIG.r, attn_win=CONFIG.windowing,
                      attn_norm=CONFIG.attention_norm, prenet_type=CONFIG.prenet_type,
                      forward_attn=CONFIG.use_forward_attn,
                      trans_agent=CONFIG.transition_agent)
    if use_cuda:
        cp = torch.load(MODEL_PATH)
    else:
        cp = torch.load(MODEL_PATH, map_location='cpu')
    model.load_state_dict(cp['model'])
    if use_cuda:
        model.cuda()
    model.eval()  # Set eval mode
    return model
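For completeness, use_cuda is just a boolean flag in my script; I set it roughly like this (simplified, the exact source of the flag doesn't matter here):

import torch

# Simplified: this flag decides whether the checkpoint and model go to the GPU.
use_cuda = torch.cuda.is_available()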
If use_cuda is False, the model runs fine, but if it is True I get this error:
File "/data/user/external_projects/text-to-speech/mozila-tts-wavernn-cpu/TTS/models/tacotron2.py", line 50, in inference
embedded_inputs = self.embedding(text).transpose(1, 2)
File "/data/user/external_projects/text-to-speech/mozila-tts-wavernn-cpu/TTS/my_env_gpu/lib/python3.5/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/data/user/external_projects/text-to-speech/mozila-tts-wavernn-cpu/TTS/my_env_gpu/lib/python3.5/site-packages/torch/nn/modules/sparse.py", line 117, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/data/user/external_projects/text-to-speech/mozila-tts-wavernn-cpu/TTS/my_env_gpu/lib/python3.5/site-packages/torch/nn/functional.py", line 1506, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected object of backend CUDA but got backend CPU for argument #3 'index'
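As far as I can tell, the same mismatch shows up with a plain nn.Embedding when only the module is moved to the GPU (this is my own minimal sketch, not the Tacotron2 code):

import torch
import torch.nn as nn

# Minimal sketch: embedding weights on the GPU, index tensor still on the CPU.
emb = nn.Embedding(100, 512).cuda()   # parameters now live on the GPU
idx = torch.tensor([[1, 2, 3]])       # indices are created on the CPU by default
# emb(idx)                            # fails with the same CUDA/CPU backend error
out = emb(idx.cuda())                 # moving the indices to the GPU makes it work
print(out.shape)                      # torch.Size([1, 3, 512])

So my guess is that the text tensor I pass to inference is staying on the CPU.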
In the model, self.embedding is defined as self.embedding = nn.Embedding(num_chars, 512). Do I need to call .cuda() on some tensor explicitly, or should model.cuda() be enough?
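For reference, this is roughly how I run inference (the preprocessing below is a placeholder, not the real TTS text-to-sequence code); would moving the input tensor like this be the right fix?

import torch

model = load_tocotron_2_model()
sequence = [12, 5, 33, 7]                              # placeholder character ids for my input text
text_tensor = torch.LongTensor(sequence).unsqueeze(0)  # shape (1, seq_len), created on the CPU
if use_cuda:
    text_tensor = text_tensor.cuda()                   # is this the step I'm missing?
outputs = model.inference(text_tensor)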