Pytorch: running on the GPU: RuntimeError: Expected object of backend CUDA but got backend CPU for argument #3 'index'

Time: 2019-05-16 18:07:21

Tags: pytorch

How do I run a model on the GPU in Pytorch?

I load the model like this:

def load_tocotron_2_model():
    model = Tacotron2(num_chars=n_chars, r=CONFIG.r, attn_win=CONFIG.windowing, attn_norm=CONFIG.attention_norm,
                      prenet_type=CONFIG.prenet_type, forward_attn=CONFIG.use_forward_attn,
                      trans_agent=CONFIG.transition_agent)

    if use_cuda:
        cp = torch.load(MODEL_PATH)  # keep checkpoint tensors on the device they were saved from
    else:
        cp = torch.load(MODEL_PATH, map_location='cpu')  # remap checkpoint tensors to the CPU

    model.load_state_dict(cp['model'])

    if use_cuda:
        model.cuda()  # move the model's parameters and buffers to the GPU

    model.eval()  # set eval mode

    return model
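
The model is then used for inference roughly like this (a simplified sketch; the tensor values and shape are placeholders, but the inference(text) call matches the traceback below):

import torch

model = load_tocotron_2_model()

# Character indices for the input text; a plain torch tensor lives on the CPU by default
# (placeholder ids here, the real ones come from the text preprocessing step)
text = torch.tensor([[10, 23, 7, 42, 5]], dtype=torch.long)  # shape (1, T)

# Tacotron2.inference() eventually calls self.embedding(text)
outputs = model.inference(text)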

If use_cuda is False, the model runs fine, but if it is True I get this error:

  File "/data/user/external_projects/text-to-speech/mozila-tts-wavernn-cpu/TTS/models/tacotron2.py", line 50, in inference
    embedded_inputs = self.embedding(text).transpose(1, 2)
  File "/data/user/external_projects/text-to-speech/mozila-tts-wavernn-cpu/TTS/my_env_gpu/lib/python3.5/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/data/user/external_projects/text-to-speech/mozila-tts-wavernn-cpu/TTS/my_env_gpu/lib/python3.5/site-packages/torch/nn/modules/sparse.py", line 117, in forward
    self.norm_type, self.scale_grad_by_freq, self.sparse)
  File "/data/user/external_projects/text-to-speech/mozila-tts-wavernn-cpu/TTS/my_env_gpu/lib/python3.5/site-packages/torch/nn/functional.py", line 1506, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected object of backend CUDA but got backend CPU for argument #3 'index'

In the model, self.embedding = nn.Embedding(num_chars, 512). Do I need to call .cuda() on some tensor explicitly, or is model.cuda() enough?
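
In other words, would the input tensor have to be moved explicitly, something like this (a minimal sketch, assuming the input index tensor is called text)?

if use_cuda:
    text = text.cuda()  # would the input indices need an explicit .cuda() like this?
outputs = model.inference(text)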

0 Answers:

There are no answers yet.