Can't send PyTorch tensor to CUDA

Time: 2019-01-06 10:11:22

Tags: gpu pytorch

I create a torch tensor and I want it to go to the GPU, but it doesn't. What's wrong?

==================================================================================================== FAILURES =====================================================================================================
_____________________________________________________________________________________________ test_model_works_on_gpu _____________________________________________________________________________________________

    def test_model_works_on_gpu():
        with torch.cuda.device(0) as cuda:
            some_random_d_model = 2 ** 9
            five_sentences_of_twenty_words = torch.from_numpy(np.random.random((5, 20, T * d))).float()
            five_sentences_of_twenty_words_mask = torch.from_numpy(np.ones((5, 1, 20))).float()
            pytorch_model = make_sentence_model(d_model=some_random_d_model, T_sgnn=T, d_sgnn=d)

            five_sentences_of_twenty_words.to(cuda)
            five_sentences_of_twenty_words_mask.to(cuda)
            print(type(five_sentences_of_twenty_words), type(five_sentences_of_twenty_words_mask))
            print(five_sentences_of_twenty_words.is_cuda, five_sentences_of_twenty_words_mask.is_cuda)
            pytorch_model.to(cuda)
            output_before_match = pytorch_model(five_sentences_of_twenty_words, five_sentences_of_twenty_words_mask)

            assert output_before_match.shape == (5, some_random_d_model)
            print(type(output_before_match))
>           print(output_before_match.is_cuda, output_before_match.get_device())
E           RuntimeError: get_device is not implemented for tensors with CPU backend

tests/test_model.py:58: RuntimeError
---------------------------------------------------------------------------------------------- Captured stdout call -----------------------------------------------------------------------------------------------
<class 'torch.Tensor'> <class 'torch.Tensor'>
False False
<class 'torch.Tensor'>

Also:

>>> torch.cuda.is_available()
True
>>> torch.cuda.device_count()
2

And:

pip freeze | grep -i torch
torch==1.0.0
torchvision==0.2.1

3 Answers:

Answer 0 (score: 1)

Your problem is in these lines:

five_sentences_of_twenty_words.to(cuda)
five_sentences_of_twenty_words_mask.to(cuda)

.to(device) only works in place when applied to a model.

When applied to a tensor, it returns a new tensor, so the result must be assigned back:

five_sentences_of_twenty_words = five_sentences_of_twenty_words.to(cuda)
five_sentences_of_twenty_words_mask = five_sentences_of_twenty_words_mask.to(cuda)
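Here is a minimal sketch of the distinction (illustrative variable names; assumes a CUDA device is available):

import torch
import torch.nn as nn

device = torch.device("cuda:0")

t = torch.zeros(3)
t.to(device)                          # returns a new tensor; t itself stays on CPU
print(t.is_cuda)                      # False
t = t.to(device)                      # reassign to keep the GPU copy
print(t.is_cuda)                      # True

m = nn.Linear(3, 3)
m.to(device)                          # a module's parameters are moved in place
print(next(m.parameters()).is_cuda)   # True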

Answer 1 (score: 0)

This solved it:

def test_model_works_on_gpu():
    device_id = 0
    with torch.cuda.device(device_id) as cuda:
        some_random_d_model = 2 ** 9
        five_sentences_of_twenty_words = torch.from_numpy(np.random.random((5, 20, T * d))).float()
        five_sentences_of_twenty_words_mask = torch.from_numpy(np.ones((5, 1, 20))).float()
        pytorch_model = make_sentence_model(d_model=some_random_d_model, T_sgnn=T, d_sgnn=d)

        # .cuda() returns a new tensor, so the result must be reassigned
        five_sentences_of_twenty_words = five_sentences_of_twenty_words.cuda(device_id)
        five_sentences_of_twenty_words_mask = five_sentences_of_twenty_words_mask.cuda(device_id)
        print(type(five_sentences_of_twenty_words), type(five_sentences_of_twenty_words_mask))
        print(five_sentences_of_twenty_words.is_cuda, five_sentences_of_twenty_words_mask.is_cuda)
        pytorch_model = pytorch_model.cuda(device_id)
        output_before_match = pytorch_model(five_sentences_of_twenty_words, five_sentences_of_twenty_words_mask)

        assert output_before_match.shape == (5, some_random_d_model)
        print(type(output_before_match))
        print(output_before_match.is_cuda, output_before_match.get_device())
        assert output_before_match.is_cuda
        assert five_sentences_of_twenty_words.is_cuda
        assert five_sentences_of_twenty_words_mask.is_cuda

I also used the add_module method in my modules: https://discuss.pytorch.org/t/when-to-use-add-module-function/10534
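For context, a hedged sketch of why add_module matters here (SentenceEncoder and its layer names are made up for illustration): submodules kept in a plain Python list are not registered on the parent module, so model.cuda() / model.to(device) will not move their parameters; add_module (or nn.ModuleList) registers them so they follow the model to the GPU.

import torch.nn as nn

class SentenceEncoder(nn.Module):
    def __init__(self, n_layers, d_model):
        super().__init__()
        # add_module registers each layer as a child of this module,
        # so model.cuda() and model.to(device) also move these weights.
        for i in range(n_layers):
            self.add_module("layer_{}".format(i), nn.Linear(d_model, d_model))

    def forward(self, x):
        # self.children() iterates over the registered submodules in order
        for layer in self.children():
            x = layer(x)
        return x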

Answer 2 (score: 0)

To convert a CPU tensor to a GPU tensor, simply do:

cpuTensor = cpuTensor.cuda()

This will take the tensor to the default GPU device. If you have more than one GPU device, you can also pass a device_id, as shown below.
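A sketch assuming the target is the GPU at index 0:

cpuTensor = cpuTensor.cuda(0)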