Consider the following network:
%%time
import torch
from torch.autograd import grad
import torch.nn as nn
import torch.optim as optim

class net_x(nn.Module):
    def __init__(self):
        super(net_x, self).__init__()
        self.fc1 = nn.Linear(1, 20)
        self.fc2 = nn.Linear(20, 20)
        self.out = nn.Linear(20, 400)  # a, b, c, d

    def forward(self, x):
        x = torch.tanh(self.fc1(x))
        x = torch.tanh(self.fc2(x))
        x = self.out(x)
        return x

nx = net_x()

# input
val = 100
t = torch.rand(val, requires_grad=True)  # input vector
t = torch.reshape(t, (val, 1))           # reshape for batch

# method
dx = torch.autograd.functional.jacobian(lambda t_: nx(t_), t)
This outputs:
CPU times: user 11.1 s, sys: 3.52 ms, total: 11.1 s
Wall time: 11.1 s
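For reference, since the whole batch is passed as a single function here, the Jacobian computed above has shape (val, 400, val, 1), i.e. it also includes all the zero cross-sample blocks, and torch.autograd.functional.jacobian evaluates it with one backward pass per output element by default (newer PyTorch versions also expose an experimental vectorize=True flag that batches these passes). A quick sanity check, reusing nx, t and dx from above:

# Shape check: output dims (100, 400) x input dims (100, 1)
print(dx.shape)                     # torch.Size([100, 400, 100, 1])

# Cross-sample block: sample 0's output never depends on sample 1's input
print(dx[0, :, 1, 0].abs().max())   # tensor(0.)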
However, when I instead use the GPU with .to(device):
%%time
import torch
from torch.autograd import grad
import torch.nn as nn
import torch.optim as optim

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

class net_x(nn.Module):
    def __init__(self):
        super(net_x, self).__init__()
        self.fc1 = nn.Linear(1, 20)
        self.fc2 = nn.Linear(20, 20)
        self.out = nn.Linear(20, 400)  # a, b, c, d

    def forward(self, x):
        x = torch.tanh(self.fc1(x))
        x = torch.tanh(self.fc2(x))
        x = self.out(x)
        return x

nx = net_x()
nx.to(device)

# input
val = 100
t = torch.rand(val, requires_grad=True)    # input vector
t = torch.reshape(t, (val, 1)).to(device)  # reshape for batch and move to GPU

# method
dx = torch.autograd.functional.jacobian(lambda t_: nx(t_), t)
the output is:
CPU times: user 18.6 s, sys: 1.5 s, total: 20.1 s
Wall time: 19.5 s
Update 1: timing the step of moving the input and the model to the device:
%%time
nx.to(device)
t = t.to(device)  # note: .to() returns a new tensor, so assign it back
Output:
CPU times: user 2.05 ms, sys: 0 ns, total: 2.05 ms
Wall time: 2.13 ms
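One timing caveat worth noting: CUDA kernels execute asynchronously, so a cell timer can return before the GPU work has actually finished. A minimal sketch of a synchronized measurement (assuming nx and t are already on the GPU):

import time

torch.cuda.synchronize()  # wait for any pending GPU work before starting the clock
start = time.time()
dx = torch.autograd.functional.jacobian(lambda t_: nx(t_), t)
torch.cuda.synchronize()  # wait for the jacobian kernels to finish
print(f"GPU wall time: {time.time() - start:.2f} s")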
Update 2:
It looks like @Gulzar was right. I changed the batch size to 1000 (val=1000), and the CPU output is:
Wall time: 8min 44s
while the GPU output is:
Wall time: 3min 12s
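A sketch of how this comparison can be run systematically across batch sizes (the bench helper is hypothetical, and a CUDA device is assumed to be available):

import time
import torch

def bench(model, batch_size, device):
    """Time one full Jacobian computation for a given batch size and device."""
    model = model.to(device)
    t = torch.rand(batch_size, 1, device=device, requires_grad=True)
    if device.type == 'cuda':
        torch.cuda.synchronize()
    start = time.time()
    torch.autograd.functional.jacobian(lambda t_: model(t_), t)
    if device.type == 'cuda':
        torch.cuda.synchronize()
    return time.time() - start

for val in (100, 1000):
    cpu_s = bench(net_x(), val, torch.device('cpu'))
    gpu_s = bench(net_x(), val, torch.device('cuda'))
    print(f"val={val}: CPU {cpu_s:.1f} s, GPU {gpu_s:.1f} s")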
Answer 0 (score: 3):
A GPU is a "weaker" computer, but one with many more compute cores than a CPU.
Data has to be shipped from RAM to GPU memory (GRAM) every so often, in an "expensive" transfer, for those cores to process it.
If the data is "large" and the processing can be parallelized across it, the computation is likely to be faster there.
If the data is not "big enough", the cost of transferring it, or the cost of using the weaker cores and synchronizing them, can outweigh the benefit of parallelization.
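The transfer cost is easy to observe in isolation. A minimal sketch, with arbitrary sizes, comparing the RAM-to-GRAM copy against the large parallel operation it enables:

import time
import torch

torch.ones(1, device='cuda')  # warm-up: initialize the CUDA context first
x = torch.rand(4096, 4096)

start = time.time()
x_gpu = x.to('cuda')
torch.cuda.synchronize()
print(f"RAM -> GRAM copy: {time.time() - start:.4f} s")

start = time.time()
y = x_gpu @ x_gpu  # large, highly parallel op: worth the transfer
torch.cuda.synchronize()
print(f"matmul on GPU:    {time.time() - start:.4f} s")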
So when is a GPU useful?