I just launched a Deep Learning AMI (Ubuntu 18.04) Version 27.0 (ami-0dbb717f493016a1a) on a g2.2xlarge instance (and paid for it). I activated the environment for PyTorch with Python3 (CUDA 10.1 and Intel MKL) with source activate pytorch_p36.
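As a quick sanity check (not part of the original post), you can confirm inside that environment which CUDA build of PyTorch is active and whether a GPU is visible at all:

import torch

print(torch.__version__)          # PyTorch version installed in pytorch_p36
print(torch.version.cuda)         # CUDA version this PyTorch build targets
print(torch.cuda.is_available())  # True if a usable GPU is detected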
When I run a PyTorch network, I see this warning:
/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/cuda/__init__.py:134: UserWarning:
Found GPU0 GRID K520 which is of cuda capability 3.0.
PyTorch no longer supports this GPU because it is too old.
The minimum cuda capability that we support is 3.5.
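You can query the compute capability this warning refers to directly; this is a minimal check added for illustration, not something from the original question:

import torch

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    # On a g2 instance the GRID K520 reports 3.0, below the 3.5 minimum in the warning.
    print(torch.cuda.get_device_name(0), f"compute capability {major}.{minor}")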
Is this true?
Here is the code I use to put the neural network on the GPU:
if torch.cuda.is_available():
    device = torch.device("cuda:0")  # you can continue going on here, like cuda:1, cuda:2, etc.
    print("Running on the GPU")
else:
    device = torch.device("cpu")
    print("Running on the CPU")

net = Net(image_height, image_width)
net.to(device)
Answer (score: 1)
I had to use a g3s.xlarge instance instead; I guess the g2 instances use GPUs that are too old for current PyTorch. I also had to follow this thread https://discuss.pytorch.org/t/oserror-errno-12-cannot-allocate-memory-but-memory-usage-is-actually-normal/56027 and set num_workers=0 on the DataLoader, as sketched below.
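For reference, the fix from that thread boils down to loading batches in the main process. This is a minimal, self-contained sketch; TensorDataset here is just a stand-in for whatever Dataset the real code uses:

import torch
from torch.utils.data import DataLoader, TensorDataset

# stand-in dataset: 100 random 3x32x32 "images" with integer labels
dataset = TensorDataset(torch.randn(100, 3, 32, 32), torch.randint(0, 10, (100,)))

# num_workers=0 keeps data loading in the main process, which avoids the
# "OSError: [Errno 12] Cannot allocate memory" raised by worker processes
loader = DataLoader(dataset, batch_size=32, shuffle=True, num_workers=0)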
Here is another PyTorch gotcha when moving tensors to a device: https://stackoverflow.com/a/51606286/3614578.
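The gotcha in that answer, as I understand it, is that .to(device) on a tensor returns a copy rather than moving the tensor in place, so the result must be reassigned (net.to(device) works in place because it is a Module). A minimal illustration with a made-up tensor x:

import torch

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

x = torch.randn(4, 3)
x.to(device)      # wrong: the returned copy is discarded, x stays where it was
x = x.to(device)  # right: rebind x to the tensor on the target device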