I am trying to run the code from this repo. I disabled CUDA by changing lines 39/40 in main.py from
parser.add_argument('--type', default='torch.cuda.FloatTensor', help='type of tensor - e.g torch.cuda.HalfTensor')
to
parser.add_argument('--type', default='torch.FloatTensor', help='type of tensor - e.g torch.HalfTensor')
Despite this, running the code gives me the following exception:
Traceback (most recent call last):
File "main.py", line 190, in <module>
main()
File "main.py", line 178, in main
model, train_data, training=True, optimizer=optimizer)
File "main.py", line 135, in forward
for i, (imgs, (captions, lengths)) in enumerate(data):
File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 201, in __next__
return self._process_next_batch(batch)
File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 221, in _process_next_batch
raise batch.exc_type(batch.exc_msg)
AssertionError: Traceback (most recent call last):
File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 62, in _pin_memory_loop
batch = pin_memory_batch(batch)
File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 123, in pin_memory_batch
return [pin_memory_batch(sample) for sample in batch]
File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 123, in <listcomp>
return [pin_memory_batch(sample) for sample in batch]
File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 117, in pin_memory_batch
return batch.pin_memory()
File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/tensor.py", line 82, in pin_memory
return type(self)().set_(storage.pin_memory()).view_as(self)
File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/storage.py", line 83, in pin_memory
allocator = torch.cuda._host_allocator()
File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/cuda/__init__.py", line 220, in _host_allocator
_lazy_init()
File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/cuda/__init__.py", line 84, in _lazy_init
_check_driver()
File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/cuda/__init__.py", line 51, in _check_driver
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
I spent some time looking through the issues on the PyTorch GitHub, to no avail. Please help?
Answer 0 (score: 3)
On macOS, removing .cuda() worked for me.
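Rather than deleting .cuda() calls outright, a common pattern is to select the device once and use it everywhere, so the same script runs both on CUDA machines and on a Mac. Here is a minimal sketch; pick_device is a hypothetical helper, and in real code the flag would come from torch.cuda.is_available():

```python
# Hypothetical helper: choose the torch device string based on whether
# CUDA is actually available, instead of hard-coding .cuda().
def pick_device(cuda_available: bool) -> str:
    """Return "cuda" when a CUDA build/GPU is present, else "cpu"."""
    return "cuda" if cuda_available else "cpu"

# With real PyTorch this would be used as:
#   device = torch.device(pick_device(torch.cuda.is_available()))
#   model = model.to(device)
print(pick_device(False))  # on a Mac without CUDA -> cpu
```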
Answer 1 (score: 2)
If you look at the data.py file, you can see the function:
def get_iterator(data, batch_size=32, max_length=30, shuffle=True, num_workers=4, pin_memory=True):
    cap, vocab = data
    return torch.utils.data.DataLoader(
        cap,
        batch_size=batch_size, shuffle=shuffle,
        collate_fn=create_batches(vocab, max_length),
        num_workers=num_workers, pin_memory=pin_memory)
which is called twice in main.py to get iterators for the train and dev data. If you look at the DataLoader class in PyTorch, it has a parameter:
pin_memory (bool, optional) - If True, the data loader will copy tensors into CUDA pinned memory before returning them.
which defaults to True in the get_iterator function. That is why you get this error. You can simply pass pin_memory=False when you call get_iterator:
train_data = get_iterator(get_coco_data(vocab, train=True),
                          batch_size=args.batch_size,
                          ...,
                          ...,
                          ...,
                          pin_memory=False)
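Instead of hard-coding pin_memory=False, you could also derive it from CUDA availability, so memory is still pinned on GPU machines. A small sketch, where loader_kwargs is a hypothetical helper and the flag would really come from torch.cuda.is_available():

```python
# Hypothetical helper: build DataLoader keyword arguments, pinning memory
# only when CUDA exists (pinned memory only speeds up host-to-GPU copies).
def loader_kwargs(cuda_available: bool, batch_size: int = 32) -> dict:
    return {
        "batch_size": batch_size,
        "shuffle": True,
        "num_workers": 4,
        "pin_memory": cuda_available,  # False on CPU-only machines
    }

# With real PyTorch:
#   torch.utils.data.DataLoader(cap, **loader_kwargs(torch.cuda.is_available()))
print(loader_kwargs(False)["pin_memory"])  # -> False
```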
Answer 2 (score: 1)
So I was on a Mac, trying to create a neural network with CUDA, like this:
net = nn.Sequential(
    nn.Linear(28*28, 100),
    nn.ReLU(),
    nn.Linear(100, 100),
    nn.ReLU(),
    nn.Linear(100, 10),
    nn.LogSoftmax()
).cuda()
My mistake was trying to create the network with .cuda() when my Mac has no CUDA.
So if anyone runs into the same problem, just remove .cuda() and your code will work.
Edit:
Without CUDA you cannot do GPU computation. Unfortunately, for people with Intel integrated graphics, CUDA cannot be installed, because it is only compatible with NVIDIA GPUs.
If you have an NVIDIA graphics card, CUDA may already be installed on your system; if not, you can install it.
You could buy an external GPU compatible with your computer, but that alone costs around $300, not to mention connectivity issues.
Otherwise, you can use:
Google Colab, Kaggle kernels (free)
AWS, GCP (free credits), PaperSpace (paid)
Answer 3 (score: 0)
In my case, I had not installed a CUDA-enabled PyTorch in my Anaconda environment. Note that you need a CUDA-enabled GPU for this to work.
Follow this link to install PyTorch for the specific CUDA version you have: https://pytorch.org/get-started/locally/
In my case, I installed the following version:
conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
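After reinstalling, it helps to verify CUDA support up front and fail with a clear message instead of hitting the AssertionError mid-training. A minimal sketch, where check_cuda is a hypothetical helper and the flag would really be torch.cuda.is_available():

```python
# Hypothetical post-install sanity check: confirm the CUDA build works
# before starting training, with an actionable error message otherwise.
def check_cuda(available: bool) -> str:
    if available:
        return "cuda build OK"
    raise RuntimeError(
        "Torch not compiled with CUDA enabled - reinstall a CUDA build, e.g. "
        "'conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch'")

print(check_cuda(True))  # -> cuda build OK
```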