I tried to install CUDA on Windows 10 so I could train a neural network on my GPU (an NVIDIA GeForce 710), but I get the following error when I try to load the initial model.
This is the code I am running:
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
device

"""# 1. Create the classifier"""

# `Flatten` is a custom module defined earlier in the notebook
# (PyTorch 1.0.1 has no built-in nn.Flatten).
C = nn.Sequential(Flatten(), nn.Linear(784, 200), nn.ReLU(),
                  nn.Linear(200, 100), nn.ReLU(),
                  nn.Linear(100, 100), nn.ReLU(),
                  nn.Linear(100, 10))

"""Upload the trained model"""
C.load_state_dict(torch.load("C.pt", map_location='cuda'))
This is the error I get:
C.load_state_dict(torch.load("C.pt",map_location='cuda'))
Traceback (most recent call last):
  File "<ipython-input-2-358f76f483ed>", line 1, in <module>
    C.load_state_dict(torch.load("C.pt",map_location='cuda'))
  File "C:\Users\usuario\anaconda3\lib\site-packages\torch\serialization.py", line 368, in load
    return _load(f, map_location, pickle_module)
  File "C:\Users\usuario\anaconda3\lib\site-packages\torch\serialization.py", line 542, in _load
    result = unpickler.load()
  File "C:\Users\usuario\anaconda3\lib\site-packages\torch\serialization.py", line 505, in persistent_load
    data_type(size), location)
  File "C:\Users\usuario\anaconda3\lib\site-packages\torch\serialization.py", line 385, in restore_location
    return default_restore_location(storage, map_location)
  File "C:\Users\usuario\anaconda3\lib\site-packages\torch\serialization.py", line 114, in default_restore_location
    result = fn(storage, location)
  File "C:\Users\usuario\anaconda3\lib\site-packages\torch\serialization.py", line 96, in _cuda_deserialize
    return obj.cuda(device)
  File "C:\Users\usuario\anaconda3\lib\site-packages\torch\_utils.py", line 68, in _cuda
    with torch.cuda.device(device):
  File "C:\Users\usuario\anaconda3\lib\site-packages\torch\cuda\__init__.py", line 229, in __enter__
    _lazy_init()
  File "C:\Users\usuario\anaconda3\lib\site-packages\torch\cuda\__init__.py", line 162, in _lazy_init
    torch._C._cuda_init()
RuntimeError: cuda runtime error (30) : unknown error at ..\aten\src\THC\THCGeneral.cpp:87
I already have cuDNN installed. These are the versions I am using:
Python 3.7.6
CUDA 8.0
PyTorch 1.0.1
Answer 1 (score: 0)
There is no need to restart the PC. Just copy these commands, paste them into a terminal, and the error will be gone (a quick Python check is sketched after the commands).
sudo rmmod nvidia_uvm
sudo rmmod nvidia
sudo modprobe nvidia
sudo modprobe nvidia_uvm
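After reloading the modules, something along these lines can confirm from Python that the CUDA runtime initializes again. This is a minimal sketch; the "C.pt" filename and the 'cuda'/'cpu' fallback simply mirror the question.

import torch

# If the driver modules came back cleanly, this should report True
# instead of raising "cuda runtime error (30)".
print(torch.cuda.is_available())
print(torch.version.cuda)  # CUDA version this PyTorch build was compiled against

# Fall back to the CPU so the checkpoint still loads even if CUDA stays unusable.
device = "cuda" if torch.cuda.is_available() else "cpu"
state_dict = torch.load("C.pt", map_location=device)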