Hi, I would like to know how to run my machine learning code on the CPU instead of the GPU.
I tried setting GPU to False in the settings file, but that did not fix it.
### global settings
GPU = False # running on GPU is highly suggested
CLEAN = False # set to "True" if you want to clean the temporary large files after generating result
APP = "classification" # Do not change! mode choide: "classification", "imagecap", "vqa". Currently "imagecap" and "vqa" are not supported.
CATAGORIES = ["object", "part"] # Do not change! concept categories that are chosen to detect: "object", "part", "scene", "material", "texture", "color"
map_location='cpu'
CAM_THRESHOLD = 0.5 # the threshold used for CAM visualization
FONT_PATH = "components/font.ttc" # font file path
FONT_SIZE = 26 # font size
SEG_RESOLUTION = 7 # the resolution of cam map
BASIS_NUM = 7
Traceback (most recent call last):
File "test.py", line 22, in <module>
model = loadmodel()
File "/home/joshuayun/Desktop/IBD/loader/model_loader.py", line 44, in loadmodel
checkpoint = torch.load(settings.MODEL_FILE)
File "/home/joshuayun/.local/lib/python3.6/site-packages/torch/serialization.py", line 387, in load
return _load(f, map_location, pickle_module, **pickle_load_args)
File "/home/joshuayun/.local/lib/python3.6/site-packages/torch/serialization.py", line 574, in _load
result = unpickler.load()
File "/home/joshuayun/.local/lib/python3.6/site-packages/torch/serialization.py", line 537, in persistent_load
deserialized_objects[root_key] = restore_location(obj, location)
File "/home/joshuayun/.local/lib/python3.6/site-packages/torch/serialization.py", line 119, in default_restore_location
result = fn(storage, location)
File "/home/joshuayun/.local/lib/python3.6/site-packages/torch/serialization.py", line 95, in _cuda_deserialize
device = validate_cuda_device(location)
File "/home/joshuayun/.local/lib/python3.6/site-packages/torch/serialization.py", line 79, in validate_cuda_device
raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location='cpu' to map your storages to the CPU.
Answer 0: (score 0)
If your model extends nn.Module, you can move the whole model to the CPU or the GPU:
device = torch.device("cuda")
model.to(device)
# or
device = torch.device("cpu")
model.to(device)
If you only want to move a tensor:
x = torch.Tensor(10).cuda()
# or
x = torch.Tensor(10).cpu()
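A common device-agnostic variant, as a minimal sketch that is not part of the original answer, falls back to the CPU whenever CUDA is unavailable:
# pick the GPU only when one is actually usable, otherwise stay on the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
x = torch.Tensor(10).to(device)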
I hope this helps.
Answer 1: (score 0)
If I'm not mistaken, you are getting the error above at the line model = loadmodel(). I don't know what you are doing inside loadmodel(), but you can try the following:
Make sure defaults.device is set to cpu. To be completely sure, add torch.cuda.set_device('cpu').
Change torch.load(model_weights) to torch.load(model_weights, map_location=torch.device('cpu')).
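Applied to the torch.load call from the traceback, the change could look roughly like the sketch below; everything around that call is assumed, since the actual body of loadmodel() is not shown in the question:
import torch
import settings  # the module that defines MODEL_FILE in the traceback

def loadmodel():
    # map_location remaps storages that were saved on a CUDA device onto the CPU,
    # so the checkpoint can be deserialized on a machine without a GPU
    checkpoint = torch.load(settings.MODEL_FILE, map_location=torch.device('cpu'))
    return checkpoint  # rebuilding the actual model from the checkpoint is omitted here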