AssertionError: Torch not compiled with CUDA enabled

Date: 2019-01-02 22:52:37

Tags: pytorch

From https://pytorch.org/:

To install pytorch on MacOS, note the following:

conda install pytorch torchvision -c pytorch
# MacOS Binaries dont support CUDA, install from source if CUDA is needed

Why would pytorch be installed without CUDA enabled?

The reason I ask is that I am getting the error message:

---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
in ()
     78 # predicted = outputs.data.max(1)[1]
     79 
---> 80 output = model(torch.tensor([[1,1]]).float().cuda())
     81 predicted = output.data.max(1)[1]
     82 

~/anaconda3/lib/python3.6/site-packages/torch/cuda/__init__.py in _lazy_init()
    159         raise RuntimeError(
    160             "Cannot re-initialize CUDA in forked subprocess. " + msg)
--> 161     _check_driver()
    162     torch._C._cuda_init()
    163     _cudart = _load_cudart()

~/anaconda3/lib/python3.6/site-packages/torch/cuda/__init__.py in _check_driver()
     73 def _check_driver():
     74     if not hasattr(torch._C, '_cuda_isDriverSufficient'):
---> 75         raise AssertionError("Torch not compiled with CUDA enabled")
     76     if not torch._C._cuda_isDriverSufficient():
     77         if torch._C._cuda_getDriverVersion() == 0:

AssertionError: Torch not compiled with CUDA enabled

when trying to execute this code:

# Imports required by the snippet below
import torch
import torch.nn as nn
import torch.utils.data as data_utils

x = torch.tensor([[0,0] , [0,1] , [1,0]]).float()
print(x)

y = torch.tensor([0,1,1]).long()
print(y)

my_train = data_utils.TensorDataset(x, y)
my_train_loader = data_utils.DataLoader(my_train, batch_size=2, shuffle=True)

# Device configuration
device = 'cpu'
print(device)

# Hyper-parameters 
input_size = 2
hidden_size = 100
num_classes = 2
num_epochs = 100   # assumed value; not defined in the question's snippet
model_iters = 1    # assumed value; not defined in the question's snippet

learning_rate = 0.001

train_dataset = my_train

train_loader = my_train_loader

pred = []


for i in range(0 , model_iters) : 
    # Fully connected neural network with one hidden layer
    class NeuralNet(nn.Module):
        def __init__(self, input_size, hidden_size, num_classes):
            super(NeuralNet, self).__init__()
            self.fc1 = nn.Linear(input_size, hidden_size) 
            self.relu = nn.ReLU()
            self.fc2 = nn.Linear(hidden_size, num_classes)  

        def forward(self, x):
            out = self.fc1(x)
            out = self.relu(out)
            out = self.fc2(out)
            return out

    model = NeuralNet(input_size, hidden_size, num_classes).to(device)

    # Loss and optimizer
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)  

    # Train the model
    total_step = len(train_loader)
    for epoch in range(num_epochs):
        for i, (images, labels) in enumerate(train_loader):  
            # Move tensors to the configured device
            images = images.reshape(-1, 2).to(device)
            labels = labels.to(device)

            # Forward pass
            outputs = model(images)
            loss = criterion(outputs, labels)

            # Backward and optimize
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'.format(epoch+1, num_epochs, i+1, total_step, loss.item()))

    output = model(torch.tensor([[1,1]]).float().cuda())

To resolve this error, do I need to install pytorch from source, compiled with CUDA?

1 Answer:

Answer 0 (score: 5):

To summarize and expand on the comments:

  • CUDA is a proprietary Nvidia technology (apparently not licensed to other vendors) that allows general-purpose computing on GPUs.
  • Very few MacBook Pros have an Nvidia CUDA-capable GPU. Take a look here to see whether your MBP has an Nvidia GPU, then check the table here to see whether that GPU supports CUDA.
  • The same is true for the iMac, iMac Pro, and Mac Pro.
  • Consequently, PyTorch installs without CUDA support by default on MacOS; a quick way to confirm which build you have is shown after this list.
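A minimal sketch of that check (note that torch.cuda.is_available() returns False both when the binary was built without CUDA and when no usable Nvidia driver is present):

import torch

# True only if this PyTorch build was compiled with CUDA support
# and a working Nvidia GPU/driver is visible to it.
print(torch.cuda.is_available())

# None on a CPU-only build; otherwise the CUDA version it was built against.
print(torch.version.cuda)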

This PyTorch GitHub issue mentions that very few Macs have Nvidia processors: https://github.com/pytorch/pytorch/issues/30664

If your Mac does have a CUDA-capable GPU, then in order to use CUDA commands on MacOS you will need to recompile pytorch from source with the correct command-line options.
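Otherwise, if running on the CPU is acceptable, the line that raises the error can be made device-agnostic instead of calling .cuda() unconditionally. A minimal sketch, assuming the model and imports from the question:

import torch

# Use the GPU only when this build actually supports it; otherwise fall back to CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = model.to(device)  # 'model' as defined in the question

# Replaces model(torch.tensor([[1,1]]).float().cuda()) from the question.
output = model(torch.tensor([[1, 1]]).float().to(device))
predicted = output.data.max(1)[1]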