PyTorch: train and eval with different sample sizes

Asked: 2019-08-14 08:29:15

Tags: python pytorch

I am learning PyTorch and have set up the following (abbreviated) code for modeling:

import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader

# define the model class for a neural net with 1 hidden layer
class myNN(nn.Module):
    def __init__(self, D_in, H, D_out):
        super(myNN, self).__init__()
        self.lin1 = nn.Linear(D_in,H)
        self.lin2 = nn.Linear(H,D_out)
    def forward(self,X):
        return torch.sigmoid(self.lin2(torch.sigmoid(self.lin1(x))))

# now make the datasets & dataloaders
batchSize = 5
# Create the data class
class Data(Dataset):
    def __init__(self, x, y):
        self.x = torch.FloatTensor(x)
        self.y = torch.Tensor(y.astype(int))
        self.len = self.x.shape[0]
        self.p = self.x.shape[1]
    def __getitem__(self, index):      
        return self.x[index], self.y[index]
    def __len__(self):
        return self.len
trainData = Data(trnX, trnY)
trainLoad = DataLoader(dataset = trainData, batch_size = batchSize)
testData = Data(tstX, tstY)
testLoad = DataLoader(dataset = testData, batch_size = len(testData))

# define the modeling objects
hiddenLayers = 30
learningRate = 0.1
model = myNN(p,hiddenLayers,1)
print(model)
optimizer = torch.optim.SGD(model.parameters(), lr = learningRate)
loss = nn.BCELoss()

with trnX.shape = (70, 2), trnY.shape = (70,), tstX.shape = (30, 2), and tstY.shape = (30,). The training code is:

# train!
epochs = 1000
talkFreq = 0.2
trnLoss = [np.inf]*epochs
tstLoss = [np.inf]*epochs
for i in range(epochs):
    # train with minibatch gradient descent
    for x, y in trainLoad:
        # forward step
        yhat = model(x)
        # compute loss (not storing for now, will do after minibatching)
        l = loss(yhat, y)
        # backward step
        optimizer.zero_grad()
        l.backward()
        optimizer.step()
    # evaluate loss on training set
    yhat = model(trainData.x)
    trnLoss[i] = loss(yhat, trainData.y)
    # evaluate loss on testing set
    yhat = model(testData.x)
    tstLoss[i] = loss(yhat, testData.y)

The datasets trainData and testData have 70 and 30 observations, respectively. This may just be a newbie question, but when I run the training cell, the line trnLoss[i] = loss(yhat, trainData.y) raises the error

ValueError: Target and input must have the same number of elements. target nelement (70) != input nelement (5)

When I inspect the output of the yhat = model(trainData.x) line, I find that yhat is a tensor with batchSize elements, despite the fact that trainData.x.shape = torch.Size([70, 2]).

How can I train the model iteratively with mini-batch gradient descent, and then use the model to compute the loss and accuracy on the full training and test sets? I tried setting model.train() before the mini-batch loop and model.eval() before the evaluation code, but to no avail.

1 Answer:

Answer 0 (score: 0)

In myNN.forward(), you pass the lowercase x as the input to self.lin1, while the method's input argument is named with an uppercase X. The lowercase x is effectively a global variable, defined by the for loop over trainLoad, so you never get a syntax error, but the value you intended to pass never reaches self.lin1. That is also why model(trainData.x) comes back with batchSize elements: the forward pass ignores its input and reuses the last minibatch x left over from the training loop.
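
In other words, the fix is to reference the method's own argument inside forward() (same body as yours otherwise):

    def forward(self, X):
        # use the argument X, not the global x left over from the training loop
        return torch.sigmoid(self.lin2(torch.sigmoid(self.lin1(X))))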

I might also suggest that you consider model.eval() together with with torch.no_grad() for the evaluation code. It is not strictly necessary here, but it would make more sense.
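
A minimal sketch of what the end-of-epoch evaluation could look like with those two additions (the .item() calls and the switch back to model.train() are extra suggestions, not something your code strictly requires):

# evaluate on the full training and test sets without tracking gradients
model.eval()
with torch.no_grad():
    trnLoss[i] = loss(model(trainData.x), trainData.y).item()
    tstLoss[i] = loss(model(testData.x), testData.y).item()
model.train()  # back to training mode before the next epoch's minibatches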