Torch7 ClassNLLCriterion()

Date: 2016-10-14 15:21:34

Tags: torch

I have been trying to get my code to work, but it keeps failing even though the inputs and outputs are consistent.

Someone mentioned somewhere that ClassNLLCriterion does not accept values less than or equal to zero.

How should I go about training this network? Here is part of my code; I suppose it fails during backward(), since the model output may contain negative values. However, when I switch to the MeanSquaredError criterion, the code works fine.

ninputs = 22; noutputs = 3
hidden = 22


model = nn.Sequential() 
model:add(nn.Linear(ninputs, hidden)) -- define the hidden layer
model:add(nn.Tanh())
model:add(nn.Linear(hidden, noutputs))
model:add(nn.LogSoftMax())
----------------------------------------------------------------------
-- 3. Define a loss function, to be minimized.

-- Here we minimize the negative log-likelihood (NLL) between the
-- log-probabilities produced by our model and the groundtruth class
-- labels available in the dataset.

-- Torch provides many common criterions to train neural networks.

criterion = nn.ClassNLLCriterion()


----------------------------------------------------------------------
-- 4. Train the model
i=1
mean = {}
std = {}





-- To minimize the loss defined above, using the linear model defined
-- in 'model', we follow a stochastic gradient descent procedure (SGD).

-- SGD is a good optimization algorithm when the amount of training data
-- is large, and estimating the gradient of the loss function over the 
-- entire training set is too costly.

-- Given an arbitrarily complex model, we can retrieve its trainable
-- parameters, and the gradients of our loss function wrt these 
-- parameters by doing so:

x, dl_dx = model:getParameters()

-- In the following code, we define a closure, feval, which computes
-- the value of the loss function at a given point x, and the gradient of
-- that function with respect to x. x is the vector of trainable weights,
-- which, in this example, are all the weights of the linear matrix of
-- our model, plus one bias.

feval = function(x_new)
   -- set x to x_new, if different
   -- (in this simple example, x_new will typically always point to x,
   -- so the copy is really useless)
   if x ~= x_new then
      x:copy(x_new)
   end

   -- select a new training sample
   _nidx_ = (_nidx_ or 0) + 1
   if _nidx_ > (#csv_tensor)[1] then _nidx_ = 1 end

   local sample = csv_tensor[_nidx_]
   local target = sample[{ {23,25} }]
   local inputs = sample[{ {1,22} }]    -- slicing of arrays.

   -- reset gradients (gradients are always accumulated, to accommodate 
   -- batch methods)
   dl_dx:zero()

   -- evaluate the loss function and its derivative wrt x, for that sample
   local loss_x = criterion:forward(model:forward(inputs), target)
   model:backward(inputs, criterion:backward(model.output, target))

   -- return loss(x) and dloss/dx
   return loss_x, dl_dx
end
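
The traceback below shows `feval` being driven by `optim.sgd`, but the training loop itself is not shown in the post. A minimal sketch of what such a loop presumably looks like (the learning rate and epoch count are assumptions, not values from the original code):

```lua
-- Hypothetical training loop driving the feval closure with optim.sgd.
-- sgd_params and the epoch count are illustrative assumptions.
require 'optim'

sgd_params = {learningRate = 1e-2}

for epoch = 1, 100 do
   local current_loss = 0
   -- one pass over the dataset, one sample at a time
   for i = 1, (#csv_tensor)[1] do
      local _, fs = optim.sgd(feval, x, sgd_params)
      current_loss = current_loss + fs[1]
   end
   print('epoch ' .. epoch .. ', avg loss = '
         .. current_loss / (#csv_tensor)[1])
end
```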

The error received is:

/home/stormy/torch/install/bin/luajit: /home/stormy/torch/install/share/lua/5.1/nn/THNN.lua:110: Assertion `cur_target >= 0 && cur_target < n_classes' failed. at /home/stormy/torch/extra/nn/lib/THNN/generic/ClassNLLCriterion.c:45
stack traceback:
	[C]: in function 'v'
	/home/stormy/torch/install/share/lua/5.1/nn/THNN.lua:110: in function 'ClassNLLCriterion_updateOutput'
	...rmy/torch/install/share/lua/5.1/nn/ClassNLLCriterion.lua:43: in function 'forward'
	nn.lua:178: in function 'opfunc'
	/home/stormy/torch/install/share/lua/5.1/optim/sgd.lua:44: in function 'sgd'
	nn.lua:222: in main chunk
	[C]: in function 'dofile'
	...ormy/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
	[C]: at 0x00405d50

1 Answer:

Answer 0 (score: 1):

The error message results from passing in targets that are out of bounds. For example:

m = nn.ClassNLLCriterion()
nClasses = 3
nBatch = 10
net_output = torch.randn(nBatch, nClasses)
targets = torch.Tensor(nBatch):random(1,3) -- targets are between 1 and 3
m:forward(net_output, targets)
m:backward(net_output, targets)

Now see the bad example (the one you are suffering from):
targets[5] = 13 -- an out-of-bounds class index
targets[4] = 0  -- an out-of-bounds class index
-- these lines below will error
m:forward(net_output, targets)
m:backward(net_output, targets)
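
In the question's `feval`, the target is the 3-element slice `sample[{ {23,25} }]`, but ClassNLLCriterion expects a single class index in {1, ..., nClasses}. Assuming those three columns are a one-hot encoding of the label (an assumption, since the dataset layout is not shown), the index can be recovered with `torch.max` before calling the criterion:

```lua
-- Sketch: convert an assumed one-hot target slice (columns 23-25)
-- into the scalar class index that ClassNLLCriterion requires.
local sample = csv_tensor[_nidx_]
local inputs = sample[{ {1,22} }]
local onehot = sample[{ {23,25} }]   -- e.g. {0, 1, 0}
local _, idx = torch.max(onehot, 1)  -- position of the maximum entry
local target = idx[1]                -- scalar class index in 1..3

local loss_x = criterion:forward(model:forward(inputs), target)
```

If the 23rd-25th columns instead hold something other than a one-hot label, the targets must still be mapped to integers in 1..3 before being passed to the criterion.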