Binary logistic regression in Torch gives constant output

Time: 2016-04-14 19:18:52

Tags: neural-network deep-learning torch

I have a problem with logistic regression in Torch. When I train a binary logistic regression model, the output stays constant. The model is defined as follows:

linLayer = nn.Linear(3,1)
sigmoid=nn.Sigmoid()
model = nn.Sequential()
model:add(linLayer)
model:add(sigmoid)

criterion = nn.BCECriterion()
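
For reference, the Sigmoid layer of this model should map any finite input to a value strictly between 0 and 1, so a constant hard 0 or 1 output suggests the linear layer is saturating. A minimal sketch of a single forward pass through the same architecture (the input tensor here is made up purely for illustration):

```lua
require 'nn'

-- same architecture as in the question
local linLayer = nn.Linear(3, 1)
local model = nn.Sequential()
model:add(linLayer)
model:add(nn.Sigmoid())

-- hypothetical 3-dimensional input, for illustration only
local input = torch.Tensor({0.5, -1.2, 2.0})
local output = model:forward(input)
print(output)  -- a 1-element tensor with a value strictly in (0, 1)
```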

The training loop uses the optim package:

for i = 1, epochs do
   current_loss = 0
   -- one pass over the dataset, one SGD step per sample
   -- (the original inner loop reused `i`, shadowing the epoch counter)
   for j = 1, (#dataset_inputs)[1] do
      _, fs = optim.sgd(feval, x, sgd_params)
      current_loss = current_loss + fs[1]
   end
   current_loss = current_loss / (#dataset_inputs)[1]
   print('epoch = ' .. i ..
         ' of ' .. epochs ..
         ' current loss = ' .. current_loss)
end

The SGD parameters are:

sgd_params = {
   learningRate = 1e-6,
   learningRateDecay = 1e-4,
   weightDecay = 0,
   momentum = 0
} 

The loss looks like this:

epoch = 1 of 2000 current loss = 8.7728492043066    
epoch = 2 of 2000 current loss = 8.7728492043066    
epoch = 3 of 2000 current loss = 8.7728492043066    
epoch = 4 of 2000 current loss = 8.7728492043066    
epoch = 5 of 2000 current loss = 8.7728492043066    
epoch = 6 of 2000 current loss = 8.7728492043066    
epoch = 7 of 2000 current loss = 8.7728492043066 

...and the model's output is:

0
0
0
0
0
0
.
.
.

and sometimes:

1
1
1
.
.
.

I have tried changing the learning rate and the learning rate decay, but it makes no difference at all. I can't figure out what the problem is. Does anyone know what is wrong here?

And here is my feval function:

feval = function(x_new)
   -- sync the flattened parameter vector if the optimizer passes a new one
   if x ~= x_new then
      x:copy(x_new)
   end

   -- cycle through the dataset one sample at a time
   _nidx_ = (_nidx_ or 0) + 1
   if _nidx_ > (#dataset_inputs)[1] then _nidx_ = 1 end

   local sample = dataset[_nidx_]
   local inputs = sample[{ {2,4} }]  -- columns 2-4 are the features
   local target = sample[{ {1} }]    -- column 1 is the label

   -- reset gradients, then forward/backward
   dl_dx:zero()
   local loss_x = criterion:forward(model:forward(inputs), target)
   model:backward(inputs, criterion:backward(model.output, target))

   -- return loss(x) and dloss/dx
   return loss_x, dl_dx
end
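
Note that feval references x and dl_dx without defining them; in the standard Torch optim pattern these come from Module:getParameters(), so the snippets above presumably assume a setup along these lines:

```lua
-- flatten all learnable parameters and their gradients into two vectors;
-- optim.sgd updates x in place, and feval fills dl_dx during backward
x, dl_dx = model:getParameters()
```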

0 Answers:

No answers yet.