How to compute the gradient of the loss w.r.t. an arbitrary layer/weight in Torch?

Asked: 2016-04-06 20:08:07

Tags: lua machine-learning neural-network backpropagation torch

I am transitioning from Theano to Torch, so please bear with me. In Theano, it was fairly straightforward to compute the gradient of the loss function with respect to even a specific weight. I am wondering, how can one do this in Torch?

Assume that we have the following code, which generates some data/labels and defines a model:

t = require 'torch'
require 'nn'
require 'cunn'
require 'cutorch'



-- Generate random labels
function randLabels(nExamples, nClasses)
    -- nClasses: number of classes
    -- nExamples: number of examples
    label = {}
    for i=1, nExamples do
        label[i] = t.random(1, nClasses)
    end
    return t.FloatTensor(label)
end

inputs = t.rand(1000, 3, 32, 32) -- 1000 samples, 3 color channels
inputs = inputs:cuda()
labels = randLabels(inputs:size()[1], 10)
labels = labels:cuda()

net = nn.Sequential()
net:add(nn.SpatialConvolution(3, 6, 5, 5))
net:add(nn.ReLU())
net:add(nn.SpatialMaxPooling(2, 2, 2, 2))
net:add(nn.View(6*14*14))
net:add(nn.Linear(6*14*14, 300))
net:add(nn.ReLU())
net:add(nn.Linear(300, 10))
net = net:cuda()

-- Loss
criterion = nn.CrossEntropyCriterion()
criterion = criterion:cuda()
forwardPass = net:forward(inputs)
net:zeroGradParameters()
-- dEd_WeightsOfLayer1: How to compute this?



forwardPass = nil
net = nil
criterion = nil
inputs = nil
labels = nil

collectgarbage()

How can I compute the gradient of the loss w.r.t. the weights of the convolution layer?

1 Answer:

Answer 0 (score: 0)

OK, I found the answer (thanks to alban desmaison on the Torch7 Google group). The code in my question had a bug and did not work, so I re-wrote it. Here is how you can get the gradient with respect to each node/parameter:

t = require 'torch'
require 'cunn'
require 'nn'
require 'cutorch'



-- A function to generate some random labels
function randLabels(nExamples, nClasses)
    -- nClasses: number of classes
    -- nExamples: number of examples
    label = {}
    for i=1, nExamples do
        label[i] = t.random(1, nClasses)
    end
    return t.FloatTensor(label)
end

-- Declare some variables
nClass = 10
kernelSize = 5
stride = 2
poolKernelSize = 2
nData = 100
nChannel = 3
imageSize = 32

-- Generate some [random] data
data = t.rand(nData, nChannel, imageSize, imageSize) -- 100 Random images with 3 channels
data = data:cuda() -- Transfer to the GPU (remove this line if you're not using GPU)
label = randLabels(data:size()[1], nClass)
label = label:cuda() -- Transfer to the GPU (remove this line if you're not using GPU)

-- Define model
net = nn.Sequential()
net:add(nn.SpatialConvolution(nChannel, 6, kernelSize, kernelSize))
net:add(nn.ReLU())
net:add(nn.SpatialMaxPooling(poolKernelSize, poolKernelSize, stride, stride))
net:add(nn.View(6*14*14))
net:add(nn.Linear(6*14*14, 350))
net:add(nn.ReLU())
net:add(nn.Linear(350, 10))
net = net:cuda() -- Transfer to the GPU (remove this line if you're not using GPU)

criterion = nn.CrossEntropyCriterion()
criterion = criterion:cuda() -- Transfer to the GPU (remove this line if you're not using GPU)

-- Do forward pass and get the gradient for each node/parameter:

net:forward(data) -- Do the forward propagation
criterion:forward(net.output, label) -- Compute the overall negative log-likelihood error
criterion:backward(net.output, label); -- Don't forget to put ';'. Otherwise you'll get everything printed on the screen
net:backward(data, criterion.gradInput); -- Don't forget to put ';'. Otherwise you'll get everything printed on the screen

-- Now you can access the gradient values

layer1InputGrad = net:get(1).gradInput
layer1WeightGrads = net:get(1).gradWeight
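
-- (Not in the original answer; added here assuming you also want the bias term.)
-- The gradient w.r.t. the layer's bias is accumulated in gradBias:
layer1BiasGrads = net:get(1).gradBias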

net = nil
data = nil
label = nil
criterion = nil

Copy and paste the code, and it works like a charm :)
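
As a follow-up, if you want the gradients of all layers at once instead of indexing them one by one with net:get(i), you can use net:parameters(), which returns matching tables of parameter tensors and their gradient tensors. A minimal sketch, to be run after the backward pass above (the variable names params/gradParams are my own):

params, gradParams = net:parameters()
for i = 1, #params do
    -- gradParams[i] holds the accumulated dLoss/dParams[i] from net:backward()
    print(i, gradParams[i]:norm())
end

If you need a single flat vector instead (e.g. to plug into the optim package), net:getParameters() returns the parameters and their gradients as two flattened tensors; note that it should be called only once per network, since it reallocates the module's storage.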