I've been wanting to get into Torch and started with this tutorial. However, when I run the code that uses the setmetatable function, I get a stack overflow. I believe this happens because of the large 50000-image input file, but I could be wrong. I've tried editing the luaconf.h file to fix it, to no avail. Apart from that, I'm running Torch with Lua 5.2 and without iTorch, since I couldn't get it set up.
Here is the error:
/home/student/torch/install/bin/lua: C stack overflow
stack traceback:
[C]: in function '__index'
Documents/TorchImageRecognition.lua:56: in function '__index'
Documents/TorchImageRecognition.lua:56: in function '__index'
Documents/TorchImageRecognition.lua:56: in function '__index'
Documents/TorchImageRecognition.lua:56: in function '__index'
Documents/TorchImageRecognition.lua:56: in function '__index'
Documents/TorchImageRecognition.lua:56: in function '__index'
Documents/TorchImageRecognition.lua:56: in function '__index'
Documents/TorchImageRecognition.lua:56: in function '__index'
Documents/TorchImageRecognition.lua:56: in function '__index'
...
Documents/TorchImageRecognition.lua:56: in function '__index'
Documents/TorchImageRecognition.lua:56: in function '__index'
Documents/TorchImageRecognition.lua:56: in function '__index'
Documents/TorchImageRecognition.lua:56: in function '__index'
Documents/TorchImageRecognition.lua:56: in function '__index'
Documents/TorchImageRecognition.lua:56: in function '__index'
Documents/TorchImageRecognition.lua:56: in function '__index'
Documents/TorchImageRecognition.lua:66: in main chunk
[C]: in function 'dofile'
...dent/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: in ?
Otherwise my code should be identical to the tutorial's, from step 1 (Load and normalize the data) through step 4 (Train the neural network).
Here is my code; sorry for not including it initially.
require 'torch'
require 'nn'
require 'paths'
if (not paths.filep("cifar10torchsmall.zip")) then
    os.execute('wget -c https://s3.amazonaws.com/torch7/data/cifar10torchsmall.zip')
    os.execute('unzip cifar10torchsmall.zip')
end
trainset = torch.load('cifar10-train.t7')
testset = torch.load('cifar10-test.t7')
classes = {'airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'}
print(trainset)
print(#trainset.data)
--itorch.image(trainset.data[100])
--print(classes[trainset.label[100]])
-- -- -- -- -- -- -- -- -- -- --
-- This code is from the previous parts of the tutorial
--net = nn.Sequential()
--net:add(nn.SpatialConvolution(1, 6, 5, 5))
--net:add(nn.ReLU())
--net:add(nn.SpatialMaxPooling(2, 2, 2, 2))
--net:add(nn.SpatialConvolution(6, 16, 5, 5))
--net:add(nn.ReLU())
--net:add(nn.SpatialMaxPooling(2, 2, 2, 2))
--net:add(nn.View(16*5*5))
--net:add(nn.Linear(16*5*5, 120))
--net:add(nn.ReLU())
--net:add(nn.Linear(120, 84))
--net:add(nn.ReLU())
--net:add(nn.Linear(84, 10))
--net:add(nn.LogSoftMax())
--print('Lenet5\n' .. net:__tostring())
--input = torch.rand(1, 32, 32)
--output = net:forward(input)
--print(output)
--net:zeroGradParameters()
--gradInput = net:backward(input, torch.rand(10))
--print(#gradInput)
--criterion = nn.ClassNLLCriterion()
--criterion:forward(output, 3)
--gradients = criterion:backward(output, 3)
--gradInput = net:backward(input, gradients)
--m= nn.SpatialConvolution(1, 3, 2, 2)
--print(m.weight)
--print(m.bias)
-- -- -- -- -- -- -- -- --
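-- Make trainset indexable like trainset[i], returning an {image, label} pair (as in the tutorial)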
setmetatable(trainset, {__index = function(t, i)
    return {t.data[i], t.lable[i]}
end})
trainset.data = trainset.data:double()
function trainset:size()
    return self.data:size(1)
end
print(trainset:size())
print(trainset[33])
redChannel = trainset.data[{ {}, {1}, {}, {} }]
print(#redChannel)
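-- Normalize each color channel: subtract the per-channel mean, then divide by the per-channel standard deviation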
mean = {}
stdv = {}
for i=1,3 do
    mean[i] = trainset.data[{ {}, {i}, {}, {} }]:mean()
    print('Channel ' .. i .. ', Mean: ' .. mean[i])
    trainset.data[{ {}, {i}, {}, {} }]:add(-mean[i])
    stdv[i] = trainset.data[{ {}, {i}, {}, {} }]:std()
    print('Channel ' .. i .. ', Standard Deviation: ' .. stdv[i])
    trainset.data[{ {}, {i}, {}, {} }]:div(stdv[i])
end
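-- LeNet-style network: two convolution + max-pooling stages, then three fully-connected layers and LogSoftMax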
net = nn.Sequential()
net:add(nn.SpatialConvolution(3, 6, 5, 5))
net:add(nn.ReLU())
net:add(nn.SpatialMaxPooling(2, 2, 2, 2))
net:add(nn.SpatialConvolution(6, 16, 5, 5))
net:add(nn.ReLU())
net:add(nn.SpatialMaxPooling(2, 2, 2, 2))
net:add(nn.View(16*5*5))
net:add(nn.Linear(16*5*5, 120))
net:add(nn.ReLU())
net:add(nn.Linear(120, 84))
net:add(nn.ReLU())
net:add(nn.Linear(84, 10))
net:add(nn.LogSoftMax())
criterion = nn.ClassNLLCriterion()
trainer = nn.StochasticGradient(net, criterion)
trainer.learningRate = 0.001
trainer.maxIteration = 5
trainer:train(trainset)
Answer 0 (score: 1)
There is a typo in your setmetatable: t.lable instead of t.label.
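The repeated __index frames in the traceback are the giveaway: trainset has no field named lable, so the lookup t.lable[i] inside __index triggers the same __index metamethod again, which looks up t.lable again, and so on until the C stack overflows. The 50000-image input size is not the problem. As a minimal sketch of the fix (everything except the field name is taken from your code):
setmetatable(trainset, {__index = function(t, i)
    return {t.data[i], t.label[i]}  -- 'label', not 'lable'
end})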