Before explaining the problem, let me give some background. My task is to take an image and classify whether it contains a smile. The files are labeled, for example, 100a.jpg and 100b.jpg, where 'a' denotes an image without a smile and 'b' denotes an image with a smile. I want to build a 3-layer network: layer 1 = input nodes, layer 2 = hidden layer, layer 3 = output node.
The general algorithm is:

Formula 1 (forward propagation with a logistic activation): h_theta(x) = logsig(theta' * x) = 1 / (1 + e^(-theta' * x))

Formula 2 (gradient-descent weight update): theta := theta - alpha * (h_theta(x) - y) * x
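To make the two formulas concrete, here is a minimal MATLAB sketch of a single logistic unit trained by gradient descent on one made-up example; x, y, theta, and alpha below are illustrative values of my own, not taken from the actual training data:

x = [1; 0.2; 0.7];                        % toy input vector with a leading bias term
y = 1;                                    % toy target label (1 = smile, 0 = no smile)
theta = 2*rand(3,1) - 1;                  % weights initialized uniformly in [-1, 1]
alpha = 0.01;                             % learning rate
for iter = 1:100
    h = 1 ./ (1 + exp(-theta' * x));      % Formula 1: logistic hypothesis h_theta(x)
    theta = theta - alpha * (h - y) * x;  % Formula 2: gradient-descent weight update
end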
Now the problem is very simple... my code never converges, so I never get a weight vector I can use to test the network. The problem is that I have no idea why this happens... Here is the output I print, which clearly shows it is not converging:
Training done full cycle
0.5015
Training done full cycle
0.5015
Training done full cycle
0.5015
Training done full cycle
0.5038
Training done full cycle
0.5038
Training done full cycle
0.5038
Training done full cycle
0.5038
Training done full cycle
0.5038
Here is my MATLAB code:
function [thetaLayer12,thetaLayer23] = trainSystem()
    %This is just the directory where I read the images from
    files = dir('train1/*jpg');
    filelength = length(files);
    %Here I create my weights between the input layer and the hidden layer,
    %and then from the hidden layer to the output node. The value 481 is used
    %because there are 480 input nodes + 1 bias node. The value 200 is the
    %number of hidden layer nodes.
    thetaLayer12 = unifrnd(-1, 1, [481,200]);
    thetaLayer23 = unifrnd(-1, 1, [201,1]);
    %Learning rate
    alpha = 0.00125;
    %Initialize convergence error
    globalError = 100;
    while(globalError > 0.001)
        globalError = 0;
        %Run through all the files in my training set. 400 files to be exact.
        for i = 1 : filelength
            %Here we find out if the image has a smile in it or not.
            %Images are labeled 1a.jpg, 1b.jpg where images with an 'a' in them
            %have no smile and images with a 'b' in them have a smile.
            y = isempty(strfind(files(i).name,'a'));
            %We read in the image
            imageBig = imread(strcat('train1/',files(i).name));
            %We resize the image to 24x20
            image = imresize(imageBig,[24 20]);
            %I then take the 2D image and map it to a 1D vector
            inputNodes = reshape(image,480,1);
            %A bias value of 1 is added to the top of the vector
            inputNodes = [1;inputNodes];
            %Forward propagation is applied to the input layer and the hidden
            %layer
            outputLayer2 = logsig(double(inputNodes')*thetaLayer12);
            %Here we then add a bias value to the hidden layer nodes
            inputNodes2 = [1;outputLayer2'];
            %Here we then do a forward propagation from the hidden layer to the
            %output node to obtain a single value.
            finalResult = logsig(double(inputNodes2')*thetaLayer23);
            %Backward propagation is then applied to the weights between the
            %output node and the hidden layer.
            thetaLayer23 = thetaLayer23 - alpha*(finalResult - y)*inputNodes2;
            %Backward propagation is then applied to the weights between the
            %hidden layer and the input nodes.
            thetaLayer12 = thetaLayer12 - (((alpha*(finalResult-y)*thetaLayer23(2:end))'*inputNodes2(2:end))*(1-inputNodes2(2:end))*double(inputNodes'))';
            %I sum the error across each iteration over all the images in the
            %folder
            globalError = globalError + abs(finalResult-y);
            if(i == 400)
                disp('Training done full cycle');
            end
        end
        %I take the average error
        globalError = globalError / filelength;
        disp(globalError);
    end
end
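For reference, a textbook backpropagation update for a single logsig hidden layer would look roughly like the sketch below, written with the same variable names and shapes as the code above. This is only an illustrative sketch of the standard formulas (delta3, delta2, and hidden are names introduced here for clarity), not a verified fix for the code in question:

delta3 = finalResult - y;                                            % output-layer error (scalar)
hidden = inputNodes2(2:end);                                         % 200x1 hidden activations, bias removed
delta2 = (thetaLayer23(2:end) * delta3) .* hidden .* (1 - hidden);   % 200x1 hidden-layer error
thetaLayer23 = thetaLayer23 - alpha * delta3 * inputNodes2;          % output-layer update (as above)
thetaLayer12 = thetaLayer12 - alpha * double(inputNodes) * delta2';  % 481x200 input-to-hidden update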
Any help would be greatly appreciated!!!!
Answer 0 (score 0):
The success of training any machine learning algorithm depends heavily on the number of training examples you use. You never say exactly how many training examples you have, but in the case of face/smile detection a very large number of examples will probably be needed (if it can work at all).
Think of it this way: a computer scientist shows you two arrays of pixel intensity values. He tells you which one contains a smile and which one does not. He then shows you two more and asks you to tell him which one contains a smile.
Fortunately, we can get around this to some extent. You can use an autoencoder or a dictionary learner (such as sparse coding) to find higher-level structure in your data. Instead of showing you pixel intensities, the computer scientist could show you edges or even body parts. You can then use these as the input to your neural network, although you will probably still need a fair number of training examples (just fewer than before).
This analogy was inspired by Professor Ng's talk at Stanford on unsupervised feature learning.
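As a rough illustration of the autoencoder idea, here is a minimal MATLAB sketch of a single-hidden-layer autoencoder trained with plain gradient descent. The data (random "patches"), the layer sizes, the squared-error loss, and all variable names are assumptions made purely for illustration (a real sparse-coding setup would also add a sparsity penalty); this is not a recipe prescribed by the answer:

patches = rand(480, 1000);                % 1000 toy patch columns, 480 pixels each (stand-in data)
nHidden = 50;                             % number of learned features
W1 = 0.01 * randn(nHidden, 480);          % encoder weights
W2 = 0.01 * randn(480, nHidden);          % decoder weights
alpha = 0.1;                              % learning rate
m = size(patches, 2);
for epoch = 1:200
    h = 1 ./ (1 + exp(-W1 * patches));    % encode: hidden features, nHidden x m
    xhat = W2 * h;                        % decode: reconstruction, 480 x m
    err = xhat - patches;                 % reconstruction error
    dW2 = (err * h') / m;                 % gradient of mean squared error w.r.t. W2
    dh  = (W2' * err) .* h .* (1 - h);    % backpropagated error at the hidden layer
    dW1 = (dh * patches') / m;            % gradient w.r.t. W1
    W1 = W1 - alpha * dW1;
    W2 = W2 - alpha * dW2;
end
features = 1 ./ (1 + exp(-W1 * patches)); % learned features for each patch

The learned feature vectors (the columns of "features") would then replace the raw 480-pixel vectors as the input to the smile classifier.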