I don't think my question is a duplicate of this one, because my implementation already differs from it.
I am trying to implement a perceptron in Erlang and to train it to recognize a linear slope. The problem is that it does not train properly: after 50 epochs, its guesses are still only about 50% correct.
The starting weights are supplied in a list [X_weight, Y_weight, Bias_weight], and the training set in another list [X, Y, Desired_guess], where X and Y are integers and Desired_guess is -1 if the coordinate is below the line, or 1 if it is above it.
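As a concrete illustration of that format, a labeled training set could be generated for some fixed line. The line y = x and the helper names below are only an example, not part of the original code:

```erlang
%% Example training set for the (illustrative) line y = x:
%% Desired_guess is 1 when the point is above the line (Y > X), -1 otherwise.
%% Points exactly on the line are skipped so every label is well defined.
training_set() ->
    [{X, Y, label(X, Y)} || X <- lists:seq(-5, 5),
                            Y <- lists:seq(-5, 5),
                            X =/= Y].

label(X, Y) when Y > X -> 1;
label(_, _) -> -1.
```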
First, the calculation of the new weights:
% Exported starting clause.
% Inputs: a list of input values for one perceptron ([X, Y, Bias]), a list of
% weights corresponding to those inputs ([X_weight, Y_weight, Bias_weight]),
% the learning constant, and the error (Desired - Guess).
train_perceptron([InputsH|InputsT], [WeightsH|WeightsT], Learning_constant, Error) ->
    train_perceptron(InputsT, WeightsT, Learning_constant, Error,
        [WeightsH + (Learning_constant * Error) * InputsH]).

% Non-exported clause called by train_perceptron/4; it also carries the list
% of adjusted weights accumulated so far.
% When the tails of the input lists are empty, this is the last value and
% therefore the bias (its input is always 1, so no multiplication is needed).
train_perceptron([_InputsH|[]], [WeightsH|[]], Learning_constant, Error, Adjusted_weights) ->
    train_perceptron([], [], Learning_constant, Error,
        Adjusted_weights ++ [WeightsH + Learning_constant * Error]);
% Normal case: calculate the new weight and append it to Adjusted_weights.
train_perceptron([InputsH|InputsT], [WeightsH|WeightsT], Learning_constant, Error, Adjusted_weights) ->
    train_perceptron(InputsT, WeightsT, Learning_constant, Error,
        Adjusted_weights ++ [WeightsH + (Learning_constant * Error) * InputsH]);
% Base case: both lists are empty, nothing more to do. Return Adjusted_weights.
train_perceptron([], [], _, _, Adjusted_weights) ->
    Adjusted_weights.
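As a quick sanity check of these clauses, here is what a single update step computes in the shell (the module name perceptron is assumed; each weight moves by Learning_constant * Error * Input, with the bias input implicitly 1):

```erlang
%% Assuming the clauses above are compiled in a module named perceptron:
%% inputs [2, 3, 1] (bias last), weights [0.5, 0.5, 0.5],
%% learning constant 0.1, error 2.
%% Each weight W becomes W + (0.1 * 2) * Input.
1> perceptron:train_perceptron([2, 3, 1], [0.5, 0.5, 0.5], 0.1, 2).
[0.9,1.1,0.7]
```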
This is the function that calls train_perceptron:
line_trainer(Weights, [], _) ->
    Weights;
line_trainer(Weights, [{X, Y, Desired}|TST], Learning_constant) ->
    Bias = 1,
    Error = Desired - feedforward([X, Y, Bias], Weights),
    Adjusted_weights = train_perceptron([X, Y, Bias], Weights, Learning_constant, Error),
    line_trainer(Adjusted_weights, TST, Learning_constant).
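The feedforward/2 function is not shown in the question. A minimal sketch of what a standard perceptron activation would look like, assuming a sign function over the weighted input sum:

```erlang
%% Sketch of a standard perceptron feedforward step (not the asker's code):
%% take the dot product of inputs and weights, then threshold at zero.
feedforward(Inputs, Weights) ->
    Sum = lists:sum(lists:zipwith(fun(I, W) -> I * W end, Inputs, Weights)),
    if
        Sum >= 0 -> 1;
        true -> -1
    end.
```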
One possible solution would be for someone to provide a training set for this kind of function, together with three starting weights and the expected output for each epoch. That would help me debug this myself.
Answer 0 (score: 0)
This actually works. The training set I provided was too small; with a larger training set and about 20 epochs, the global error converges to 0.
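For reference, running several epochs just means reapplying line_trainer/3 to the same training set, feeding each epoch's weights into the next. This train_epochs/4 wrapper is hypothetical, not part of the original code:

```erlang
%% Hypothetical epoch loop around line_trainer/3 (defined above):
%% run Epochs full passes over the training set, threading the weights.
train_epochs(Weights, _Training_set, _Learning_constant, 0) ->
    Weights;
train_epochs(Weights, Training_set, Learning_constant, Epochs) ->
    New_weights = line_trainer(Weights, Training_set, Learning_constant),
    train_epochs(New_weights, Training_set, Learning_constant, Epochs - 1).
```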