Backpropagation makes the average output 0.5

Date: 2018-12-10 08:34:21

Tags: python machine-learning neural-network backpropagation

I have a neural network model that I'm developing in Python. Backpropagation doesn't seem to be working, even though I've been fiddling with it for a while. After lots of training, even with enough data, the output averages out to 0.5. Here is the backpropagation code and the data; the target is just a simple AND gate output. Data:

data = [[[1, 1], 1],
        [[1, 0], 0],
        [[0, 1], 0],
        [[0, 0], 0]]

Backpropagation:

def backpropagate(self, input, output, learning_rate=0.2):
    expected = self.feed_forward(input)  # network prediction
    state = self.feed_full(input)
    error = output - expected  # error
    delta = error * self.activation_function(expected, True)
    for weight_layer in reversed(range(len(self.weights))):
        error = delta.dot(self.weights[weight_layer].T)  # updating error
        delta = error * self.activation_function(state[weight_layer], True)  # updating delta for each layer
        self.weights[weight_layer] += np.transpose(state[weight_layer]).dot(delta) * learning_rate

Feed-forward, returning either just the output or all layer states:

def feed_forward(self, x):
    ret = x
    for weight_layer in self.weights:
        ret = self.activation_function(np.dot(ret, weight_layer))
    return ret

def feed_full(self, x):
    state = x
    activations = [x]
    for weight_layer in self.weights:
        state = self.activation_function(np.dot(state, weight_layer))
        activations.append(state)
    return activations

The net has shape [2, 3, 1], and I'm trying to design it so the shape is extensible so I can use it for other projects. I only need help with the backprop part. Thanks.
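For reference, here is a minimal, self-contained NumPy sketch of the same [2, 3, 1] setup trained on the AND data. It is an illustration, not the code above: it assumes a sigmoid `activation_function`, omits bias terms (matching the posted code), and names like `forward` and `sizes` are made up. Note the update order it uses: each layer's weights are updated with the current delta *before* the error is propagated one layer back, so the output-layer delta actually gets applied — whereas the loop in `backpropagate` overwrites `delta` at the top of its first iteration, before the output layer has been updated with it.

```python
import numpy as np

def sigmoid(x, derivative=False):
    if derivative:
        # x is assumed to already be a sigmoid activation here
        return x * (1.0 - x)
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
sizes = [2, 3, 1]  # same shape as the question's net
weights = [rng.standard_normal((m, n)) for m, n in zip(sizes, sizes[1:])]

X = np.array([[1, 1], [1, 0], [0, 1], [0, 0]], dtype=float)
y = np.array([[1], [0], [0], [0]], dtype=float)  # AND gate targets

def forward(X):
    """Return every layer's activation, like feed_full."""
    activations = [X]
    for w in weights:
        activations.append(sigmoid(activations[-1].dot(w)))
    return activations

learning_rate = 0.5
initial_loss = float(np.mean((y - forward(X)[-1]) ** 2))
for _ in range(10000):
    activations = forward(X)
    # Output-layer delta first ...
    delta = (y - activations[-1]) * sigmoid(activations[-1], derivative=True)
    for i in reversed(range(len(weights))):
        # ... compute this layer's weight update from the current delta,
        grad = activations[i].T.dot(delta) * learning_rate
        # ... and only then propagate the error one layer further back.
        if i > 0:
            delta = delta.dot(weights[i].T) * sigmoid(activations[i], derivative=True)
        weights[i] += grad
final_loss = float(np.mean((y - forward(X)[-1]) ** 2))

preds = forward(X)[-1]
print(initial_loss, final_loss)   # the loss should drop during training
print(preds.ravel())              # predictions for [1,1], [1,0], [0,1], [0,0]
```

With the deltas applied in this order, the loss decreases during training instead of the outputs settling around the mean of the targets.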

0 Answers:

There are no answers yet.