Deep learning model stops learning after 30 iterations

Asked: 2020-08-25 16:18:27

Tags: python machine-learning deep-learning neural-network

I am trying to apply the deep learning model I built while completing the "Neural Networks and Deep Learning" course on Coursera to the MNIST dataset of handwritten digits. In the course it recognised cats well, so I know the model as a whole works. I have modified all the input data and the output layer so that the output is an array of size 10, and all the array shapes match the dimensions used in the course.
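To be concrete, by "modified all the input data and the output layer" I mean preprocessing roughly like the sketch below (the function name is made up for illustration, not my exact code): flatten each 28×28 image into a 784-vector, scale it to [0, 1], and one-hot encode the labels into a (10, m) array with one column per example, following the course convention.

import numpy as np

# Illustrative sketch of the data preparation described above (hypothetical
# helper; my real code differs only in housekeeping details).
def prepare_mnist(images, labels):
    # images: (m, 28, 28) array of pixel values, labels: (m,) array of digits 0-9
    m = images.shape[0]
    # One column per example, pixels scaled to [0, 1] -> shape (784, m)
    X = images.reshape(m, 784).T / 255.0
    # One-hot encode the labels -> shape (10, m), matching the size-10 output layer
    Y = np.zeros((10, m))
    Y[labels, np.arange(m)] = 1
    return X, Y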

I did some experimenting and ran into a very strange problem. My plot of cost over time looks like this: [Figure: MNIST training cost over time]

I would normally expect the curve to be steeper, heading toward a value much closer to zero, and the very sharp bend is also strange. I should also point out that the x-axis is in tens, not hundreds.

My NN has the shape [784, 200, 50, 10]. I assume that isn't the problem, but what I'm really looking for is someone with more ML experience to explain why this is happening.
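For reference, the layer sizes are set up like this, and initialize_parameters_deep is the usual course-style helper, sketched here from memory (the exact weight scaling is an assumption; the point is just the parameter shapes that [784, 200, 50, 10] produces):

import numpy as np

layers_dims = [784, 200, 50, 10]  # input layer, two hidden layers, 10 output units

# Course-style initialisation sketch (scaling factor may differ from my code).
def initialize_parameters_deep(layer_dims):
    parameters = {}
    L = len(layer_dims)
    for l in range(1, L):
        # W1: (200, 784), W2: (50, 200), W3: (10, 50); the b's are column vectors
        parameters["W" + str(l)] = np.random.randn(layer_dims[l], layer_dims[l - 1]) * 0.01
        parameters["b" + str(l)] = np.zeros((layer_dims[l], 1))
    return parameters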

My model currently looks like this:

# Initialisation of parameters
parameters = initialize_parameters_deep(layers_dims)

for i in range(0, num_iterations):

    # Forward propagation: [LINEAR -> RELU]*(L-1) -> LINEAR -> SIGMOID.
    AL, caches = L_model_forward(train_data, parameters, layers_dims[-1])

    # Compute cost.
    cost = compute_cost(AL, train_labels)

    # Backward propagation.
    grads = L_model_backward(AL, train_labels, caches)

    # Update parameters.
    parameters = update_parameters(parameters, grads, learning_rate)


    # Print the cost every 100 iterations
    if print_cost and i % 100 == 0:
        print("Cost after iteration %i: %f" % (i, cost))
    # Record the cost every 10 iterations for plotting
    if print_cost and i % 10 == 0:
        costs.append(cost)
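compute_cost and update_parameters are the helpers from the course assignment, adapted to the size-10 output; roughly like this (paraphrased from memory, not copied verbatim):

import numpy as np

def compute_cost(AL, Y):
    # Cross-entropy cost between the sigmoid outputs AL and the labels Y,
    # both of shape (10, m), averaged over the m examples.
    m = Y.shape[1]
    cost = -np.sum(Y * np.log(AL) + (1 - Y) * np.log(1 - AL)) / m
    return np.squeeze(cost)


def update_parameters(parameters, grads, learning_rate):
    # Plain gradient-descent step on every W and b.
    L = len(parameters) // 2
    for l in range(1, L + 1):
        parameters["W" + str(l)] -= learning_rate * grads["dW" + str(l)]
        parameters["b" + str(l)] -= learning_rate * grads["db" + str(l)]
    return parameters

Note that costs is only appended every 10 iterations, so each entry in the plotted list corresponds to 10 training iterations.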

My backpropagation code is as follows:

def linear_backward(dZ, cache):
    A_prev, W, b = cache
    m = A_prev.shape[1]

    dW = 1 / m * np.dot(dZ, A_prev.T)
    db = 1 / m * np.sum(dZ, axis=1, keepdims=True)
    dA_prev = np.dot(W.T, dZ)

    assert (dA_prev.shape == A_prev.shape)
    assert (dW.shape == W.shape)
    assert (db.shape == b.shape)

    return dA_prev, dW, db


def linear_activation_backward(dA, cache, activation):

    linear_cache, activation_cache = cache

    if activation == "relu":
        dZ = relu_backward(dA, activation_cache)
        dA_prev, dW, db = linear_backward(dZ, linear_cache)

    elif activation == "sigmoid":
        dZ = sigmoid_backward(dA, activation_cache)
        dA_prev, dW, db = linear_backward(dZ, linear_cache)

    return dA_prev, dW, db


def L_model_backward(AL, Y, caches):
    grads = {}
    L = len(caches)  # the number of layers
    m = AL.shape[1]
    Y = Y.reshape(AL.shape)  # after this line, Y is the same shape as AL

    # Initializing the backpropagation
    dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))

    # Lth layer (SIGMOID -> LINEAR) gradients. Inputs: dAL, current_cache. Outputs: grads["dA" + str(L - 1)], grads["dW" + str(L)], grads["db" + str(L)]
    current_cache = caches[L - 1]
    grads["dA" + str(L - 1)], grads["dW" + str(L)], grads["db" + str(L)] = linear_activation_backward(dAL, current_cache, activation="sigmoid")

    # Loop from l=L-2 to l=0
    for l in reversed(range(L - 1)):
        # lth layer: (RELU -> LINEAR) gradients.
        # Inputs: grads["dA" + str(l + 1)], current_cache. Outputs: grads["dA" + str(l)], grads["dW" + str(l + 1)], grads["db" + str(l + 1)]
        current_cache = caches[l]
        dA_prev_temp, dW_temp, db_temp = linear_activation_backward(grads["dA" + str(l + 1)], current_cache, activation="relu")
        grads["dA" + str(l)] = dA_prev_temp
        grads["dW" + str(l + 1)] = dW_temp
        grads["db" + str(l + 1)] = db_temp

    return grads
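relu_backward and sigmoid_backward are the standard helpers from the course, sketched here from memory (the activation_cache holds the pre-activation Z from the forward pass; my actual helpers may differ in naming only):

import numpy as np

def relu_backward(dA, activation_cache):
    # ReLU derivative is 1 where Z > 0 and 0 elsewhere.
    Z = activation_cache
    dZ = np.array(dA, copy=True)
    dZ[Z <= 0] = 0
    return dZ


def sigmoid_backward(dA, activation_cache):
    # sigmoid'(Z) = s * (1 - s), applied element-wise.
    Z = activation_cache
    s = 1 / (1 + np.exp(-Z))
    return dA * s * (1 - s)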

If you need any other code or anything else from me, I'd be happy to provide it.

0 Answers:

No answers yet.