Cost of a simple non-object-oriented neural network is "jumping"

Date: 2017-04-17 18:39:16

Tags: python numpy machine-learning neural-network artificial-intelligence

I am building a sketch of a neural network in Python 3.4 with numpy and matrices to learn a simple XOR. My notation is as follows:

a is the activation of a neuron

z is the input of a neuron

W is a weight matrix with size R^{number of neurons in previous layer} x {number of neurons in next layer}

B is a vector of bias values
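
For reference, written out in this notation, the forward pass, cost, and backward pass that the code below computes are (sigma denotes the sigmoid, products with a delta or derivative term are elementwise):

z1 = W1^T a0 + B1,   a1 = sigma(z1)
z2 = W2^T a1 + B2,   a2 = sigma(z2) = o
J  = 1/2 * sum((t - o)^2)

d2 = (o - t) * sigma'(z2)
d1 = (W2 d2) * sigma'(z1)
dJ/dW2 = a1 d2^T,   dJ/dB2 = d2
dJ/dW1 = a0 d1^T,   dJ/dB1 = d1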

After implementing a very simple network in python, everything works fine when training on only a single input vector. However, when training on all four XOR training examples, the error function shows a very strange behaviour (see the plots) and the output of the network is always roughly 0.5. Changing the size of the network, the learning rate, or the number of training epochs does not seem to help.

Cost J while only training on one training example

Cost J while training with all training examples

This is the code for the network:

import numpy as np
import time
import matplotlib.pyplot as plt


Js = []
start = time.time()
np.random.seed(2)


#Sigmoid        
def activation(x, derivative = False):
    if(derivative):
        a = activation(x)
        return a * (1 - a)
    else:
        return 1/(1+np.exp(-x))

def cost(output, target):
    return (1/2) * np.sum((target - output)**2)


INPUTS = np.array([
    [0, 1],
    [1, 0],
    [0, 0],
    [1, 1],
])
TARGET = np.array([
    [1],
    [1],
    [0],
    [0],
])

"Hyper-Parameters"
# Layer Structure
LAYER = [2, 3, 1]
LEARNING_RATE = 0.1
ITERATIONS = int(1e3)

# Init Weights
W1 = np.random.rand(LAYER[0], LAYER[1])
W2 = np.random.rand(LAYER[1], LAYER[2])

# Init Biases
B1 = np.random.rand(LAYER[1], 1)
B2 = np.random.rand(LAYER[2], 1)

for i in range(0, ITERATIONS):
    exampleIndex = i % len(INPUTS)
    #exampleIndex = 2
    "Forward Pass"
    # Layer One Activity (Input layer)
    A0 = np.transpose(INPUTS[exampleIndex:exampleIndex+1])

    # Layer Two Activity (Hidden Layer)
    Z1 = np.dot(np.transpose(W1), A0) + B1
    A1 = activation(Z1)

    # Layer Three Activity (Output Layer)
    Z2 = np.dot(np.transpose(W2), A1) + B2
    A2 = activation(Z2)

    # Output
    O = A2

    # Cost J

    # Target Vector T
    T = np.transpose(TARGET[exampleIndex:exampleIndex+1])
    J = cost(O, T)
    Js.append(J)

    print("J = {}".format(J))
    print("I = {}, O = {}".format(A0, O))

    "Backward Pass"

    # Calculate Delta of output layer
    D2 = (O - T) * activation(Z2, True)

    # Calculate Delta of hidden layer
    D1 = np.dot(W2, D2) * activation(Z1, True)

    # Calculate Derivatives w.r.t. W2
    DerW2 = np.dot(A1, np.transpose(D2))
    # Calculate Derivatives w.r.t. W1
    DerW1 = np.dot(A0, np.transpose(D1))

    # Calculate Derivatives w.r.t. B2
    DerB2 = D2
    # Calculate Derivatives w.r.t. B1
    DerB1 = D1

    "Update Weights and Biases"

    W1 -= LEARNING_RATE * DerW1
    B1 -= LEARNING_RATE * DerB1

    W2 -= LEARNING_RATE * DerW2
    B2 -= LEARNING_RATE * DerB2

# Show prediction

print("Time elapsed {}s".format(time.time() - start))    
plt.plot(Js)
plt.ylabel("Cost J")
plt.xlabel("Iterations")
plt.show()

What is the reason for this strange behaviour in my implementation?

1 Answer:

Answer 0 (score: 2)

I think your cost function is jumping because you perform a weight update after every single sample. Nevertheless, your network is training the correct behaviour:

479997
J = 4.7222501603409765e-05
I = [[1]
 [0]], O = [[ 0.99028172]]
T = [[1]]
479998
J = 7.3205311398742e-05
I = [[0]
 [0]], O = [[ 0.01210003]]
T = [[0]]
479999
J = 4.577485181547362e-05
I = [[1]
 [1]], O = [[ 0.00956816]]
T = [[0]]
480000
J = 4.726257702199439e-05
I = [[0]
 [1]], O = [[ 0.9902776]]
T = [[1]]

The cost function shows an interesting behaviour: the training process reaches a point at which the jumps in the cost function become very small. You can reproduce this with the code below (I only made minor modifications; note that I trained for many more iterations):

import numpy as np
import time
import matplotlib.pyplot as plt


Js = []
start = time.time()
np.random.seed(2)


#Sigmoid        
def activation(x, derivative = False):
    if(derivative):
        a = activation(x)
        return a * (1 - a)
    else:
        return 1/(1+np.exp(-x))

def cost(output, target):
    return (1/2) * np.sum((target - output)**2)


INPUTS = np.array([[0, 1],[1, 0],[0, 0],[1, 1]])
TARGET = np.array([[1],[1],[0],[0]])

"Hyper-Parameters"
# Layer Structure
LAYER = [2, 3, 1]
LEARNING_RATE = 0.1
ITERATIONS = int(5e5)

# Init Weights
W1 = np.random.rand(LAYER[0], LAYER[1])
W2 = np.random.rand(LAYER[1], LAYER[2])

# Init Biases
B1 = np.random.rand(LAYER[1], 1)
B2 = np.random.rand(LAYER[2], 1)

for i in range(0, ITERATIONS):
    exampleIndex = i % len(INPUTS)
    # exampleIndex = 2
    "Forward Pass"
    # Layer One Activity (Input layer)
    A0 = np.transpose(INPUTS[exampleIndex:exampleIndex+1])

    # Layer Two Activity (Hidden Layer)
    Z1 = np.dot(np.transpose(W1), A0) + B1
    A1 = activation(Z1)

    # Layer Three Activity (Output Layer)
    Z2 = np.dot(np.transpose(W2), A1) + B2
    A2 = activation(Z2)

    # Output
    O = A2

    # Cost J

    # Target Vector T
    T = np.transpose(TARGET[exampleIndex:exampleIndex+1])
    J = cost(O, T)
    Js.append(J)

    # print("J = {}".format(J))
    # print("I = {}, O = {}".format(A0, O))
    # print("T = {}".format(T))
    # Print four consecutive iterations around every 20000th iteration,
    # so that all four XOR cases are shown
    if (i % 20000 in (19997, 19998, 19999, 0)):
        print(i)
        print("J = {}".format(J))
        print("I = {}, O = {}".format(A0, O))
        print("T = {}".format(T))

    "Backward Pass"

    # Calculate Delta of output layer
    D2 = (O - T) * activation(Z2, True)

    # Calculate Delta of hidden layer
    D1 = np.dot(W2, D2) * activation(Z1, True)

    # Calculate Derivatives w.r.t. W2
    DerW2 = np.dot(A1, np.transpose(D2))
    # Calculate Derivatives w.r.t. W1
    DerW1 = np.dot(A0, np.transpose(D1))

    # Calculate Derivatives w.r.t. B2
    DerB2 = D2
    # Calculate Derivatives w.r.t. B1
    DerB1 = D1

    "Update Weights and Biases"

    W1 -= LEARNING_RATE * DerW1
    B1 -= LEARNING_RATE * DerB1

    W2 -= LEARNING_RATE * DerW2
    B2 -= LEARNING_RATE * DerB2

# Show prediction

print("Time elapsed {}s".format(time.time() - start))    
plt.plot(Js)
plt.ylabel("Cost J")
plt.xlabel("Iterations")
plt.savefig('cost.pdf')
plt.show()

To reduce the fluctuations in the cost function, one usually uses several data samples before performing an update (some kind of averaged update), but I find this difficult with a set that only contains four different training events. So, to conclude this rather long answer: your cost function jumps because it is computed for every single example and not for an average over several examples. Nevertheless, the network output follows the distribution of the XOR function quite well, so you don't need to change anything.
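
For illustration only, here is a minimal sketch of what such an averaged (full-batch) update could look like for the four XOR examples. It reuses the same activation, cost, and layer sizes; the learning rate and iteration count below are my own guesses, not values from the original code, and may need tuning:

import numpy as np

np.random.seed(2)


#Sigmoid
def activation(x, derivative = False):
    if(derivative):
        a = activation(x)
        return a * (1 - a)
    else:
        return 1/(1+np.exp(-x))

def cost(output, target):
    return (1/2) * np.sum((target - output)**2)


INPUTS = np.array([[0, 1],[1, 0],[0, 0],[1, 1]])
TARGET = np.array([[1],[1],[0],[0]])

LAYER = [2, 3, 1]
LEARNING_RATE = 0.5    # illustrative value, not from the original code
ITERATIONS = int(1e5)  # illustrative value

W1 = np.random.rand(LAYER[0], LAYER[1])
W2 = np.random.rand(LAYER[1], LAYER[2])
B1 = np.random.rand(LAYER[1], 1)
B2 = np.random.rand(LAYER[2], 1)

for i in range(0, ITERATIONS):
    # Accumulate the gradients of all four examples before updating
    DerW1 = np.zeros_like(W1)
    DerW2 = np.zeros_like(W2)
    DerB1 = np.zeros_like(B1)
    DerB2 = np.zeros_like(B2)
    J = 0.0

    for exampleIndex in range(len(INPUTS)):
        # Forward pass (identical to the original code)
        A0 = np.transpose(INPUTS[exampleIndex:exampleIndex+1])
        Z1 = np.dot(np.transpose(W1), A0) + B1
        A1 = activation(Z1)
        Z2 = np.dot(np.transpose(W2), A1) + B2
        O = activation(Z2)

        T = np.transpose(TARGET[exampleIndex:exampleIndex+1])
        J += cost(O, T)

        # Backward pass, summed over the examples
        D2 = (O - T) * activation(Z2, True)
        D1 = np.dot(W2, D2) * activation(Z1, True)
        DerW2 += np.dot(A1, np.transpose(D2))
        DerW1 += np.dot(A0, np.transpose(D1))
        DerB2 += D2
        DerB1 += D1

    if (i % 10000 == 0):
        print("i = {}, averaged J = {}".format(i, J / len(INPUTS)))

    # One averaged update per pass over the whole training set
    W1 -= LEARNING_RATE * DerW1 / len(INPUTS)
    B1 -= LEARNING_RATE * DerB1 / len(INPUTS)
    W2 -= LEARNING_RATE * DerW2 / len(INPUTS)
    B2 -= LEARNING_RATE * DerB2 / len(INPUTS)

# Show the prediction for every input after training
for exampleIndex in range(len(INPUTS)):
    A0 = np.transpose(INPUTS[exampleIndex:exampleIndex+1])
    A1 = activation(np.dot(np.transpose(W1), A0) + B1)
    O = activation(np.dot(np.transpose(W2), A1) + B2)
    print("I = {}, O = {}, T = {}".format(A0.ravel(), O.ravel(), TARGET[exampleIndex]))

Because the weights are only changed once per pass over the whole training set, the recorded cost is one value per pass and no longer jumps between the four examples.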