Error of a simple neural network first decreases, then increases

Date: 2019-06-15 04:09:23

Tags: python neural-network backpropagation

As a fun project, I wrote a neural network in Python and am hoping to get it working myself, rather than using an existing package that would make the job easier/better.

At this point, I am only adjusting the bias of the output node via backpropagation. The adjustment looks like this:

bias -= (true value - output value) * (output node delta) * (learning rate)

This is done in the last line of the backprop function.
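For comparison, the textbook delta rule for a sigmoid output node under squared error evaluates the sigmoid derivative at the output value itself and *adds* the scaled correction to the bias. A minimal sketch with hypothetical values (variable names mirror the code below; this is a reference point, not the code's actual behaviour):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

lrate = 0.01
oo = np.array([[0.8]])        # output node value after the sigmoid (hypothetical)
truth = np.array([[0.5]])     # true value for this sample (hypothetical)

err = truth - oo              # (true value - output value)
odelta = err * oo * (1 - oo)  # sigmoid derivative taken at the output, scaled by the error
obias = np.array([[0.3]])     # hypothetical starting bias
obias += odelta * lrate       # gradient descent on squared error adds this term
```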

When run 20 times over a sample of the data, the absolute error first decreases, then increases, and keeps increasing indefinitely, though at a decreasing rate. The error (true value - output value) starts out very negative and increases with each successive iteration.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

#getting sample data
df = pd.read_csv('City_of_Seattle_Staff_Demographics.csv')
df = df.sample(frac=0.1)
df = pd.get_dummies(df)
df = (df - df.min()) / (df.max() - df.min())
df.reset_index(inplace=True)
inputdata = np.array(df.drop(columns=['Hourly Rate', 'index'])) #index by inputs 2d array
outputdata = np.array(df[['Hourly Rate']])    #1 by index 2d array

#initialising variables
inn = len(inputdata[0])    #number of input nodes
hnn = 16    #number of hidden nodes
onn = len(outputdata[0])    #number of output nodes
inodes = np.empty((1, inn))    #value of input nodes
hi = np.empty((1, hnn))    #value of hidden nodes before logistic function is applied
oi = np.empty((1, onn))    #value of output nodes before logistic function is applied
ho = np.empty((1, hnn))    #value of hidden nodes after logistic function is applied
oo = np.empty((1, onn))    #value of output nodes after logistic function is applied
hdelta = np.empty((1, hnn))    #deltas of each node, given by delta(ho)
odelta = np.empty((1, onn))    #deltas of each node, given by delta(oo)
hbias = np.random.rand(1, hnn)    #node biases
obias = np.random.rand(1, onn)    #node biases
syn1 = np.random.rand(inn, hnn)    #synapse layers
syn2 = np.random.rand(hnn, onn)    #synapse layers

lrate = 0.01
error = 0.0

def sigmoid (x):
    return 1/(1+np.exp(-x))

def delta (x):
    return x*(1-x)

def forwardprop (index):
    global inodes, hi, oi, ho, oo, hbias, obias, syn1, syn2
    inodes = np.array([inputdata[index]])
    hi = np.matmul(inodes, syn1) + hbias
    ho = sigmoid(hi)
    oi = np.matmul(ho, syn2) + obias
    oo = sigmoid(oi)

def backprop (index):
    #backprop is only trying to adjust the output node bias
    global inodes, hi, oi, ho, oo, hbias, obias, syn1, syn2
    oo = np.array([outputdata[index]]) - oo
    odelta = delta(oo)
    hdelta = delta(ho)
    obias -= oo * odelta * lrate

def errorcalc ():
    global onn, oo, error
    for x in range(onn):
        error += oo[0][x]

def fullprop (index):
    forwardprop(index)
    backprop(index)
    errorcalc()

def fulliter ():    #iterate over whole sample
    global error
    error = 0
    for x in range(len(inputdata)):
        fullprop(x)
    print('error: ', error)

for x in range(20):
    fulliter()

I expected the absolute value of the error to decrease, e.g.: -724, -267, -84, -21, 12, -10, 9, -7, ... Instead it goes like this: -724, -267, -84, -21, 33, 75, 114, 162, 227, 278, 316 ... 376, 378, 379, 380
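A side note on reading those totals: errorcalc() accumulates the signed per-sample errors (true value - output value), so positive and negative terms cancel and the printed number is not actually an absolute error. A minimal illustration with hypothetical values:

```python
import numpy as np

errors = np.array([-0.9, 0.4, 0.6])    # hypothetical per-sample (truth - output) values

signed_total = errors.sum()            # what errorcalc() accumulates; terms cancel
absolute_total = np.abs(errors).sum()  # magnitude of the error over the sample

# signed_total comes out near 0.1 even though each error is large,
# while absolute_total is 1.9
```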

0 Answers