Implementing a Regression/Classification Neural Network in Python

Date: 2017-12-04 03:25:40

Tags: python neural-network logistic-regression

Background

I am trying to learn neural networks in Python, and I have written an NN-based implementation of logistic regression.

Here is the code:

import numpy as np

# Input array
X = np.array([[1, 0, 1, 0], [1, 0, 1, 1], [0, 1, 0, 1]])

# Output
y = np.array([[1], [1], [0]])


# Sigmoid Function
def sigmoid(x):
    return 1 / (1 + np.exp(-x))


# Derivative of the sigmoid, written in terms of its output:
# if s = sigmoid(x), then d/dx sigmoid(x) = s * (1 - s)
def ddx_sigmoid(s):
    return s * (1 - s)


#####   Initialization - BEGIN  #####


# Setting training iterations
iterations_max = 500000

# Learning Rate
alpha = 0.5

# Number of Neurons in Input Layer = Number of Features in the data set
inputlayer_neurons = X.shape[1]

# Number of Neurons in the Hidden Layer
hiddenlayer_neurons = 3  # number of neurons in the hidden layer

# Number of Neurons at the Output Layer
output_neurons = 1  # number of neurons at output layer

# weight and bias initialization
wh = np.random.uniform(size=(inputlayer_neurons, hiddenlayer_neurons))
bh = np.random.uniform(size=(1, hiddenlayer_neurons))
wout = np.random.uniform(size=(hiddenlayer_neurons, output_neurons))
bout = np.random.uniform(size=(1, output_neurons))

#####   Initialization - END  #####

# Printing of shapes

print "\nShape X: ", X.shape, "\nShape Y: ", y.shape
print "\nShape WH: ", wh.shape, "\nShape BH: ", bh.shape, "\nShape Wout: ", wout.shape, "\nShape Bout: ", bout.shape

# Printing of Values
print "\nwh:\n", wh, "\n\nbh: ", bh, "\n\nwout:\n", wout, "\n\nbout: ", bout

#####   TRAINING - BEGIN  #####
for i in range(iterations_max):
    #####   Forward Propagation - BEGIN   #####

    # Input to Hidden Layer = (Dot Product of Input Layer and Weights) + Bias
    hidden_layer_input = (np.dot(X, wh)) + bh

    # Activation of input to Hidden Layer by using Sigmoid Function
    hiddenlayer_activations = sigmoid(hidden_layer_input)

    # Input to Output Layer = (Dot Product of Hidden Layer Activations and Weights) + Bias
    output_layer_input = np.dot(hiddenlayer_activations, wout) + bout

    # Activation of input to Output Layer by using Sigmoid Function
    output = sigmoid(output_layer_input)

    #####   Forward Propagation - END #####

    #####   Backward Propagation - BEGIN   #####

    E = y - output

    slope_output_layer = ddx_sigmoid(output)
    slope_hidden_layer = ddx_sigmoid(hiddenlayer_activations)

    d_output = E * slope_output_layer

    Error_at_hidden_layer = d_output.dot(wout.T)
    d_hiddenlayer = Error_at_hidden_layer * slope_hidden_layer

    wout += hiddenlayer_activations.T.dot(d_output) * alpha
    bout += np.sum(d_output, axis=0, keepdims=True) * alpha

    wh += X.T.dot(d_hiddenlayer) * alpha
    bh += np.sum(d_hiddenlayer, axis=0, keepdims=True) * alpha

    #####   Backward Propagation - END   #####
#####   TRAINING - END  #####

print "\nOutput is:\n", output

This code works quite well when the output is binary (0 or 1). I believe that is because of the sigmoid function I am using.
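For example, once training finishes, I can turn the sigmoid activations into hard 0/1 labels by thresholding at 0.5. A minimal sketch, using the output and y arrays from the script above:

# Threshold the sigmoid activations at 0.5 to get hard class labels.
predictions = (output > 0.5).astype(int)
print("Predictions:\n", predictions)
print("Training accuracy:", np.mean(predictions == y))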

Problem

Now I want to extend this code so that it can also handle linear regression.

The scikit-learn library ships with several preloaded datasets that can be used for classification and regression.

I want my NN to train and test on the diabetes dataset.

With that in mind, I modified my code as follows:

import numpy as np
from sklearn import datasets

# Sigmoid Function
def sigmoid(x):
    return 1 / (1 + np.exp(-x))


# Derivative of the sigmoid, written in terms of its output:
# if s = sigmoid(x), then d/dx sigmoid(x) = s * (1 - s)
def ddx_sigmoid(s):
    return s * (1 - s)

# Load Data
def load_data():
    diabetes_data = datasets.load_diabetes()
    return diabetes_data

input_data = load_data()

X = input_data.data

# Reshape Output
y = input_data.target
y = y.reshape(len(y), 1)

iterations_max = 1000

# Learning Rate
alpha = 0.5

# Number of Neurons in Input Layer = Number of Features in the data set
inputlayer_neurons = X.shape[1]

# Number of Neurons in the Hidden Layer
hiddenlayer_neurons = 5  # number of neurons in the hidden layer

# Number of Neurons at the Output Layer
output_neurons = 3  # note: y has a single column, so y - output below only works via broadcasting

# weight and bias initialization
wh = np.random.uniform(size=(inputlayer_neurons, hiddenlayer_neurons))
bh = np.random.uniform(size=(1, hiddenlayer_neurons))
wout = np.random.uniform(size=(hiddenlayer_neurons, output_neurons))
bout = np.random.uniform(size=(1, output_neurons))


#####   TRAINING - BEGIN  #####
for i in range(iterations_max):
    #####   Forward Propagation - BEGIN   #####

    # Input to Hidden Layer = (Dot Product of Input Layer and Weights) + Bias
    hidden_layer_input = (np.dot(X, wh)) + bh

    # Activation of input to Hidden Layer by using Sigmoid Function
    hiddenlayer_activations = sigmoid(hidden_layer_input)

    # Input to Output Layer = (Dot Product of Hidden Layer Activations and Weights) + Bias
    output_layer_input = np.dot(hiddenlayer_activations, wout) + bout

    # Activation of input to Output Layer by using Sigmoid Function
    output = sigmoid(output_layer_input)

    #####   Forward Propagation - END #####

    #####   Backward Propagation - BEGIN   #####

    E = y - output

    slope_output_layer = ddx_sigmoid(output)
    slope_hidden_layer = ddx_sigmoid(hiddenlayer_activations)

    d_output = E * slope_output_layer

    Error_at_hidden_layer = d_output.dot(wout.T)
    d_hiddenlayer = Error_at_hidden_layer * slope_hidden_layer

    wout += hiddenlayer_activations.T.dot(d_output) * alpha
    bout += np.sum(d_output, axis=0, keepdims=True) * alpha

    wh += X.T.dot(d_hiddenlayer) * alpha
    bh += np.sum(d_hiddenlayer, axis=0, keepdims=True) * alpha

    #####   Backward Propagation - END   #####
#####   TRAINING - END  #####

print "\nOutput is:\n", output

The output of this code is:

Output is:
[[ 1.  1.  1.]
 [ 1.  1.  1.]
 [ 1.  1.  1.]
 ..., 
 [ 1.  1.  1.]
 [ 1.  1.  1.]
 [ 1.  1.  1.]]

Obviously, I have messed something up in the fundamentals.

Is this because I am using the sigmoid function for both the hidden and the output layer?

What kind of function should I use instead, so that I get valid output and can train my NN effectively?

Efforts So Far

I have tried the tanh and softplus functions as activations for both layers, without any success.

Can someone help?

I have tried Googling, but the explanations there are very complicated.

Help!

1 Answer:

Answer 0 (score: 0)

You should try removing the sigmoid function from the output layer.

For linear regression the output range can be large, whereas the sigmoid and tanh functions produce outputs only in (0, 1) and (-1, 1) respectively, which makes minimizing the error function impossible.
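Here is a minimal sketch of that change, adapted from the NumPy network in your question: the hidden layer keeps its sigmoid, but the output layer is linear (identity activation), so under a mean-squared-error loss its delta is just the averaged error. The single output neuron, the small learning rate, the iteration count, and the zero-initialized biases are my choices, not values from your code:

import numpy as np
from sklearn import datasets

diabetes = datasets.load_diabetes()
X = diabetes.data                      # 442 samples, 10 centered and scaled features
y = diabetes.target.reshape(-1, 1)     # targets roughly in the range 25-346

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def ddx_sigmoid(s):
    # expects the activation s = sigmoid(x), not the raw input x
    return s * (1 - s)

rng = np.random.RandomState(0)
hiddenlayer_neurons = 5
wh = rng.uniform(size=(X.shape[1], hiddenlayer_neurons))
bh = np.zeros((1, hiddenlayer_neurons))
wout = rng.uniform(size=(hiddenlayer_neurons, 1))  # one neuron per regression target
bout = np.zeros((1, 1))

alpha = 0.001  # assumed: a rate of 0.5 diverges with targets this large

for i in range(20000):
    # Forward pass: sigmoid hidden layer, *linear* output layer.
    hiddenlayer_activations = sigmoid(np.dot(X, wh) + bh)
    output = np.dot(hiddenlayer_activations, wout) + bout  # no sigmoid here

    # Backward pass for MSE: the output delta is just the averaged error.
    d_output = (y - output) / len(X)
    d_hiddenlayer = d_output.dot(wout.T) * ddx_sigmoid(hiddenlayer_activations)

    wout += hiddenlayer_activations.T.dot(d_output) * alpha
    bout += np.sum(d_output, axis=0, keepdims=True) * alpha
    wh += X.T.dot(d_hiddenlayer) * alpha
    bh += np.sum(d_hiddenlayer, axis=0, keepdims=True) * alpha

print("Final MSE:", np.mean((y - output) ** 2))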

======= UPDATE =======

I tried implementing it in TensorFlow; here is the core part:

w = tf.Variable(tf.truncated_normal([features, FLAGS.hidden_unit], stddev=0.35))
b = tf.Variable(tf.zeros([FLAGS.hidden_unit]))

# right way: keep the output linear so it can cover the full target range
y = tf.reduce_sum(tf.matmul(x, w) + b, 1)

# wrong way: sigmoid would squash the output into (0, 1)
# y = tf.sigmoid(tf.matmul(x, w) + b)

mse_loss = tf.reduce_sum(tf.pow(y - y_, 2)) / 2
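In general, for regression keep the output layer linear and reserve nonlinearities for the hidden layers: a sigmoid (or tanh) head can never reach targets outside its range, so the error stays large while its gradient saturates. The learning rate of 0.5 in your code is also far too aggressive for targets in the hundreds; something around 1e-3 (an assumption to be tuned, as in the sketch above) is a safer starting point.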