I'm trying to train this neural network to make predictions on some data. I tried it on a small dataset (around 100 records) and it worked like a charm. Then I plugged in the new dataset and found that the NN converges to 0 output, and the error converges roughly to the ratio between the number of positive examples and the total number of examples.
My dataset consists of yes/no features (1.0/0.0), and the ground truth is yes/no as well.
My hypotheses:
1) There is a local minimum with output 0 (but I tried many learning rates and initial weight values, and it always seems to converge there).
2) My weight update is wrong (but it looks fine to me).
3) It is just an output scaling problem. I tried scaling the output (i.e. output/max(output) and output/mean(output)), but the results are not good, as you can see in the code provided below. Should I scale it differently? Softmax? (A minimal softmax sketch follows this list for reference.)
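For reference, here is a minimal, numerically stable softmax in NumPy. This is my own illustration, not code from the post, and note that it only makes sense with at least two class scores per sample: softmax over a single output column is identically 1.

import numpy as np

def softmax(z):
    # Shift by the per-row max before exponentiating for numerical stability;
    # softmax is invariant to this shift.
    e = np.exp(z - np.max(z, axis=1, keepdims=True))
    # Normalize each row so its entries sum to 1.
    return e / np.sum(e, axis=1, keepdims=True)

scores = np.array([[2.0, 1.0],
                   [0.5, 0.5],
                   [-1.0, 3.0]])
print(softmax(scores))  # each row sums to 1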
Here is the code:
import pandas as pd
import numpy as np
import pickle
import random
from collections import defaultdict

alpha = 0.1
N_LAYERS = 10
N_ITER = 10
#N_FEATURES = 8
INIT_SCALE = 1.0

train = pd.read_csv("./data/prediction.csv")

y = train['y_true'].as_matrix()
y = np.vstack(y).astype(float)
ytest = y[18000:]
y = y[:18000]

X = train.drop(['y_true'], axis = 1).as_matrix()
Xtest = X[18000:].astype(float)
X = X[:18000]

def tanh(x, deriv=False):
    if deriv:
        return (1 - np.tanh(x)**2) * alpha
    return np.tanh(x)

def sigmoid(x, deriv=False):
    if deriv:
        # assumes x is already sigmoid(x)
        return x * (1 - x)
    return 1 / (1 + np.exp(-x))

def relu(x, deriv=False):
    # leaky ReLU with slope 0.01 on the negative side
    if deriv:
        return 0.01 + 0.99 * (x > 0)
    return 0.01 * x + 0.99 * x * (x > 0)

np.random.seed()

# weight matrices, initialized uniformly in [-INIT_SCALE/2, INIT_SCALE/2]
syn = defaultdict(np.array)
for i in range(N_LAYERS - 1):
    syn[i] = INIT_SCALE * np.random.random((len(X[0]), len(X[0]))) - INIT_SCALE/2
syn[N_LAYERS - 1] = INIT_SCALE * np.random.random((len(X[0]), 1)) - INIT_SCALE/2

l = defaultdict(np.array)
delta = defaultdict(np.array)

for j in xrange(N_ITER):
    # forward pass
    l[0] = X
    for i in range(1, N_LAYERS + 1):
        l[i] = relu(np.dot(l[i-1], syn[i-1]))

    error = (y - l[N_LAYERS])
    e = np.mean(np.abs(error))
    if (j % 1) == 0:
        print "\nIteration " + str(j) + " of " + str(N_ITER)
        print "Error: " + str(e)

    # backward pass
    delta[N_LAYERS] = error * relu(l[N_LAYERS], deriv=True) * alpha
    for i in range(N_LAYERS - 1, 0, -1):
        error = delta[i+1].dot(syn[i].T)
        delta[i] = error * relu(l[i], deriv=True) * alpha

    # weight update
    for i in range(N_LAYERS):
        syn[i] += l[i].T.dot(delta[i+1])

pickle.dump(syn, open('neural_weights.pkl', 'wb'))

# TESTING with f1-measure
# RECALL = TRUE POSITIVES / (TRUE POSITIVES + FALSE NEGATIVES)
# PRECISION = TRUE POSITIVES / (TRUE POSITIVES + FALSE POSITIVES)

l[0] = Xtest
for i in range(1, N_LAYERS + 1):
    l[i] = relu(np.dot(l[i-1], syn[i-1]))

out = l[N_LAYERS] / max(l[N_LAYERS])

tp = float(0)
fp = float(0)
fn = float(0)
tn = float(0)

for i in l[N_LAYERS][:50]:
    print i

for i in range(len(ytest)):
    if out[i] > 0.5 and ytest[i] == 1:
        tp += 1
    if out[i] <= 0.5 and ytest[i] == 1:
        fn += 1
    if out[i] > 0.5 and ytest[i] == 0:
        fp += 1
    if out[i] <= 0.5 and ytest[i] == 0:
        tn += 1

print "tp: " + str(tp)
print "fp: " + str(fp)
print "tn: " + str(tn)
print "fn: " + str(fn)
print "\nprecision: " + str(tp/(tp + fp))
print "recall: " + str(tp/(tp + fn))
f1 = 2 * tp / (2 * tp + fn + fp)
print "\nf1-measure:" + str(f1)
Here is the output:
Iteration 0 of 10
Error: 0.222500767998
Iteration 1 of 10
Error: 0.222500771157
Iteration 2 of 10
Error: 0.222500774321
Iteration 3 of 10
Error: 0.22250077749
Iteration 4 of 10
Error: 0.222500780663
Iteration 5 of 10
Error: 0.222500783841
Iteration 6 of 10
Error: 0.222500787024
Iteration 7 of 10
Error: 0.222500790212
Iteration 8 of 10
Error: 0.222500793405
Iteration 9 of 10
Error: 0.222500796602
[ 0.]
[ 0.]
[ 5.58610895e-06]
[ 0.]
[ 0.]
[ 0.]
[ 0.]
[ 0.]
[ 4.62182626e-06]
[ 0.]
[ 0.]
[ 0.]
[ 0.]
[ 5.58610895e-06]
[ 0.]
[ 0.]
[ 0.]
[ 0.]
[ 4.62182626e-06]
[ 0.]
[ 0.]
[ 5.04501079e-10]
[ 5.58610895e-06]
[ 0.]
[ 0.]
[ 0.]
[ 0.]
[ 0.]
[ 0.]
[ 0.]
[ 0.]
[ 0.]
[ 0.]
[ 0.]
[ 5.04501079e-10]
[ 0.]
[ 0.]
[ 4.62182626e-06]
[ 0.]
[ 5.58610895e-06]
[ 0.]
[ 0.]
[ 0.]
[ 5.58610895e-06]
[ 0.]
[ 0.]
[ 0.]
[ 5.58610895e-06]
[ 0.]
[ 1.31432294e-05]
tp: 28.0
fp: 119.0
tn: 5537.0
fn: 1550.0
precision: 0.190476190476
recall: 0.0177439797212
f1-measure:0.0324637681159
Answer 0 (score: 0):
Given your model, it is unlikely that your network needs 10 layers to converge.
Try a 3-layer network with more hidden nodes. For most feedforward problems, you only need 1 hidden layer to converge effectively.
Deep NNs are much harder to train than shallow ones.
As others have said, your learning rate should be much smaller; [.01, .3] is a decent range. Additionally, the number of iterations needs to be much larger.
10 layers is way too many. A minimal sketch of these suggestions is shown below.
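To make that concrete, here is a sketch of a single-hidden-layer version of the same kind of training loop. This is illustrative only, not the poster's code: the hidden size (32), learning rate (0.05), iteration count (1000), and the toy random data standing in for X and y are all assumptions chosen for demonstration.

import numpy as np

np.random.seed(0)

# Assumed, illustrative hyperparameters -- not from the original post.
N_HIDDEN = 32      # more hidden nodes, single hidden layer
ALPHA = 0.05       # smaller learning rate, inside the suggested [.01, .3]
N_ITER = 1000      # many more iterations

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy stand-ins for the real X / y (binary features and labels).
X = np.random.randint(0, 2, size=(200, 8)).astype(float)
y = np.random.randint(0, 2, size=(200, 1)).astype(float)

# One hidden layer: input -> hidden -> single sigmoid output.
w0 = np.random.uniform(-0.5, 0.5, (X.shape[1], N_HIDDEN))
w1 = np.random.uniform(-0.5, 0.5, (N_HIDDEN, 1))

for j in range(N_ITER):
    # forward pass
    h = sigmoid(np.dot(X, w0))
    out = sigmoid(np.dot(h, w1))

    # backward pass: squared-error gradient, sigmoid derivative is s*(1-s)
    err = y - out
    d_out = err * out * (1 - out)
    d_h = d_out.dot(w1.T) * h * (1 - h)

    # weight update
    w1 += ALPHA * h.T.dot(d_out)
    w0 += ALPHA * X.T.dot(d_h)

print(np.mean(np.abs(y - out)))  # mean absolute error after training

With real data you would swap in the actual X and y arrays and tune N_HIDDEN, ALPHA, and N_ITER; the point is the shape of the network (one hidden layer), not these particular numbers.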