I'm starting to play with Keras and simple neural networks. My question is about correctness and about the next steps for improving accuracy.
Consider the dataset at http://archive.ics.uci.edu/ml/datasets/default+of+credit+card+clients. It has 30K examples and 24 features, and the goal is to predict whether a client defaults. I created a simple network with 24 inputs in the input layer, 16 hidden units, and a final softmax layer. The loss is binary_crossentropy. The test set is 10% and validation_split is 0.2.
One input row looks like this:
1,20000,2,2,1,24,2,2,-1,-1,-2,-2,3913,3102,689,0,0,0,0,689,0,0,0,0,1
The code is:
import pandas as pd
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Dropout
from keras.optimizers import SGD
from sklearn.cross_validation import train_test_split
from keras.utils import np_utils, generic_utils
# load training data into a pandas DataFrame; header=1 skips the first line
train = pd.read_csv('./data/defaulCC.csv', header=1)
# split X, y
X = train.iloc[:,:-1].values
y = train.iloc[:,-1:].values
dimof_input = X.shape[1]
dimof_output = len(set(y.flat))
print('dimof_input: ', dimof_input)
print('dimof_output: ', dimof_output)
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.9, random_state=0)
y_train, y_test = [np_utils.to_categorical(x) for x in (y_train, y_test)]
print X_train.shape, X_test.shape, y_train.shape, y_test.shape
# Set constants
batch_size = 128
dimof_middle = 16
dropout = 0.2
countof_epoch = 100
verbose = 1
optimizer='sgd'
print('batch_size: ', batch_size)
print('dimof_middle: ', dimof_middle)
print('dropout: ', dropout)
print('countof_epoch: ', countof_epoch)
print('verbose: ', verbose)
print ('optimizer: ', optimizer)
# this network has dimof_input units in the input layer,
# dimof_output in the output layer,
# and dimof_middle in the hidden layer
model = Sequential()
model.add(Dense(dimof_middle, input_dim=dimof_input, activation='relu'))
model.add(Dropout(dropout))
#model.add(Dense(dimof_middle, activation='relu'))
#model.add(Dropout(dropout))
model.add(Dense(dimof_output, activation='softmax'))
sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='binary_crossentropy', optimizer=sgd, metrics=['accuracy'])
# Train
model.fit(
X_train, y_train,
validation_split=0.2,
batch_size=batch_size, nb_epoch=countof_epoch, verbose=verbose)
# Evaluate
loss, accuracy = model.evaluate(X_test, y_test, verbose=verbose)
print('loss: ', loss)
print('accuracy: ', accuracy)
print()
The output:
Using Theano backend.
('dimof_input: ', 24)
('dimof_output: ', 2)
(27000, 24) (3000, 24) (27000, 2) (3000, 2)
('batch_size: ', 128)
('dimof_middle: ', 16)
('dropout: ', 0.2)
('countof_epoch: ', 100)
('verbose: ', 1)
('optimizer: ', 'sgd')
Train on 27000 samples, validate on 5400 samples
Epoch 1/100
27000/27000 [==============================] - 0s - loss: 3.6371 - acc: 0.7727 - val_loss: 3.6157 - val_acc: 0.7744
Epoch 2/100
27000/27000 [==============================] - 0s - loss: 3.5866 - acc: 0.7757 - val_loss: 3.6157 - val_acc: 0.7744
Epoch 3/100
27000/27000 [==============================] - 0s - loss: 3.6024 - acc: 0.7750 - val_loss: 3.6157 - val_acc: 0.7744
Epoch 4/100
27000/27000 [==============================] - 0s - loss: 3.5859 - acc: 0.7758 - val_loss: 3.6157 - val_acc: 0.7744
Epoch 5/100
27000/27000 [==============================] - 0s - loss: 3.5854 - acc: 0.7761 - val_loss: 3.6157 - val_acc: 0.7744
[... epochs 6-99 elided: loss stays essentially flat (roughly 3.556-3.590) with acc around 0.778, while val_loss (3.6157) and val_acc (0.7744) never change ...]
Epoch 100/100
27000/27000 [==============================] - 0s - loss: 3.5612 - acc: 0.7778 - val_loss: 3.6157 - val_acc: 0.7744
3000/3000 [==============================] - 0s
('loss: ', 3.4197844168345135)
('accuracy: ', 0.78666666682561237)
()
Answer (score: 1)
The loss is not decreasing, and that is the problem. There are several improvements you can make:
Normalize your data. In your specific example the features all have different ranges, so normalizing them is practically a prerequisite for the network to learn successfully. To standardize, compute each feature's mean and standard deviation, subtract the mean, and divide by the standard deviation; this keeps each feature in an approximately [-1, 1] range. Min-max normalization should also work.
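scikit-learn already implements this as StandardScaler. A minimal sketch (fit the scaler on the training split only, so test-set statistics do not leak into training):
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)  # learn per-feature mean/std on train, then scale
X_test = scaler.transform(X_test)        # reuse the same mean/std for the test set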
Once the data is normalized, you can increase the model's capacity by adding more layers. You can also experiment with different nonlinearities, such as sigmoid.
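As an illustration only, a deeper variant of the model from the question might look like this (it reuses the dimof_input, dimof_middle, dimof_output and dropout variables defined above; the exact depth is something to experiment with, not a known-good recipe):
model = Sequential()
model.add(Dense(dimof_middle, input_dim=dimof_input, activation='relu'))
model.add(Dropout(dropout))
model.add(Dense(dimof_middle, activation='relu'))  # extra hidden layer for more capacity
model.add(Dropout(dropout))
model.add(Dense(dimof_output, activation='softmax'))
Swapping 'relu' for 'sigmoid' or 'tanh' in the hidden layers is then a one-word change if you want to compare nonlinearities.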
Increase regularization. You are using Dropout with p=0.2, and you could raise it to p=0.5. Alternatively, you could use batch normalization.
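A sketch of both options combined (BatchNormalization is imported from keras.layers.normalization in the Keras 1.x used here; newer versions expose it from keras.layers):
from keras.layers.normalization import BatchNormalization
model = Sequential()
model.add(Dense(dimof_middle, input_dim=dimof_input, activation='relu'))
model.add(BatchNormalization())  # normalize activations across each mini-batch
model.add(Dropout(0.5))          # stronger dropout than the original 0.2
model.add(Dense(dimof_output, activation='softmax'))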
This answer is generic, but the advice should apply to many kinds of data.