Keras loss is constant

Time: 2018-03-02 12:19:37

Tags: python tensorflow neural-network keras classification

My dataset has 378 rows and 15 columns. Each row is a set of numeric values, and I need to classify the rows into 3 classes.

Code:

import os
os.environ['TF_CPP_MIN_LOG_LEVEL']='2'
import tensorflow as tf
import numpy
import pandas as pd
from keras.preprocessing.text import Tokenizer
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Input
from keras.models import Model
from keras import optimizers
tokenizer = Tokenizer()
def load_data_from_arrays(strings, labels, train_test_split=0.9):
    data_size = len(strings)
    test_size = int(data_size - round(data_size * train_test_split))
    print("Test size: {}".format(test_size))

    print("\nTraining set:")
    x_train = strings[test_size:]
    print("\t - x_train: {}".format(len(x_train)))
    y_train = labels[test_size:]
    print("\t - y_train: {}".format(len(y_train)))

    print("\nTesting set:")
    x_test = strings[:test_size]
    print("\t - x_test: {}".format(len(x_test)))
    y_test = labels[:test_size]
    print("\t - y_test: {}".format(len(y_test)))
    size = data_size - test_size

    return x_train, y_train, x_test, y_test, size, test_size
def create_labels(re_mass, size, num_class):
    train_mass = numpy.zeros([size, num_class])
    for i in range(size):
        for j in range(num_class):
            if j + 1 == re_mass[i]:
                train_mass[i][j] = 1
    return train_mass
hidden_size = 512
sgd = optimizers.SGD(lr=0.001, momentum=0.0, decay=0.0, nesterov=False)
df=pd.read_csv('/home/llirik/PycharmProjects/MegaBastard/Exp.csv',usecols = [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],skiprows = [0],header=None)
d = df.values
l = pd.read_csv('/home/llirik/PycharmProjects/MegaBastard/Pred.csv',usecols = [0] ,header=None)
labels = l.values
data = numpy.float32(d)
labels = numpy.array(l)
X_train, y_train, X_test, y_test, size, test_size = load_data_from_arrays(data, labels, train_test_split=0.5)
print('X_train shape:', X_train.shape)
print('y_train shape:', y_train.shape)
epochs = 2
num_classes = 3
Y_train = create_labels(y_train, size, num_classes)
Y_test = create_labels(y_test, test_size, num_classes)
print('Y_train shape:', Y_train.shape)
inp = Input(shape=(15, ))
hidden_1 = Dense(hidden_size, activation='relu')(inp)
hidden_2 = Dense(hidden_size, activation='relu')(hidden_1)
out = Dense(num_classes, activation='softmax')(hidden_2)

model = Model(inputs=inp, outputs=out)
model.compile(loss='categorical_crossentropy',
              optimizer=sgd,
              metrics=['accuracy'])

print(model.summary())
history = model.fit(X_train, Y_train,
                    batch_size=1,
                    epochs=epochs,
                    verbose=1,
                    validation_split=0.1,
                )
score = model.evaluate(X_test, Y_test,
                       batch_size=1, verbose=1)

print()
print(u'Test score: {}'.format(score[0]))
print(u'Model accuracy: {}'.format(score[1]))

I don't know exactly how validation_split works, but I don't think the problem lies there.
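For reference, Keras' validation_split slices the held-out fraction from the end of the training arrays, before any shuffling. The split sizes that appear later in the log can be reproduced with a quick calculation (the 188 below is the X_train row count reported in the output):

```python
# Keras computes the split point as int(len(x) * (1 - validation_split))
# and takes the *last* fraction of the arrays as validation data.
n_samples = 188                         # rows in X_train, from the log
validation_split = 0.1
split_at = int(n_samples * (1.0 - validation_split))
print(split_at, n_samples - split_at)   # 169 train, 19 validation
```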

Output

Test size: 189

Training set:
     - x_train: 188
     - y_train: 189

Testing set:
     - x_test: 189
     - y_test: 189
X_train shape: (188, 15)
y_train shape: (189, 1)
Y_train shape: (188, 3)
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 15)                0         
_________________________________________________________________
dense_1 (Dense)              (None, 512)               8192      
_________________________________________________________________
dense_2 (Dense)              (None, 512)               262656    
_________________________________________________________________
dense_3 (Dense)              (None, 3)                 1539      
=================================================================
Total params: 272,387
Trainable params: 272,387
Non-trainable params: 0
None
Train on 169 samples, validate on 19 samples
Epoch 1/2

  1/169 [..............................] - ETA: 40s - loss: 0.0000e+00 - acc: 0.0000e+00
169/169 [==============================] - 2s 10ms/step - loss: 0.0000e+00 - acc: 0.4911 - val_loss: 0.0000e+00 - val_acc: 0.3684
Epoch 2/2

  1/169 [..............................] - ETA: 1s - loss: 0.0000e+00 - acc: 1.0000
169/169 [==============================] - 1s 8ms/step - loss: 0.0000e+00 - acc: 0.4911 - val_loss: 0.0000e+00 - val_acc: 0.3684

  1/189 [..............................] - ETA: 0s
189/189 [==============================] - 0s 3ms/step

Test score: 10.745396931966146
Model accuracy: 0.14285714285714285

Question

For some reason the network is not training — I suspect this because the loss never changes. Why doesn't it change? The network doesn't seem particularly complex.
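One plausible explanation for the flat zero loss (an assumption — the post does not show the actual label values in Pred.csv): create_labels only sets a 1 when a label equals j + 1, i.e. it expects the classes to be exactly 1, 2, 3. Any other value (for example 0-based labels 0, 1, 2) silently leaves that row of Y all zeros, and categorical cross-entropy against an all-zero target is exactly 0 regardless of the prediction:

```python
import numpy

def create_labels(re_mass, size, num_class):
    # same encoding as in the question: only labels 1..num_class get a 1
    train_mass = numpy.zeros([size, num_class])
    for i in range(size):
        for j in range(num_class):
            if j + 1 == re_mass[i]:
                train_mass[i][j] = 1
    return train_mass

# hypothetical 0-based labels; label 0 never matches any j + 1
labels = numpy.array([0, 1, 2, 0])
Y = create_labels(labels, len(labels), 3)
print(Y.sum(axis=1))   # [0. 1. 1. 0.] -> two rows are all zeros

# cross-entropy is -sum(y * log(p)); with an all-zero target y it is 0,
# so such rows contribute nothing to the loss or to the gradient
p = numpy.full(3, 1.0 / 3.0)    # any prediction at all
loss = -numpy.sum(numpy.zeros(3) * numpy.log(p))
print(loss == 0.0)              # True
```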

1 answer:

Answer 0: (score: 0)

Your training loss appears to be zero from the start, so the optimizer has nothing to work with. Also, given how little training data you have, you are likely to overfit, and the model will not generalize to the validation set.
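A minimal sketch of a more robust label encoding, assuming the labels in Pred.csv really are the integers 1, 2, 3 (not confirmed by the post); shifting them to 0-based before one-hot encoding means no row can silently end up all zeros:

```python
import numpy

labels = numpy.array([1, 2, 3, 1])   # hypothetical label column
Y = numpy.eye(3)[labels - 1]         # shift to 0-based, then one-hot
# keras.utils.to_categorical(labels - 1, num_classes=3) is equivalent
print(Y.sum(axis=1))                 # every row sums to 1.0
```

Before encoding, it is also worth printing numpy.unique(labels) to confirm which values actually occur in the file.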