Multiple classes with Keras

时间:2018-01-30 19:02:51

标签: deep-learning keras

Thanks in advance to anyone who takes the time to answer this. I am learning Keras and have run into a problem: with 3 classes, the test-set accuracy climbs to 0.6667 and then stalls at exactly that number for 50 epochs. The reported accuracy is also higher than it should be. When I had only 2 classes, everything worked fine.

What am I doing wrong here?

import pandas as pd
import numpy as np
import keras.utils

#Create train and test data
def create_Xt_Yt(X, y, percentage=0.8):
    p = int(len(X) * percentage)
    X_train = X[0:p]
    Y_train = y[0:p]

    X_test = X[p:]
    Y_test = y[p:]

    return X_train, X_test, Y_train, Y_test

df = pd.read_csv('data.csv', parse_dates=['Date'])
df.set_index(['Date'], inplace=True)
df.drop(['Volume'],1, inplace=True)
df.dropna(inplace=True)
data = df.loc[:, 'AMD-close'].tolist()

window = 30
forecast = 3
forecast_target_long = 1.015 
forecast_target_short= 0.985

x_holder = []
y_holder = []

for i in range(len(data)):
    try:
        x_class = data[i:i+window]
        y_class = data[i+window+forecast]


        window_last_price = data[i+window]
        forecast_price = y_class

        if forecast_price > (window_last_price*forecast_target_long):
            y_class = [1]
        elif forecast_price < (window_last_price*forecast_target_short):
            y_class = [-1]
        else:
            y_class = [0]

        y_holder.append(y_class) 
        x_holder.append(x_class)

    except Exception as e:
        print(e)
        break 

normalize = [(np.array(i) - np.mean(i)) / np.std(i) for i in x_holder] 
y_holder = keras.utils.to_categorical(y_holder, 3)
x_holder, y_holder = np.array(x_holder), np.array(y_holder)

X_train, X_test, Y_train, Y_test = create_Xt_Yt(x_holder, y_holder)
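(An aside on the labels above: `to_categorical` expects non-negative integer class indices, while the code feeds it `-1`, `0`, and `1`. A minimal NumPy sketch, using made-up labels, of shifting the labels into `{0, 1, 2}` before one-hot encoding:)

```python
import numpy as np

# Hypothetical short/flat/long labels, as produced by the loop above.
labels = [-1, 0, 1, 1, -1, 0]

# Shift into non-negative class indices {0, 1, 2} before one-hot encoding,
# since to_categorical expects labels in the range [0, num_classes).
shifted = np.array(labels) + 1

# One-hot encode with plain NumPy (what to_categorical does internally).
one_hot = np.zeros((len(shifted), 3))
one_hot[np.arange(len(shifted)), shifted] = 1
print(one_hot)
```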

Here is the model:

from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.normalization import BatchNormalization
from keras.optimizers import RMSprop, Adam, SGD, Nadam
from keras.callbacks import ReduceLROnPlateau
from keras import regularizers
from keras import losses

model = Sequential()

model.add(Dense(64, input_dim=window, activity_regularizer=regularizers.l2(0.01)))
model.add(BatchNormalization())
model.add(Activation('relu'))

model.add(Dropout(0.5))
model.add(Dense(16, activity_regularizer=regularizers.l2(0.01)))
model.add(BatchNormalization())
model.add(Activation('relu'))

model.add(Dense(3))
model.add(Activation('sigmoid')) 



reduce_learning_ontop = ReduceLROnPlateau(monitor='val_acc', factor=0.9, patience=25, min_lr=0.000001, verbose=1)
model.compile(Adam(lr=.0001),loss='binary_crossentropy', metrics=['accuracy']) 
myModel = model.fit(X_train, Y_train, batch_size=128, epochs=160, verbose=1, shuffle=True, validation_data=(X_test, Y_test))

1 answer:

Answer 0: (score: 2)

Two things here:

  1. Change the activation:

    model.add(Activation('softmax')) 
    

    sigmoid is designed for binary classification; for multi-class classification, softmax is the standard activation.
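The difference is easy to see with plain NumPy (the logits below are made-up values for one sample with 3 classes):

```python
import numpy as np

# Hypothetical logits for one sample with 3 classes.
logits = np.array([2.0, 1.0, 0.1])

def sigmoid(z):
    # Squashes each logit independently; outputs need not sum to 1.
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    # Subtract the max for numerical stability, then normalize so the
    # outputs form a single probability distribution over the classes.
    e = np.exp(z - np.max(z))
    return e / e.sum()

print(sigmoid(logits))        # three independent "probabilities"
print(softmax(logits))        # one distribution over the 3 classes
print(softmax(logits).sum())  # sums to 1
```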

  2. Change the loss:

    model.compile(
        Adam(lr=.0001),
        loss='categorical_crossentropy', metrics=['accuracy']) 
    

    binary_crossentropy is likewise designed for binary classification. Its multi-class equivalent is categorical_crossentropy.
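The two losses compute different things, which is why mixing them with a 3-class one-hot target distorts the reported accuracy. A sketch with made-up target and prediction values:

```python
import numpy as np

# Hypothetical one-hot target and predicted distribution for 3 classes.
y_true = np.array([0.0, 1.0, 0.0])
y_pred = np.array([0.2, 0.7, 0.1])

# categorical_crossentropy: -sum(y_true * log(y_pred)).
# Only the true class contributes, since the target is one-hot.
cce = -np.sum(y_true * np.log(y_pred))   # ≈ 0.357

# binary_crossentropy averaged over the 3 outputs treats every class as an
# independent yes/no question, which is the wrong model for exclusive classes.
bce = -np.mean(y_true * np.log(y_pred) +
               (1.0 - y_true) * np.log(1.0 - y_pred))  # ≈ 0.228

print(cce)
print(bce)
```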