I'm using a Keras model to classify about 50 classes and I'm getting roughly 90% accuracy, yet the predictions are completely off. I'm also using an SVM on the same data, where I can feed in a string and get very reasonable predictions, so the neural network seems to be the problem.
I've looked at the following links:
Keras model with high accuracy but poor predictions
good accuracy , but bad prediction with keras
but they didn't lead me to a solution.
import pickle

import numpy as np
import pandas as pd
from keras import utils
from keras.layers import Activation, Dense, Dropout
from keras.models import Sequential, load_model
from keras.preprocessing import text
from keras.preprocessing.text import Tokenizer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder


def kerasModel(df):
    # 70/30 train/test split of the dataframe
    train_size = int(len(df) * .7)
    print("Train size: %d" % train_size)
    print("Test size: %d" % (len(df) - train_size))

    train_posts = df['text'][:train_size]
    train_tags = df['classes'][:train_size]
    test_posts = df['text'][train_size:]
    test_tags = df['classes'][train_size:]

    tag = df['classes'].tolist()
    my_tags2 = set(tag)
    print(my_tags2)

    # bag-of-words vectorization of the posts
    max_words = 2000
    tokenize = text.Tokenizer(num_words=max_words, char_level=False)
    tokenize.fit_on_texts(train_posts)  # only fit on train
    x_train = tokenize.texts_to_matrix(train_posts)
    x_test = tokenize.texts_to_matrix(test_posts)

    # integer-encode the labels, then one-hot encode them
    encoder = LabelEncoder()
    encoder.fit(train_tags)
    y_train = encoder.transform(train_tags)
    y_test = encoder.transform(test_tags)

    num_classes = np.max(y_train) + 1
    y_train = utils.to_categorical(y_train, num_classes)
    y_test = utils.to_categorical(y_test, num_classes)

    print('x_train shape:', x_train.shape)
    print('x_test shape:', x_test.shape)
    print('y_train shape:', y_train.shape)
    print('y_test shape:', y_test.shape)

    batch_size = 50
    epochs = 3

    # Build the model
    model = Sequential()
    model.add(Dense(256, input_shape=(max_words,)))
    model.add(Activation('relu'))
    model.add(Dropout(0.5))
    model.add(Dense(num_classes))
    model.add(Activation('softmax'))

    model.compile(loss='categorical_crossentropy',
                  optimizer='adam',
                  metrics=['accuracy'])

    history = model.fit(x_train, y_train,
                        batch_size=batch_size,
                        epochs=epochs,
                        verbose=1,
                        validation_split=0.1)

    score = model.evaluate(x_test, y_test,
                           batch_size=batch_size, verbose=1)
    print('Test accuracy:', score[1])

    # define Tokenizer with Vocab Size
    tokenizer = Tokenizer(num_words=max_words)
    tokenizer.fit_on_texts(train_posts)

    # creates a HDF5 file 'my_model.h5'
    model.save('my_model.h5')

    # Save Tokenizer i.e. Vocabulary
    with open('tokenizer.pickle', 'wb') as handle:
        pickle.dump(tokenizer, handle, protocol=pickle.HIGHEST_PROTOCOL)
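One thing I wondered about is whether I should also be persisting the fitted LabelEncoder, since kerasModel() only saves the model and the tokenizer. This is just a sketch of what I had in mind, not something I have wired in yet; the 'encoder.pickle' file name and the placeholder labels are made up:

import pickle
from sklearn.preprocessing import LabelEncoder

# sketch only: pickle a fitted encoder so the index -> label mapping
# can be reloaded at prediction time (in kerasModel() this would be the
# encoder fitted on train_tags)
encoder = LabelEncoder()
encoder.fit(["classA", "classB"])  # placeholder labels

with open('encoder.pickle', 'wb') as handle:
    pickle.dump(encoder, handle, protocol=pickle.HIGHEST_PROTOCOL)

with open('encoder.pickle', 'rb') as handle:
    encoder = pickle.load(handle)
print(encoder.classes_)  # ['classA' 'classB']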
Then, in the main part:
X = df.text
y = df.classes
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

# Fun Stuff
kerasModel(df)

# load our saved model
model = load_model('my_model.h5')

# load tokenizer
tokenizer = Tokenizer()
with open('tokenizer.pickle', 'rb') as handle:
    tokenizer = pickle.load(handle)

while True:
    inp = input()
    x_data = [inp]
    x_data_series = pd.Series(inp)
    x_tokenized = tokenizer.texts_to_matrix(x_data, mode='tfidf')
    # print(x_tokenized)
    # print(type(x_tokenized))
    prediction = model.predict(np.array(x_tokenized))
    print(prediction)
    predicted_label = my_tags[np.argmax(prediction[0])]
    idxs = np.argsort(prediction)[::-1][:1]
    # print("Predicted label: " + predicted_label)
    for item in idxs[0]:
        print(my_tags[item])
I'm printing the confidences and the outputs, and the predictions are very wrong. The results are repeatable, in the sense that the same input always gives me the same output, but an input X can map to one class now and to a different class after I retrain.
I do feel like my labels are getting mixed up somewhere, but I printed them and they make sense. I feel like I'm close, yet I've been struggling with this for about two hours.
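In case it helps to see what I've been poking at: below is a small standalone check I sketched (not my real pipeline, and the dummy corpus is just a placeholder) to confirm my understanding that texts_to_matrix defaults to mode='binary' while my prediction loop uses mode='tfidf', and that LabelEncoder sorts its classes, so an argmax index has to be mapped back through the same encoder. I'm not sure whether either of these is actually what's biting me.

import numpy as np
from keras.preprocessing.text import Tokenizer
from sklearn.preprocessing import LabelEncoder

# tiny dummy corpus, placeholders only
posts = ["the cat sat", "the dog barked", "a bird sang"]
tags = ["cat", "dog", "bird"]

tok = Tokenizer(num_words=20)
tok.fit_on_texts(posts)

# texts_to_matrix defaults to mode='binary'; 'tfidf' produces different numbers
x_binary = tok.texts_to_matrix(["the cat sat"])               # what my training code uses
x_tfidf = tok.texts_to_matrix(["the cat sat"], mode='tfidf')  # what my prediction loop uses
print(np.allclose(x_binary, x_tfidf))  # False -> the two representations differ

enc = LabelEncoder()
enc.fit(tags)
print(enc.classes_)  # ['bird' 'cat' 'dog'] -- sorted, so index order != my original tag order

# mapping an argmax index back to a name goes through the same encoder
fake_index = 1
print(enc.classes_[fake_index])  # 'cat'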
Thanks, and have a great day!