Predicting labels with a trained model

Time: 2019-01-02 21:02:54

Tags: python keras

My dataset has two columns, post and tag ("some text", "tag"), and I have successfully trained a model that reaches about 98% accuracy. The problem is: how do I now feed it some other text and have the model predict what its tag would be? I have searched for tutorials but have not found any (the few that cover testing do not apply to this example) that show how to test the model on data outside the dataset (for example, text typed in by hand) so that the model can make a prediction. This is what I have so far...

import keras
import numpy as np
import pandas as pd
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Model
from keras.layers import Input, Dense, Dropout, Embedding, LSTM, Flatten
from keras.utils import to_categorical
from keras.callbacks import ModelCheckpoint
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
data = pd.read_csv('dataset3.csv')
print(data.head(10))
print(data.tags.value_counts())
data['target'] = data.tags.astype('category').cat.codes
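# Optional helper (not in the original script, the name tag_lookup is illustrative):
# cat.codes assigns integer codes in the order of cat.categories, so this dict maps
# each predicted code back to its tag string for decoding predictions later.
tag_lookup = dict(enumerate(data.tags.astype('category').cat.categories))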
data['num_words'] = data.post.apply(lambda x : len(x.split()))
# bucket posts by length just to inspect the word-count distribution
data['bins'] = pd.cut(data.num_words, bins=[0, 100, 300, 500, 800, np.inf],
                      labels=['0-100', '100-300', '300-500', '500-800', '>800'])
word_distribution = data.groupby('bins').size().reset_index().rename(columns={0:'counts'})
word_distribution.head()
num_class = len(np.unique(data.tags.values))
y = data['target'].values
MAX_LENGTH = 500
tokenizer = Tokenizer()
tokenizer.fit_on_texts(data.post.values)
post_seq = tokenizer.texts_to_sequences(data.post.values)
post_seq_padded = pad_sequences(post_seq, maxlen=MAX_LENGTH)
X_train, X_test, y_train, y_test = train_test_split(post_seq_padded, y, test_size=0.05)
vocab_size = len(tokenizer.word_index) + 1
inputs = Input(shape=(MAX_LENGTH, ))
embedding_layer = Embedding(vocab_size,
                            128,
                            input_length=MAX_LENGTH)(inputs)
x = Flatten()(embedding_layer)
x = Dense(32, activation='relu')(x)

predictions = Dense(num_class, activation='softmax')(x)
model = Model(inputs=[inputs], outputs=predictions)
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['acc'])

model.summary()
filepath="weights-simple.hdf5"
checkpointer = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
history = model.fit(X_train, to_categorical(y_train), batch_size=64, epochs=5,
                    verbose=1, validation_split=0.25, shuffle=True, callbacks=[checkpointer])
df = pd.DataFrame({'epochs':history.epoch, 'accuracy': history.history['acc'], 'validation_accuracy': history.history['val_acc']})
# plot training vs. validation accuracy per epoch
# (fit_reg is an lmplot/regplot argument, not a pointplot one, so it is dropped here)
g = sns.pointplot(x="epochs", y="accuracy", data=df)
g = sns.pointplot(x="epochs", y="validation_accuracy", data=df, color='green')
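# Optionally reload the best checkpoint saved by ModelCheckpoint before evaluating,
# so the predictions below use the weights with the highest validation accuracy.
model.load_weights(filepath)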
predicted = model.predict(X_test)
predicted = np.argmax(predicted, axis=1)
print(accuracy_score(y_test, predicted))
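To predict the tag of a new piece of text that is not in the dataset, the same tokenizer and padding length used during training have to be applied to the raw string before calling model.predict. A minimal sketch (the variable name new_text is illustrative, and tag_lookup assumes the code-to-tag mapping built above):

# wrap the new text in a list, convert it with the fitted tokenizer, and pad it
new_text = ["some text that was not in the dataset"]
new_seq = tokenizer.texts_to_sequences(new_text)
new_seq_padded = pad_sequences(new_seq, maxlen=MAX_LENGTH)

# the model returns one probability per class; argmax gives the predicted code,
# which tag_lookup turns back into the original tag string
probs = model.predict(new_seq_padded)
predicted_code = np.argmax(probs, axis=1)[0]
print(tag_lookup[predicted_code])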

0 Answers:

No answers