I implemented a small LSTM neural network to predict movie ratings (the `Note` column). But I have an interpretation problem: I can't convert the probabilities that `model.predict` gives me (my `prob_result`) back into the desired labels.
代码:
data = pd.read_csv("data/critics.notes.csv")
data = data[["Comment", "Note"]]
#Comment is text and Note is numeric
data["Comment"] = data["Comment"].apply(lambda x: x.lower())
#Tokenizing
max_fatures = 10000
tokenizer = Tokenizer(num_words=max_fatures, split = ' ')
tokenizer.fit_on_texts(data["Comment"].values)
X = tokenizer.texts_to_sequences(data['Comment'].values)
X = pad_sequences(X)
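To make the two steps above concrete, here is a minimal pure-Python sketch of what `texts_to_sequences` and `pad_sequences` do (simplified assumption: indices are assigned by first appearance, whereas Keras actually ranks words by frequency and caps them at `num_words`):

```python
# Toy corpus standing in for data['Comment']
texts = ["great movie", "a really great movie"]

# Build a word -> integer index map; index 0 is reserved for padding
vocab = {}
for t in texts:
    for w in t.split():
        vocab.setdefault(w, len(vocab) + 1)

# texts_to_sequences: replace each word with its integer index
seqs = [[vocab[w] for w in t.split()] for t in texts]

# pad_sequences: left-pad with zeros to a common length (Keras pads on the left by default)
maxlen = max(len(s) for s in seqs)
padded = [[0] * (maxlen - len(s)) + s for s in seqs]
print(padded)  # [[0, 0, 1, 2], [3, 4, 1, 2]]
```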
#LSTM Model
embed_dim = 128
lstm_out = 196
labels = data['Note'].unique()
num_classes = len(labels)
model = Sequential()
model.add(Embedding(max_fatures, embed_dim,input_length = X.shape[1]))
model.add(SpatialDropout1D(0.4))
model.add(LSTM(lstm_out, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(num_classes,activation='softmax'))
model.compile(loss = 'categorical_crossentropy', optimizer='adam',metrics = ['accuracy'])
model.summary()
batch_size = 8
# X_train and Y_train (one-hot encoded, since the loss is categorical_crossentropy)
# come from a split of X and data['Note'] that is not shown here
model.fit(X_train, Y_train, epochs=20, batch_size=batch_size, verbose=2)
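Because the model is compiled with `categorical_crossentropy`, `Y_train` has to be one-hot encoded, and the column order of that encoding is exactly what `argmax` will index into later. A hedged sketch (hypothetical `notes` values, NumPy only) of building one-hot targets with an explicit, sorted label order:

```python
import numpy as np

# Hypothetical stand-in for data['Note'].values
notes = np.array([5, 1, 3, 1, 5])

# np.unique returns the classes in sorted order: [1, 3, 5]
classes = np.unique(notes)
index = {c: i for i, c in enumerate(classes)}

# One-hot encode: each row gets a 1 in its class's column
Y = np.eye(len(classes))[[index[n] for n in notes]]
print(classes)  # [1 3 5]
print(Y[0])     # [0. 0. 1.]  -> note 5 maps to the last column
```

Keeping `classes` around (or building the mapping once and reusing it) guarantees that training columns and prediction lookups use the same order.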
test_sentences = ["This movie is really pathetic. This is very disappointing. "]
# vectorizing the review with the pre-fitted tokenizer instance
twt = tokenizer.texts_to_sequences(test_sentences)
# padding the sequence to exactly the same shape as the `embedding_2` input
twt = pad_sequences(twt, maxlen=X.shape[1], dtype='int32', value=0)
sentiment = model.predict(twt, batch_size=1, verbose=2)[0]
#predicted_label = sorted(labels)[sentiment.argmax(axis=-1)]
predicted_label = labels[sentiment.argmax(axis=-1)]
print(predicted_label)
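Whether `sorted(labels)` is needed depends entirely on how `Y_train` was encoded: `argmax` returns a column index, and that index must be looked up in the same label order that produced the one-hot columns. Note that `pd.Series.unique()` returns labels in order of first appearance, while `pd.get_dummies` and `np.unique` use sorted order, so the two can disagree. A small sketch of the mismatch (assumed toy values):

```python
# Labels in order of first appearance, as pd.Series.unique() would return them
labels = [5, 1, 3]
# Column order pd.get_dummies / np.unique would have used for the one-hot targets
sorted_labels = sorted(labels)  # [1, 3, 5]

probs = [0.1, 0.7, 0.2]  # a softmax output
idx = max(range(len(probs)), key=probs.__getitem__)  # argmax -> 1

print(labels[idx])         # 1  (wrong if the targets were built in sorted order)
print(sorted_labels[idx])  # 3  (matches sorted one-hot columns)
```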
My question is: should I apply `sorted` to the labels before using the `argmax` index to look up the label? If so, why?
Thanks in advance.