How to determine the binary class predicted by a convolutional neural network in Keras?

Date: 2018-08-25 15:22:36

Tags: python machine-learning keras deep-learning text-classification

I am building a CNN in Keras for sentiment analysis. Everything works fine: the model is trained and ready to go into production.

However, when I try to predict new, unlabeled data with model.predict(), it only outputs the associated probabilities. I tried np.argmax(), but it always outputs 0, even when the prediction should be 1 (on the test set my model reaches about 80% accuracy).
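For reference, my prediction attempt looks roughly like this (a sketch; x_new_seq is just a placeholder for my new, padded data):

preds = model.predict(x_new_seq)       # shape (n_samples, 1): one probability per sample
labels = np.argmax(preds, axis=1)      # what I tried with np.argmax; always gives 0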

Here is the code I use to preprocess the data:

from sklearn.model_selection import train_test_split
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

# Pre-processing data: keep only rows where Sentiment != 3
x = df[df.Sentiment != 3].Headlines
y = df[df.Sentiment != 3].Sentiment

# Splitting into training, validation and testing datasets
x_train, x_validation_and_test, y_train, y_validation_and_test = train_test_split(x, y, test_size=.3,
                                                                                   random_state=SEED)
x_validation, x_test, y_validation, y_test = train_test_split(x_validation_and_test, y_validation_and_test,
                                                              test_size=.5, random_state=SEED)

# Fit the tokenizer on the training texts only
tokenizer = Tokenizer(num_words=NUM_WORDS)
tokenizer.fit_on_texts(x_train)

sequences = tokenizer.texts_to_sequences(x_train)
x_train_seq = pad_sequences(sequences, maxlen=MAXLEN)

sequences_val = tokenizer.texts_to_sequences(x_validation)
x_val_seq = pad_sequences(sequences_val, maxlen=MAXLEN)

sequences_test = tokenizer.texts_to_sequences(x_test)
x_test_seq = pad_sequences(sequences_test, maxlen=MAXLEN)

Here is my model:

from keras.layers import Input, Embedding, Conv1D, GlobalMaxPooling1D, Dense, Dropout, concatenate
from keras.models import Model
from keras import optimizers

MAXLEN = 25
NUM_WORDS = 5000
VECTOR_DIMENSION = 100

tweet_input = Input(shape=(MAXLEN,), dtype='int32')

tweet_encoder = Embedding(NUM_WORDS, VECTOR_DIMENSION, input_length=MAXLEN)(tweet_input)

# Combining n-gram branches (kernel sizes 2, 3 and 4) to improve results
bigram_branch = Conv1D(filters=100, kernel_size=2, padding='valid', activation="relu", strides=1)(tweet_encoder)
bigram_branch = GlobalMaxPooling1D()(bigram_branch)
trigram_branch = Conv1D(filters=100, kernel_size=3, padding='valid', activation="relu", strides=1)(tweet_encoder)
trigram_branch = GlobalMaxPooling1D()(trigram_branch)
fourgram_branch = Conv1D(filters=100, kernel_size=4, padding='valid', activation="relu", strides=1)(tweet_encoder)
fourgram_branch = GlobalMaxPooling1D()(fourgram_branch)
merged = concatenate([bigram_branch, trigram_branch, fourgram_branch], axis=1)

merged = Dense(256, activation="relu")(merged)
merged = Dropout(0.25)(merged)
output = Dense(1, activation="sigmoid")(merged)

optimizer = optimizers.Adam(lr=0.01)

model = Model(inputs=[tweet_input], outputs=[output])
model.compile(loss="binary_crossentropy", optimizer=optimizer, metrics=['accuracy'])
model.summary()

# Training the model
history = model.fit(x_train_seq, y_train, batch_size=32, epochs=5, validation_data=(x_val_seq, y_validation))

I also tried changing the number of units in the final Dense layer from 1 to 2, but I got the following error:

Error when checking target: expected dense_12 to have shape (2,) but got array with shape (1,)

1 Answer:

Answer 0 (score: 3):

You are doing binary classification, so you have a Dense layer with one unit and a sigmoid activation. The sigmoid function outputs a value in the range [0, 1], which corresponds to the probability that the given sample belongs to the positive class (i.e. class one). Everything below 0.5 is labeled as zero (i.e. the negative class) and everything above 0.5 is labeled as one. So, to find the predicted classes, you can do the following:

preds = model.predict(data)
class_one = preds > 0.5

The elements of class_one that are True correspond to samples labeled as one (i.e. the positive class).
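If you prefer integer labels (0/1) over booleans, you can simply cast the result, for example:

pred_labels = class_one.astype(int).ravel()   # 1-D array of zeros and ones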

Bonus: to find the accuracy of the predictions, you can easily compare class_one with the true labels:

acc = np.mean(class_one == true_labels)

Note that I have assumed that true_labels consists of zeros and ones.
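One additional caveat (assuming the shapes I expect here): model.predict() returns an array of shape (n_samples, 1), so class_one has that shape too. If true_labels is a 1-D array, flatten before comparing to avoid an unintended broadcast:

acc = np.mean(class_one.ravel() == np.asarray(true_labels))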


Further, if your model were defined using the Sequential class, you could simply use the predict_classes method:

pred_labels = model.predict_classes(data)

However, since you are using the Keras functional API to build your model (which is a very good thing to do, in my opinion), you can't use the predict_classes method, because it is not defined for such models.
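As an equivalent for a model built with the functional API, you can get the class labels directly from the thresholded probabilities, for example:

pred_labels = (model.predict(data) > 0.5).astype('int32')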