Sentiment classification model - RNN and LSTM

Time: 2018-03-25 13:53:07

Tags: tensorflow lstm sentiment-analysis rnn

I am trying to train an RNN with LSTM units for sentiment classification. I have labeled the dataset with 0 for negative and 1 for positive. My graph looks like this:

input = tf.placeholder(tf.int32, [None, max_length], name='input')
seq_len = tf.placeholder(tf.int32, [None], name='lengths')
target = tf.placeholder(tf.float32, [None, n_classes], name='target')
dropout_keep_prob = tf.placeholder(tf.float32, name='dropout_keep_prob')

embeddings = tf.Variable(tf.random_uniform([vocab_size, embedding_size], -1,
          1, seed=seed), name='embed_var')
embedded_words = tf.nn.embedding_lookup(embeddings, input)

outputs = embedded_words  # [batch_size, max_length, embedding_size]
for h in hidden_size:
    lstm_cell = tf.nn.rnn_cell.LSTMCell(h, state_is_tuple=True)
    lstm_cell = tf.nn.rnn_cell.DropoutWrapper(lstm_cell,
        input_keep_prob=dropout_keep_prob, output_keep_prob=dropout_keep_prob,
        seed=seed)
    outputs, _ = tf.nn.dynamic_rnn(lstm_cell, outputs, dtype=tf.float32,
        sequence_length=seq_len)

# mean-pool the LSTM outputs over time -> [batch_size, hidden_size[-1]]
outputs = tf.reduce_mean(outputs, axis=1)
w = tf.Variable(tf.truncated_normal([hidden_size[-1], n_classes], seed=seed))
b = tf.Variable(tf.constant(0.1, shape=[n_classes]))
scores = tf.nn.xw_plus_b(outputs, w, b, name='scores')  # [batch_size, n_classes]

predict = tf.nn.softmax(scores, name='predictions')

losses = tf.nn.softmax_cross_entropy_with_logits(logits=scores,
    labels=target, name='cross_entropy')
loss = tf.reduce_mean(losses, name='loss')
# train_step = tf.train.RMSPropOptimizer(learning_rate).minimize(loss)

# accuracy
correct_pred = tf.equal(tf.argmax(scores, 1), tf.argmax(target, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')

If I just want to classify a single sentence with this trained model, I am not sure how to do it. What I have tried so far is to build an index vector (based on my vocabulary) of length max sequence length and feed it to the model, like this:

def predict(tensor):
    graph = tf.Graph()
    with graph.as_default():
        sess = tf.Session()
        saver = tf.train.import_meta_graph("{}/model.ckpt.meta".format(FLAGS.checkpoints_dir))
        saver.restore(sess, "{}/model.ckpt".format(FLAGS.checkpoints_dir))

        input = graph.get_operation_by_name('input').outputs[0]
        seq_len = graph.get_operation_by_name('lengths').outputs[0]
        dropout_keep_prob = graph.get_operation_by_name('dropout_keep_prob').outputs[0]
        prediction = graph.get_operation_by_name('final_layer/softmax/predictions').outputs[0]

        score = sess.run(prediction, feed_dict={input: tensor.reshape(1, convector.sequence_len),
                                                seq_len: [convector.sequence_len],
                                                dropout_keep_prob: 1.0})
        print('Predicted sentiment: [{0:.4f}, {1:.4f}]'.format(score[0, 0], score[0, 1]))
        return score
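For reference, this is roughly how I build the index tensor that I pass to predict (a minimal sketch; word_to_id, unk_id and pad_id stand in for my real vocabulary mapping):

import numpy as np

def sentence_to_tensor(sentence, word_to_id, max_length, unk_id=1, pad_id=0):
    # map each token to its vocabulary index, unknown words to unk_id
    tokens = sentence.lower().split()
    ids = [word_to_id.get(tok, unk_id) for tok in tokens][:max_length]
    # right-pad with pad_id up to the maximum sequence length
    ids += [pad_id] * (max_length - len(ids))
    return np.array(ids, dtype=np.int32)

# usage, e.g.:
# tensor = sentence_to_tensor("this movie was great", word_to_id, max_length)
# predict(tensor)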
What am I doing wrong? And how should I go about choosing the hyperparameters so that the model copes with issues like underfitting or overfitting?

0 Answers:

No answers yet