How do I change the maximum sequence length in a Tensorflow RNN model?

Asked: 2017-07-14 21:08:22

Tags: machine-learning tensorflow lstm rnn

I am currently trying to adapt my TensorFlow classifier, which can label a sequence of words as positive or negative, so that it handles much longer sequences without retraining. My model is an RNN with a maximum sequence length of 210. One input is one word (300-dimensional); I vectorize the words with Google's word2vec, so I can feed in a sequence of at most 210 words. My question is: how can I change the maximum sequence length to, for example, 3000, in order to classify movie reviews?
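
For context, a rough sketch of the vectorization step, assuming gensim is used to load the Google News word2vec binary (the question does not show this part, so the names below are illustrative):

import numpy as np
from gensim.models import KeyedVectors

# assumed way of loading Google's pretrained 300-dim word2vec model
w2v = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin",
                                        binary=True)

def vectorize(words):
    # map every in-vocabulary word to its 300-dimensional vector
    vecs = [w2v[w] for w in words if w in w2v]
    return np.asarray(vecs, dtype=np.float32)   # shape: (num_words, 300)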

My working model, with a fixed maximum sequence length of 210 (tf_version: 1.1.0):

import tensorflow as tf

# n_chunks: maximum sequence length, chunk_size: word2vec dimensionality
# (rnn_size and n_classes are defined elsewhere in my code)
n_chunks = 210
chunk_size = 300

x = tf.placeholder("float", [None, n_chunks, chunk_size])
y = tf.placeholder("float", None)
seq_length = tf.placeholder("int64", None)


with tf.variable_scope("rnn1"):
        lstm_cell = tf.contrib.rnn.LSTMCell(rnn_size, 
                                            state_is_tuple=True)

        lstm_cell = tf.contrib.rnn.DropoutWrapper (lstm_cell, 
                                                   input_keep_prob=0.8)

        outputs, _ = tf.nn.dynamic_rnn(lstm_cell, x, dtype=tf.float32,
                                       sequence_length=seq_length)

fc = tf.contrib.layers.fully_connected(outputs, 1000, 
                                      activation_fn=tf.nn.relu)

output = tf.contrib.layers.flatten(fc)

#*1
logits = tf.contrib.layers.fully_connected(output, n_classes,
                                            activation_fn=None)

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits,
                                                               labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=0.01).minimize(cost)

...
#train
#train_x padded to fit (batch_size * n_chunks * chunk_size)
#seq_length on the feed side holds the true (unpadded) length of each sequence
sess.run([optimizer, cost], feed_dict={x: train_x, y: train_y,
                                       seq_length: seq_length})
#predict:
...

pred = tf.nn.softmax(logits)
pred = sess.run(pred,feed_dict={x:word_vecs, seq_length:sq_l})

What I have already tried to change:

1. Replace n_chunks with None and simply feed the data in:
x = tf.placeholder(tf.float32, [None,None,300])
#model fails to build
#ValueError: The last dimension of the inputs to `Dense` should be defined. 
#Found `None`.
# at *1

...
#all entries in word_vecs still have the same length, for example
#3000 (batch_size * 3000 (!= n_chunks) * 300)
pred = tf.nn.softmax(logits)
pred = sess.run(pred,feed_dict={x:word_vecs, seq_length:sq_l})
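
The failure in attempt 1 comes from the flatten at *1: with shape [None, None, 1000] the time dimension is unknown, so the flattened tensor has an undefined last dimension and the final dense layer cannot create its weight matrix. A rough sketch of one way around this (not the original code, and it changes the architecture, so it would still need training): reduce the time axis by taking the RNN output at each sequence's last valid step, which makes the classifier's weights independent of the sequence length.

# variable-length input: both batch and time dimensions left as None
x = tf.placeholder(tf.float32, [None, None, 300])
seq_length = tf.placeholder(tf.int64, [None])

with tf.variable_scope("rnn1"):
    lstm_cell = tf.contrib.rnn.LSTMCell(rnn_size, state_is_tuple=True)
    lstm_cell = tf.contrib.rnn.DropoutWrapper(lstm_cell, input_keep_prob=0.8)
    outputs, _ = tf.nn.dynamic_rnn(lstm_cell, x, dtype=tf.float32,
                                   sequence_length=seq_length)

# take the output at the last valid time step of every sequence,
# so the classifier's input size no longer depends on the sequence length
batch_size = tf.shape(outputs)[0]
max_len = tf.shape(outputs)[1]
index = tf.range(batch_size) * max_len + (tf.cast(seq_length, tf.int32) - 1)
last_output = tf.gather(tf.reshape(outputs, [-1, rnn_size]), index)

fc = tf.contrib.layers.fully_connected(last_output, 1000, activation_fn=tf.nn.relu)
logits = tf.contrib.layers.fully_connected(fc, n_classes, activation_fn=None)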

2. Change x and then restore the old model:

x = tf.placeholder(tf.float32, [None, n_chunks*10, chunk_size])
...
saver = tf.train.Saver(tf.all_variables(), reshape=True)
saver.restore(sess,"...")
#fails as well:
#InvalidArgumentError (see above for traceback): Input to reshape is a 
#tensor with 420000 values, but the requested shape has 840000
#[[Node: save/Reshape_5 = Reshape[T=DT_FLOAT, Tshape=DT_INT32, 
#_device="/job:localhost/replica:0/task:0/cpu:0"](save/RestoreV2_5, 
#save/Reshape_5/shape)]]

# run prediction
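
The reshape failure in attempt 2 is presumably caused by the last fully connected layer: because of the flatten at *1, its weight matrix has shape [n_chunks*1000, n_classes], so its size scales with n_chunks and the values stored in the checkpoint cannot be reshaped into the larger shape. A rough sketch (not from the original post) of restoring only the variables whose shapes do not depend on the sequence length, i.e. the LSTM weights under "rnn1", and initializing everything else fresh; the freshly initialized layers after the RNN would of course still need training:

# build the graph with the new, larger n_chunks first, then:
rnn_vars   = [v for v in tf.global_variables() if v.name.startswith("rnn1")]
other_vars = [v for v in tf.global_variables() if not v.name.startswith("rnn1")]

saver = tf.train.Saver(var_list=rnn_vars)      # restore only shape-compatible variables
sess.run(tf.variables_initializer(other_vars))
saver.restore(sess, "...")                     # checkpoint path as in the question
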

I would be grateful for any working example, or for an explanation of why this is not possible.

1 Answer:

Answer 0 (score: 0):

I am just wondering why you do not simply assign the value 3000 to n_chunks?

In your first attempt, you cannot use two Nones, because TF cannot tell how large to make each dimension. The first dimension is set to None because it depends on the batch size. In your second attempt, you change x in only one place, while the other places that use n_chunks may conflict with the x placeholder.
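
A minimal sketch of what this answer suggests, assuming the model is simply rebuilt with n_chunks = 3000 (which would require retraining, since the size of the layer after the flatten depends on n_chunks): shorter reviews are zero-padded, and their true lengths are fed through the seq_length placeholder so that dynamic_rnn stops at each sequence's real end. The helper name and variables below are illustrative, not from the post.

import numpy as np

n_chunks = 3000   # new maximum sequence length
chunk_size = 300

def pad_batch(reviews):
    # reviews: list of (num_words, 300) float arrays, one per review
    lengths = [min(len(r), n_chunks) for r in reviews]
    padded = np.zeros((len(reviews), n_chunks, chunk_size), dtype=np.float32)
    for i, r in enumerate(reviews):
        padded[i, :lengths[i]] = r[:n_chunks]
    return padded, lengths

#train_x, train_lengths = pad_batch(train_reviews)
#sess.run([optimizer, cost], feed_dict={x: train_x, y: train_y,
#                                       seq_length: train_lengths})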