Generating text with a trained character-level LSTM model

Date: 2017-04-13 11:45:58

Tags: machine-learning tensorflow lstm generative

I trained a model with the purpose of generating sentences. As training examples I feed it two sequences: x, which is a sequence of characters, and y, which is the same sequence shifted by one. The model is based on an LSTM and is built with TensorFlow.
My question is: since the model takes input sequences of a fixed size (50 in my case), how can I make predictions when giving it only a single character as a seed? I've seen examples where, after training, sentences are generated by simply feeding in a single character.
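
For context, the (x, y) training pairs are built roughly like the sketch below; the corpus.txt / char_to_ix names are illustrative placeholders, not my exact preprocessing code:

    # Illustrative sketch of the (x, y) pairs: y is x shifted by one character.
    text = open('corpus.txt').read()
    vocab = sorted(set(text))
    char_to_ix = {c: i for i, c in enumerate(vocab)}

    truncated_backprop = 50  # length of each training sequence

    def make_example(start):
        window = [char_to_ix[c] for c in text[start:start + truncated_backprop + 1]]
        return window[:-1], window[1:]   # (x, y)
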
Here is my code:

    import numpy as np
    import tensorflow as tf

    with tf.name_scope('input'):
        # x holds the input characters, y the same sequence shifted by one
        x = tf.placeholder(tf.float32, [batch_size, truncated_backprop], name='x')
        y = tf.placeholder(tf.int32, [batch_size, truncated_backprop], name='y')

    with tf.name_scope('weights'):
        W = tf.Variable(np.random.rand(n_hidden, num_classes), dtype=tf.float32)
        b = tf.Variable(np.random.rand(1, num_classes), dtype=tf.float32)

    # one tensor per time step, as expected by static_rnn
    inputs_series = tf.split(x, truncated_backprop, 1)
    labels_series = tf.unstack(y, axis=1)

    with tf.name_scope('LSTM'):
        cell = tf.contrib.rnn.BasicLSTMCell(n_hidden, state_is_tuple=True)
        cell = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=dropout)
        cell = tf.contrib.rnn.MultiRNNCell([cell] * n_layers)

    states_series, current_state = tf.contrib.rnn.static_rnn(cell, inputs_series,
                                                             dtype=tf.float32)

    # project each time step's output onto the vocabulary
    logits_series = [tf.matmul(state, W) + b for state in states_series]
    prediction_series = [tf.nn.softmax(logits) for logits in logits_series]

    losses = [tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=labels)
              for logits, labels in zip(logits_series, labels_series)]
    total_loss = tf.reduce_mean(losses)

    train_step = tf.train.AdamOptimizer(learning_rate).minimize(total_loss)

1 Answer:

Answer 0 (score: 3)

I suggest you use dynamic_rnn instead of static_rnn: it unrolls the recurrence dynamically at execution time and lets you feed sequences of arbitrary length. Your input placeholder would be

    x = tf.placeholder(tf.float32, [batch_size, None, features], name='x')

Next, you need a way of feeding your own initial state into the network. You can do that by passing the initial_state parameter to dynamic_rnn, for example:

    initialstate = cell.zero_state(batch_size, tf.float32)
    outputs, current_state = tf.nn.dynamic_rnn(cell,
                                               inputs,
                                               initial_state=initialstate)

With that, in order to generate text from a single character, you feed the graph one character at a time, each time passing in the previous character and state, like:

    prompt = 's'  # seed character, whatever you like
    inp = one_hot(prompt)  # preprocessing, as you probably want to feed one-hot vectors
    state = None
    while True:  # loop until you have generated enough characters, then break
        if state is None:
            feed = {x: [[inp]]}  # a batch of 1 sequence of length 1
        else:
            feed = {x: [[inp]], initialstate: state}

        out, state = sess.run([outputs, current_state], feed_dict=feed)

        inp = process(out)  # extract the predicted character from out and one-hot it
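
The one_hot and process helpers are left abstract above. A purely illustrative sketch, assuming a char_to_ix / ix_to_char vocabulary mapping and that the fetched out is a softmax distribution over characters (e.g. the predictions tensor from the earlier sketch) rather than the raw LSTM outputs:

    import numpy as np

    # Hypothetical helpers -- the names, vocabulary and sampling strategy are
    # assumptions, not part of the answer itself.
    def one_hot(char):
        vec = np.zeros(num_classes, dtype=np.float32)
        vec[char_to_ix[char]] = 1.0
        return vec

    def process(out):
        probs = out[0, -1]                            # distribution for the last time step
        probs = probs / probs.sum()                   # renormalise against float rounding
        idx = np.random.choice(len(probs), p=probs)   # sample the next character id
        return one_hot(ix_to_char[idx])               # one-hot it for the next feed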