Unrolling time steps with MultiRNN in TensorFlow 0.12 raises a TypeError

Date: 2018-12-01 08:59:28

Tags: python-3.x tensorflow

I built a class similar to the PTB model in rnn_ptb.py. When unrolling the time steps, I get the following error:

      File "D:\anaconda3\anaconda3\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py", line 1282, in __call__
        outputs, new_state = self._cell(inputs, state, scope=scope)
      File "D:\anaconda3\anaconda3\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py", line 370, in __call__
        *args, **kwargs)
      File "D:\anaconda3\anaconda3\lib\site-packages\tensorflow\python\layers\base.py", line 374, in __call__
        outputs = super(Layer, self).__call__(inputs, *args, **kwargs)
      File "D:\anaconda3\anaconda3\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 746, in __call__
        self.build(input_shapes)
      File "D:\anaconda3\anaconda3\lib\site-packages\tensorflow\python\keras\utils\tf_utils.py", line 149, in wrapper
        output_shape = fn(instance, input_shape)
      File "D:\anaconda3\anaconda3\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py", line 923, in build
        shape=[input_depth + h_depth, 4 * self._num_units],

    TypeError: can only concatenate tuple (not "int") to tuple

The code is as follows:

import numpy as np
import tensorflow as tf

class trans_LSTM_model():
    def __init__(self, is_training, batch_size, num_steps, input_size, output_size,
                 hidden_size, num_layers, learning_rate, dropout_ratio):
        self.batch_size = batch_size
        self.num_steps = num_steps
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.keep_ratio = 1 - dropout_ratio
        self.inputs = tf.placeholder(tf.float32, [batch_size, num_steps, input_size])
        self.targets = tf.placeholder(tf.float32, [batch_size, output_size])
        lstm_cell = tf.nn.rnn_cell.LSTMCell(num_units=hidden_size, forget_bias=1.0)
        if is_training:
            lstm_cell = tf.nn.rnn_cell.DropoutWrapper(lstm_cell, output_keep_prob=self.keep_ratio)

        #cell = tf.nn.rnn_cell.MultiRNNCell([lstm_cell]*num_layers, state_is_tuple=True)
        cell = lstm_cell
        self.initial_state = cell.zero_state(self.batch_size, tf.float32)
        inputs = tf.unstack(self.inputs, num_steps, axis=1)
        if is_training:
            self.inputs = tf.nn.dropout(self.inputs, self.keep_ratio)
        #print(inputs)
        outputs = []
        state = self.initial_state
        print(cell.state_size)
        print(state)
        #outputs, state = tf.nn.dynamic_rnn(cell, [inputs], initial_state=state, dtype=tf.float32)
        with tf.variable_scope('RNN', reuse=tf.AUTO_REUSE):
            # This loop runs one cell over the different time steps of a batch.
            # The transmittance computation needs the output of every time step,
            # so all of the per-step outputs are kept.
            for step in range(num_steps):
                #print(inputs[step])
                if step > 0:
                    tf.get_variable_scope().reuse_variables()
                (cell_output, state) = cell([inputs[step]], state)  # wrapped inputs[step] in brackets to make it a list; see the RNN docs for details
                #cell_output, state = tf.nn.static_rnn(cell, inputs[step], initial_state=state, dtype=tf.float32)
                outputs.append(cell_output)

        self.final_state = state
        outputs = np.array(outputs)
        weights = tf.get_variable('weights', [hidden_size, output_size])
        bias = tf.get_variable('bias', [output_size])
        logits = tf.matmul(outputs, weights) + np.tile(bias, batch_size, axis=0)  # check how to keep the output of every cell
        loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=self.targets))
        if not is_training:
            return
        trainable_variables = tf.trainable_variables()
        grads, _ = tf.clip_by_global_norm(tf.gradients(loss, trainable_variables), FLAGS.clip)
        optimizer = tf.train.AdamOptimizer(learning_rate)
        self.train_op = optimizer.apply_gradients(zip(grads, trainable_variables))

The error occurs at "(cell_output, state) = cell([inputs[step]], state)".

I just don't know why this error occurs. Looking for some answers. Thanks a lot.
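A likely reading of the traceback: wrapping the input tensor in a list, `cell([inputs[step]], state)`, makes the Keras layer machinery see a *list* of inputs, so the `input_shape` handed to `LSTMCell.build` is a list of shapes. Then `input_shape[-1]` (`input_depth`) is itself a shape tuple rather than an int, and `input_depth + h_depth` attempts tuple + int concatenation. A minimal pure-Python sketch of this mechanism; note that `build` below is a simplified hypothetical stand-in for `LSTMCell.build`, not the real TensorFlow code:

```python
def build(input_shape, num_units=4):
    # Simplified stand-in for LSTMCell.build: it expects a single
    # (batch, depth) shape and computes the kernel shape from it.
    input_depth = input_shape[-1]
    h_depth = num_units
    return [input_depth + h_depth, 4 * num_units]

# Passing the tensor directly: input_shape is one shape tuple.
print(build((20, 10)))  # kernel shape [14, 16]

# Passing [inputs[step]]: input_shape becomes a list of shapes, so
# input_shape[-1] is the tuple (20, 10), and tuple + int raises the
# same error as in the traceback.
try:
    build([(20, 10)])
except TypeError as e:
    print(e)  # can only concatenate tuple (not "int") to tuple
```

If this diagnosis is right, passing the tensor directly, `cell(inputs[step], state)`, should avoid the error; the comment in the question about needing to wrap the input in brackets appears to be the mistake.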

0 answers:

There are no answers yet.