Moving tf.nn.dynamic_rnn onto the GPU

Date: 2017-09-02 16:59:26

Tags: python tensorflow gpu rnn

I am using the following setup: Fedora 26 with Python, an NVIDIA GTX 970, CUDA 8.0, cuDNN 6.0, and tensorflow-gpu == 1.3.0.

My problem is that when I force the dynamic_rnn operator to run on my GPU using:

    with tf.name_scope('encoder_both_rnn'), tf.device('/gpu:0'):
        _, encoder_state_final_forward = tf.nn.dynamic_rnn(
            self.encoder_cell_forward, input_ph, dtype=tf.float32,
            time_major=False, sequence_length=sequence_length,
            scope='encoder_rnn_forward')
        _, encoder_state_final_reverse = tf.nn.dynamic_rnn(
            self.encoder_cell_reverse, input_reverse, dtype=tf.float32,
            time_major=False, sequence_length=sequence_length,
            scope='encoder_rnn_reverse')
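(A hedged aside, not part of the original question: in TF 1.x, a hard `tf.device('/gpu:0')` pin fails if any op in its scope lacks a GPU kernel. A common diagnostic is to enable soft placement, which lets TensorFlow fall back to the CPU for such ops, together with device-placement logging. The session setup below is a minimal sketch of that configuration.)

```python
# Sketch of a TF 1.x session configuration (assumes `tf` is tensorflow 1.x):
# allow_soft_placement lets TensorFlow run ops without a GPU kernel on the
# CPU instead of raising an error; log_device_placement prints the device
# each op was actually assigned to.
config = tf.ConfigProto(allow_soft_placement=True,
                        log_device_placement=True)
with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())
```

The placement log then shows which ops inside `encoder_both_rnn` actually landed on `/gpu:0`.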

I receive the following error when calling the global variables initializer:

    InvalidArgumentError: Node 'init/NoOp': Unknown input node '^drawlog_vae_test.DrawlogVaeTest.queue_training/encoder/encoder/encoder_W_mean/Variable/Assign'

The variable is created with the following statement:

    self.encoder_W_mean = u.weight_variable(
        [self.intermediate_state_size * 2, self.intermediate_state_size * 2],
        name='encoder_W_mean')

    import math
    from functools import reduce

    import numpy as np
    import tensorflow as tf

    def weight_variable(shape, name=None, use_lambda_init=False):
        with tf.name_scope(name):
            # He-style scaling: stddev proportional to sqrt(2 / fan_in * fan_out)
            num_weights = float(reduce(lambda x, y: x * y, shape))
            initial = tf.truncated_normal(shape, stddev=1) * math.sqrt(2.0 / num_weights)
            if use_lambda_init:
                initial = lambda: np.random.normal(size=shape)
            return tf.Variable(initial, dtype=tf.float32)

The strange thing about this is that the variable has almost nothing to do with the two RNNs. Is there any way to run my RNNs on the GPU, or is this just an odd error telling me that RNNs can't run on the GPU?

0 Answers:

There are no answers yet.