Using TensorArrays in the context of a while_loop to accumulate values

Asked: 2016-06-14 21:06:44

Tags: tensorflow recurrent-neural-network

Below I have an implementation of a Tensorflow RNN cell, designed to emulate Alex Graves' ACT algorithm from this paper: http://arxiv.org/abs/1603.08983

At a single timestep in the sequence, called via rnn.rnn (with a static sequence_length parameter, so the rnn is unrolled dynamically; I am using a fixed batch size of 20), we recursively call ACTStep, producing outputs of size (1, 200), where the hidden dimension of the RNN cell is 200 and the batch size is 1.

Using the while loop in Tensorflow, we iterate until the accumulated halting probability is high enough. All of this works reasonably smoothly, but I am having problems accumulating the states, probabilities, and outputs within the while loop, which we need to do in order to form a weighted combination of these as the final cell output/state.

I have tried using a simple list, as below, but this fails when the graph is compiled because the outputs are not in the same frame (is it possible to use the "switch" function in control_flow_ops to forward the tensors to the point at which they are required, i.e. the add_n call just before we return the values?). I have also tried the TensorArray structure, but I am finding it hard to use because it seems to destroy shape information, and replacing the shapes manually has not worked. I also have not been able to find much documentation on TensorArrays, presumably because they are, I imagine, mainly intended for internal TF use.
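
To illustrate the pattern I have been attempting, here is a minimal sketch (not my actual cell) of threading a TensorArray through a while loop to accumulate one value per iteration. The fixed iteration count stands in for the halting predicate, and depending on your TF version TensorArray may need to be imported from tensorflow.python.ops.tensor_array_ops, with the stacking call named pack() rather than stack():

import tensorflow as tf
from tensorflow.python.ops import control_flow_ops

# a (1, 200) value per iteration stands in for one ACTStep output
acc_ta = tf.TensorArray(dtype=tf.float32, size=0, dynamic_size=True)

def cond(i, acc):
    return tf.less(i, 5)  # stand-in for the accumulated-probability predicate

def body(i, acc):
    value = tf.fill([1, 200], tf.cast(i, tf.float32))
    return i + 1, acc.write(i, value)  # the TensorArray is threaded through the loop vars

_, final_acc = control_flow_ops.while_loop(cond, body, [tf.constant(0), acc_ta])
stacked = final_acc.pack()  # shape (iterations, 1, 200); stack() in newer TF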

有关如何正确累积ACTStep产生的变量的任何建议将非常感激。

class ACTCell(RNNCell):
    """An RNN cell implementing Graves' Adaptive Computation Time algorithm."""

    def __init__(self, num_units, cell, epsilon, max_computation):

        self.one_minus_eps = tf.constant(1.0 - epsilon)
        self._num_units = num_units
        self.cell = cell
        self.N = tf.constant(max_computation)

    @property
    def input_size(self):
        return self._num_units

    @property
    def output_size(self):
        return self._num_units

    @property
    def state_size(self):
        return self._num_units

    def __call__(self, inputs, state, scope=None):

        with vs.variable_scope(scope or type(self).__name__):

            # define within-cell constants/counters used to control the while loop
            prob = tf.get_variable("prob", [], tf.float32, tf.constant_initializer(0.0))
            counter = tf.get_variable("counter", [], tf.float32, tf.constant_initializer(0.0))
            tf.assign(prob, 0.0)
            tf.assign(counter, 0.0)

            # the predicate for stopping the while loop. Tensorflow demands that we have
            # all of the variables used in the while loop in the predicate.
            pred = lambda prob, counter, state, input, \
                          acc_states, acc_outputs, acc_probs: \
                tf.logical_and(tf.less(prob, self.one_minus_eps), tf.less(counter, self.N))

            acc_probs = []
            acc_outputs = []
            acc_states = []

            _, iterations, _, _, acc_states, acc_outputs, acc_probs = \
                control_flow_ops.while_loop(
                    pred,
                    self.ACTStep,
                    [prob, counter, state, inputs, acc_states, acc_outputs, acc_probs])

        # TODO: fix the last part of this, need to use the remainder.
        # TODO: find a way to accumulate the regulariser

        # here we take a weighted combination of the states and outputs
        # to use as the actual output and state which is passed to the next timestep.

        next_state = tf.add_n([tf.mul(x, y) for x, y in zip(acc_probs, acc_states)])
        output = tf.add_n([tf.mul(x, y) for x, y in zip(acc_probs, acc_outputs)])

        return output, next_state

    def ACTStep(self, prob, counter, state, input, acc_states, acc_outputs, acc_probs):

        output, new_state = rnn.rnn(self.cell, [input], state, scope=type(self.cell).__name__)

        prob_w = tf.get_variable("prob_w", [self.cell.input_size, 1])
        prob_b = tf.get_variable("prob_b", [1])
        p = tf.nn.sigmoid(tf.matmul(new_state, prob_w) + prob_b)

        acc_states.append(new_state)
        acc_outputs.append(output)
        acc_probs.append(p)

        return [tf.add(prob, p), tf.add(counter, 1.0), new_state, input, acc_states, acc_outputs, acc_probs]

1 Answer:

Answer 0 (score: 3):

I will preface this reply by saying that it is not a complete solution, but rather some comments on how to improve your cell.

To start off, in your ACTStep function, you call rnn.rnn for one timestep (as defined by [input]). If you are doing a single timestep, it is probably more efficient to simply use the actual self.cell call function. You will see this same mechanism used in the tensorflow rnncell wrappers.
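
For instance, a one-line sketch of that substitution, using the names from your code:

# call the wrapped cell directly for a single step...
output, new_state = self.cell(input, state)
# ...instead of building a one-step unrolled graph:
# output, new_state = rnn.rnn(self.cell, [input], state, scope=type(self.cell).__name__)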

You mentioned that you have tried using TensorArrays. Did you pack and unpack the TensorArrays appropriately? Here is a repo where, under model.py, you will find TensorArrays packed and unpacked properly.
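
As a rough sketch of that pack/unpack pattern (seq_len, inputs, cell, and initial_state are placeholder names here, not from your code, and newer TF versions rename pack/unpack to stack/unstack):

# unpack a time-major (seq_len, batch, features) tensor into a TensorArray,
# read one timestep per iteration, and pack the written outputs back at the end
inputs_ta = tf.TensorArray(tf.float32, size=seq_len).unpack(inputs)
outputs_ta = tf.TensorArray(tf.float32, size=seq_len)

def step(t, state, outputs_ta):
    output, state = cell(inputs_ta.read(t), state)
    return t + 1, state, outputs_ta.write(t, output)

_, final_state, outputs_ta = control_flow_ops.while_loop(
    lambda t, state, outputs_ta: tf.less(t, seq_len),
    step,
    [tf.constant(0), initial_state, outputs_ta])
outputs = outputs_ta.pack()  # a (seq_len, batch, hidden) tensor again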

You also asked whether there is a function in control_flow_ops that will require all of the tensors to be accumulated. I think you are looking for tf.control_dependencies.

You can list all of your output tensor operations in control_dependencies, and that will require tensorflow to compute all of those tensors up to that point.
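
Something along these lines (the op names here are placeholders for whatever accumulation ops you end up with):

# ops created inside this block will not run until the listed ops have executed
with tf.control_dependencies([acc_states_op, acc_outputs_op, acc_probs_op]):
    output = tf.identity(output)  # output now depends on all three accumulation ops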

Also, your counter variable appears to be trainable. Are you sure you want this to be the case? If you are adding one to your counter, that probably will not produce the correct result. On the other hand, you could have deliberately kept it trainable in order to differentiate through it at the end for the ponder cost function.
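
If you decide the counter should not be trained, marking it non-trainable is enough (a sketch against the variable definition in your original code):

# trainable=False keeps the counter out of the optimizer's variable list,
# so no gradient updates are ever applied to it
counter = tf.get_variable("counter", [], tf.float32,
                          initializer=tf.constant_initializer(0.0),
                          trainable=False)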

Also, I believe the remainder function should be in your script:

remainder = 1.0 - tf.add_n(acc_probs[:-1])
#note that there is a -1 in the list as you do not want to grab the last probability

Here is my edited version of your code:

class ACTCell(RNNCell):
    """An RNN cell implementing Graves' Adaptive Computation time algorithm
    Notes: https://www.evernote.com/shard/s189/sh/fd165646-b630-48b7-844c-86ad2f07fcda/c9ab960af967ef847097f21d94b0bff7

    """
    def __init__(self, num_units, cell, max_computation = 5.0, epsilon = 0.01):

        self.one_minus_eps = tf.constant(1.0 - epsilon)  # epsilon is 0.01 as found in the paper
        self._num_units = num_units
        self.cell = cell
        self.N = tf.constant(max_computation)

    @property
    def input_size(self):
        return self._num_units
    @property
    def output_size(self):
        return self._num_units
    @property
    def state_size(self):
        return self._num_units

    def __call__(self, inputs, state, scope=None):

        with vs.variable_scope(scope or type(self).__name__):

            # define within cell constants/ counters used to control while loop
            prob = tf.constant(0.0, shape=[batch_size])  # note: batch_size must be defined in the enclosing scope
            counter = tf.constant(0.0, shape=[batch_size])

            # the predicate for stopping the while loop. Tensorflow demands that we have
            # all of the variables used in the while loop in the predicate.
            pred = lambda prob,counter,state,input,acc_states,acc_output,acc_probs:\
                tf.logical_and(tf.less(prob,self.one_minus_eps), tf.less(counter,self.N))

            acc_probs, acc_outputs, acc_states  = [], [], []

            _, iterations, _, _, acc_states, acc_outputs, acc_probs = \
                control_flow_ops.while_loop(
                    pred,
                    self.ACTStep,  # the while loop is deliberately built here
                    [prob, counter, state, inputs, acc_states, acc_outputs, acc_probs])

        '''mean-field updates for states and outputs'''
        next_state = tf.add_n([x*y for x,y in zip(acc_probs,acc_states)])
        output = tf.add_n([x*y for x,y in zip(acc_probs,acc_outputs)])

        # take the last probability off to avoid a negative ponder cost; we need the sum of the remainders across all timesteps
        remainder = 1.0 - tf.add_n(acc_probs[:-1])
        tf.add_to_collection("ACT_remainder", remainder)  # if this doesn't work, you can keep a self.list indexed by timestep instead
        tf.add_to_collection("ACT_iterations", iterations)
        return output, next_state 

    def ACTStep(self,prob, counter, state, input, acc_states, acc_outputs, acc_probs):

        '''run rnn once'''
        output, new_state = rnn.rnn(self.cell, [input], state, scope=type(self.cell).__name__)

        prob_w = tf.get_variable("prob_w", [self.cell.input_size, 1])
        prob_b = tf.get_variable("prob_b", [1])
        # multiply new_state by prob_w so the result is [batch, 1]: one halting probability per batch element
        halting_probability = tf.nn.sigmoid(tf.matmul(new_state, prob_w) + prob_b)


        acc_states.append(new_state)
        acc_outputs.append(output)
        acc_probs.append(halting_probability) 

        return [halting_probability + prob, counter + 1.0, new_state, input, acc_states, acc_outputs, acc_probs]


    def PonderCostFunction(self, time_penalty=0.01):
        '''
        note: the ponder cost is completely different from the halting probability (ponder = rho)

        the ponder cost function penalises the rnn for cycling endlessly on a timestep when not much computation is needed
        '''
        n_iterations = tf.get_collection_ref("ACT_iterations")
        remainder = tf.get_collection_ref("ACT_remainder")
        # scale the summed iteration counts and remainders by the time penalty
        return time_penalty * tf.reduce_sum(n_iterations + remainder)
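
A hypothetical usage sketch for the ponder cost (task_loss and the GRUCell choice are assumptions, not part of the cell):

# hypothetical training wiring: add the ponder cost to the task loss
act_cell = ACTCell(num_units=200, cell=rnn_cell.GRUCell(200))
# ... unroll act_cell over the sequence and compute task_loss ...
total_loss = task_loss + act_cell.PonderCostFunction(time_penalty=0.01)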

This is a complicated paper to implement, and one that I have been working on myself. I would not mind collaborating with you to finish it in Tensorflow. If you are interested, please add me on Skype at LeavesBreathe and we can go from there.