Dynamically switching dropout in Keras/TensorFlow

Asked: 2018-12-14 16:33:11

Tags: python tensorflow keras keras-layer dropout

I'm building a reinforcement learning algorithm in TensorFlow, and I would like to be able to dynamically turn dropout off and then back on, all within a single call to session.run().

Rationale: I need to (1) do a forward pass without dropout to compute the targets, and (2) do a training step on the generated targets. If I execute these two steps in separate calls to session.run(), everything works. But I would like to do both with a single call to session.run() (using tf.stop_gradient(targets)).
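
For reference, the two-call version that works looks roughly like this (a sketch only; the op and placeholder names here are illustrative, not my actual code):

# call 1: forward pass with dropout off (learning phase 0) to compute targets
targets = sess.run(target_op, feed_dict={state_ph: batch, K.learning_phase(): 0})
# call 2: training step with dropout on (learning phase 1), fed those targets
sess.run(train_op, feed_dict={state_ph: batch, target_ph: targets, K.learning_phase(): 1})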

After trying many solutions without success, I found one in which I replace the learning_phase placeholder used by Keras with a variable (since a placeholder is a tensor and does not allow assignment) and use a custom layer that sets that variable to True or False as needed. This solution is shown in the code below. Fetching the value of m1 or m2 separately (e.g. running sess.run(m1, feed_dict={input_ph: np.ones((1,1))})) works without errors. However, fetching the value of m3, or fetching m1 and m2 simultaneously, sometimes works and sometimes does not (and the error messages are uninformative).

Do you know what I am doing wrong, or a better way to do what I want?

Edit: the code below is a toy example. In reality I have a single model, and I need to run two forward passes (one with dropout off and one with dropout on) and one backward pass. And I want to do all of this without returning to Python.

from tensorflow.keras.layers import Dropout, Dense, Input, Layer
from tensorflow.python.keras import backend as K
from tensorflow.keras import Model
import tensorflow as tf
import numpy as np

class DropoutSwitchLayer(Layer):
  def __init__(self, stateful=True, **kwargs):
    self.stateful = stateful
    self.supports_masking = True
    super(DropoutSwitchLayer, self).__init__(**kwargs)

  def build(self, input_shape):
    # replace the learning_phase placeholder Keras uses with an assignable variable
    self.lph = tf.Variable(True, dtype=tf.bool, name="lph", trainable=False)
    K._GRAPH_LEARNING_PHASES[tf.get_default_graph()] = self.lph  # private Keras API
    super(DropoutSwitchLayer, self).build(input_shape)

  def call(self, inputs, mask=None):
    data_input, training = inputs
    op = self.lph.assign(training[0], use_locking=True)
    # ugly trick: add a zero that depends on the assign op so the
    # assignment is forced to run before the data flows onward
    data_input = data_input + tf.multiply(tf.cast(op, dtype=tf.float32), 0.0)
    return data_input

  def compute_output_shape(self, input_shape):
    return input_shape[0]


dropout_on = np.array([True], dtype=np.bool)
dropout_off = np.array([False], dtype=np.bool)
input_ph = tf.placeholder(tf.float32, shape=(None, 1))

drop = Input(shape=(), dtype=tf.bool)
input = Input(shape=(1,))
h = DropoutSwitchLayer()([input, drop])
h = Dense(1)(h)
h = Dropout(0.5)(h)
o = Dense(1)(h)
m = Model(inputs=[input, drop], outputs=o)

m1 = m([input_ph, dropout_on])
m2 = m([input_ph, dropout_off])
m3 = m([m2, dropout_on])

sess = tf.Session()
K.set_session(sess)
sess.run(tf.global_variables_initializer())
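
For reference, the fetches described above, using the names defined in this snippet:

# works on its own:
sess.run(m1, feed_dict={input_ph: np.ones((1, 1))})
sess.run(m2, feed_dict={input_ph: np.ones((1, 1))})

# sometimes works, sometimes fails:
sess.run(m3, feed_dict={input_ph: np.ones((1, 1))})
sess.run([m1, m2], feed_dict={input_ph: np.ones((1, 1))})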

Edit 2: Daniel Möller's solution below works when using a Dropout layer, but what if the dropout is used inside an LSTM layer?

from tensorflow.keras.layers import Input, Dense, RepeatVector, LSTM

input = Input(shape=(1,))
h = Dense(1)(input)
h = RepeatVector(2)(h)
h = LSTM(1, dropout=0.5, recurrent_dropout=0.5)(h)
o = Dense(1)(h)

3 Answers:

Answer 0 (score: 1)

Why not make one continued model?

from tensorflow.keras.layers import Input, Dense, Dropout, Lambda
from tensorflow.keras import Model
from tensorflow.python.keras import backend as K

#layers
inputs = Input(shape=(1,))
dense1 = Dense(1)
dense2 = Dense(1)

#no drop pass:
h = dense1(inputs)
o = dense2(h)
#optionally:
o = Lambda(lambda x: K.stop_gradient(x))(o)

#drop pass:
h = dense1(o)
h = Dropout(.5)(h)
h = dense2(h)

modelOnlyFinalOutput = Model(inputs,h)
modelOnlyNonDrop = Model(inputs,o)
modelBothOutputs = Model(inputs, [o,h])

Choose one of the models for training:

model.fit(x_train,y_train) #where y_train = [targets1, targets2] if using both outputs
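
For instance, a hypothetical compile/fit setup for the two-output variant (the optimizer and loss choices here are illustrative):

modelBothOutputs.compile(optimizer='adam', loss='mse')
modelBothOutputs.fit(x_train, [targets1, targets2])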

Answer 1 (score: 1)

It turns out Keras supports what I want to do out of the box. Passing the training argument when calling the Dropout/LSTM layer, combined with Daniel Möller's approach to building the model (thank you!), does the trick.

In the code below (just a toy example), o1 and o3 should be equal and different from o2.

from tensorflow.keras.layers import Dropout, Dense, Input, Lambda, Layer, Add, RepeatVector, LSTM
from tensorflow.python.keras import backend as K
from tensorflow.keras import Model
import tensorflow as tf
import numpy as np

repeat = RepeatVector(2)
lstm = LSTM(1, dropout=0.5, recurrent_dropout=0.5)

#Forward pass with dropout disabled
next_state = tf.placeholder(tf.float32, shape=(None, 1), name='next_state')
h = repeat(next_state)
# Use training to disable dropout
o1 = lstm(h, training=False)
target1 = tf.stop_gradient(o1)

#Forward pass with dropout enabled
state = tf.placeholder(tf.float32, shape=(None, 1), name='state')
h = repeat(state)
o2 = lstm(h, training=True)
target2 = tf.stop_gradient(o2)

#Forward pass with dropout disabled
ph3 = tf.placeholder(tf.float32, shape=(None, 1), name='ph3')
h = repeat(ph3)
o3 = lstm(h, training=False)

loss = target1 + target2 - o3
opt = tf.train.GradientDescentOptimizer(0.1)
train = opt.minimize(loss)

sess = tf.Session()
K.set_session(sess)
sess.run(tf.global_variables_initializer())

data = np.ones((1,1))
sess.run([o1, o2, o3], feed_dict={next_state:data, state:data, ph3:data})
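
A training step on this graph is then a single call to run with the same feeds, for example:

# one call covers both dropout-free passes, the dropout pass, and the backward pass
sess.run(train, feed_dict={next_state: data, state: data, ph3: data})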

Answer 2 (score: 0)

How about this:

class CustomDropout(tf.keras.layers.Layer):
    def __init__(self):
        super(CustomDropout, self).__init__()
        self.dropout1 = Dropout(0.5)
        self.dropout2 = Dropout(0.1)

    def call(self, inputs):
        # 'xxx' stands for whatever condition should select the dropout rate
        if xxx:
            return self.dropout1(inputs)
        else:
            return self.dropout2(inputs)
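
One way to make the condition concrete (my own sketch, not part of the answer; the layer name and the boolean input are made up): feed a boolean tensor alongside the data and dispatch with K.switch, so the choice lives inside the graph:

from tensorflow.keras.layers import Dropout, Input
from tensorflow.keras import Model
from tensorflow.python.keras import backend as K
import tensorflow as tf

class SwitchableDropout(tf.keras.layers.Layer):
    # hypothetical variant: a boolean input picks between two dropout rates
    def __init__(self, **kwargs):
        super(SwitchableDropout, self).__init__(**kwargs)
        self.dropout1 = Dropout(0.5)
        self.dropout2 = Dropout(0.1)

    def call(self, inputs):
        data, use_heavy = inputs
        # use_heavy has shape (batch,); use its first element as the switch
        return K.switch(use_heavy[0],
                        self.dropout1(data, training=True),
                        self.dropout2(data, training=True))

x = Input(shape=(1,))
flag = Input(shape=(), dtype=tf.bool)
out = SwitchableDropout()([x, flag])
model = Model([x, flag], out)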