Can I add if ... else in a TensorFlow custom layer?

Time: 2019-03-27 05:08:02

Tags: python tensorflow machine-learning deep-learning

I want to build a custom layer that contains if ... else branches, and then compute gradients only with respect to the weights of some specific layers. How can I do this?

We can build a custom layer as follows (https://www.tensorflow.org/tutorials/eager/custom_layers):

class ResnetIdentityBlock(tf.keras.Model):
  def __init__(self, kernel_size, filters):
    super(ResnetIdentityBlock, self).__init__(name='')
    filters1, filters2, filters3 = filters

    self.conv2a = tf.keras.layers.Conv2D(filters1, (1, 1))
    self.bn2a = tf.keras.layers.BatchNormalization()

    self.conv2b = tf.keras.layers.Conv2D(filters2, kernel_size, padding='same')
    self.bn2b = tf.keras.layers.BatchNormalization()

  def call(self, input_tensor, training=False):
    x = self.conv2a(input_tensor)
    x = self.bn2a(x, training=training)
    x = tf.nn.relu(x)

    x = self.conv2b(x)
    x = self.bn2b(x, training=training)
    x = tf.nn.relu(x)   

    x += input_tensor
    return tf.nn.relu(x)

Can I do something like the following?

class ResnetIdentityBlock(tf.keras.Model):
  def __init__(self, kernel_size, filters, trk='rnn', req=True, inf=False):
    super(ResnetIdentityBlock, self).__init__(name='')
    filters1, filters2, filters3 = filters
    # store the flags so call() can branch on them
    self.trk = trk
    self.req = req
    self.inf = inf

    if trk == 'rnn' and req:
        self.conv2a = tf.keras.layers.Conv2D(filters1, (1, 1))
        self.bn2a = tf.keras.layers.BatchNormalization()
    if trk != 'rnn' and inf:
        self.conv2b = tf.keras.layers.Conv2D(filters2, kernel_size, padding='same')
        self.bn2b = tf.keras.layers.BatchNormalization()

  def call(self, input_tensor, training=False):
    x = input_tensor
    if self.trk == 'rnn' and self.req:
        x = self.conv2a(x)
        x = self.bn2a(x, training=training)
        x = tf.nn.relu(x)
    if self.trk != 'rnn' and self.inf:
        x = self.conv2b(x)
        x = self.bn2b(x, training=training)
        x = tf.nn.relu(x)

    x += input_tensor
    return tf.nn.relu(x)
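For what it's worth, Python-level if statements on ordinary Python values (not tensors) do work in both `__init__` and `call` under eager execution, since the branch is decided when the code runs. Below is a minimal runnable sketch of this pattern; the boolean flags `use_conv_a` / `use_conv_b` are hypothetical stand-ins for the question's `trk` / `req` / `inf` conditions:

```python
import tensorflow as tf

class ConditionalBlock(tf.keras.Model):
    """Builds and applies sub-layers only when the corresponding flag is set."""

    def __init__(self, kernel_size, filters, use_conv_a=True, use_conv_b=True):
        super(ConditionalBlock, self).__init__(name='')
        self.use_conv_a = use_conv_a
        self.use_conv_b = use_conv_b

        # Layers are only created when their branch is enabled, so they only
        # appear in trainable_variables when they exist.
        if use_conv_a:
            self.conv2a = tf.keras.layers.Conv2D(filters, (1, 1), padding='same')
            self.bn2a = tf.keras.layers.BatchNormalization()
        if use_conv_b:
            self.conv2b = tf.keras.layers.Conv2D(filters, kernel_size, padding='same')
            self.bn2b = tf.keras.layers.BatchNormalization()

    def call(self, input_tensor, training=False):
        x = input_tensor
        if self.use_conv_a:
            x = tf.nn.relu(self.bn2a(self.conv2a(x), training=training))
        if self.use_conv_b:
            x = tf.nn.relu(self.bn2b(self.conv2b(x), training=training))
        # Residual connection: `filters` must equal the input channel count.
        x += input_tensor
        return tf.nn.relu(x)

block = ConditionalBlock(kernel_size=3, filters=4, use_conv_a=True, use_conv_b=False)
out = block(tf.zeros([1, 8, 8, 4]))
```

Note that this only works for conditions that are plain Python values. If the condition is itself a tensor computed inside the graph, `tf.cond` would be needed instead.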

If so, how do I compute gradients for the weights of only some specific layers, e.g. self.conv2b? If I compute gradients with the code below, how should I change model.trainable_variables?

# example arguments: ResnetIdentityBlock requires kernel_size and filters
model = ResnetIdentityBlock(3, (4, 4, 8))

def grad(model, inputs, targets):
  with tf.GradientTape() as tape:
    loss_value = loss(model, inputs, targets)  # loss() assumed defined elsewhere
  return loss_value, tape.gradient(loss_value, model.trainable_variables)
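One possible approach (a sketch, not a confirmed answer to the question): `tape.gradient` accepts any list of variables, so instead of `model.trainable_variables` you can pass only the variables of the layer you care about, e.g. `model.conv2b.trainable_variables`. The tiny two-layer model below is hypothetical and exists only to make the example self-contained:

```python
import tensorflow as tf

# Hypothetical model: we want gradients for dense_b's weights only.
class TwoDense(tf.keras.Model):
    def __init__(self):
        super(TwoDense, self).__init__()
        self.dense_a = tf.keras.layers.Dense(4)
        self.dense_b = tf.keras.layers.Dense(2)

    def call(self, x):
        return self.dense_b(self.dense_a(x))

model = TwoDense()
x = tf.ones([3, 5])
y = tf.zeros([3, 2])

with tf.GradientTape() as tape:
    loss_value = tf.reduce_mean(tf.square(model(x) - y))

# Restrict the gradient computation to one layer's variables
# (its kernel and bias) instead of model.trainable_variables.
grads = tape.gradient(loss_value, model.dense_b.trainable_variables)
```

An alternative is to set `layer.trainable = False` on the layers you want to freeze, which removes their variables from `model.trainable_variables` altogether.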

0 Answers:

No answers