Knowledge Distillation loss with TensorFlow 2 + Keras

Asked: 2019-12-02 11:22:42

Tags: python deep-learning tensorflow2.0 tf.keras

I'm trying to implement a very simple Keras model that uses Knowledge Distillation [1] from another model. Roughly speaking, I need to replace the original loss L(y_true, y_pred) with L(y_true, y_pred) + L(y_teacher_pred, y_pred), where y_teacher_pred is the prediction of the other (teacher) model.
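Written out as code, the combined objective I'm after looks roughly like this (just a sketch; I use MSE for both terms because my model is a regressor, but the second term could be any loss against the teacher's output):

import tensorflow as tf

def distillation_loss(y_true, y_pred, y_teacher_pred):
    # Supervised term: match the ground-truth labels
    hard_loss = tf.keras.losses.mean_squared_error(y_true, y_pred)
    # Distillation term: match the teacher's predictions
    soft_loss = tf.keras.losses.mean_squared_error(y_teacher_pred, y_pred)
    return hard_loss + soft_loss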

Here is what I tried:

import tensorflow as tf

def create_student_model_with_distillation(teacher_model):

  inp = tf.keras.layers.Input(shape=(21,))

  model = tf.keras.models.Sequential()
  model.add(inp)
  model.add(...)  # more layers here
  model.add(tf.keras.layers.Dense(units=1))

  # The teacher's prediction is built here as a symbolic tensor, at
  # model-definition time, and captured by the loss closure below
  teacher_pred = teacher_model(inp)

  def my_loss(y_true, y_pred):
      loss = tf.keras.losses.mean_squared_error(y_true, y_pred)
      loss += tf.keras.losses.mean_squared_error(teacher_pred, y_pred)
      return loss

  model.compile(loss=my_loss, optimizer='adam')

  return model

However, when I try to call fit on the model, I get:

TypeError: An op outside of the function building code is being passed
a "Graph" tensor. It is possible to have Graph tensors
leak out of the function building context by including a
tf.init_scope in your function building code.

How can I solve this problem?

References

[1] Geoffrey Hinton, Oriol Vinyals, Jeff Dean. Distilling the Knowledge in a Neural Network. https://arxiv.org/abs/1503.02531

1 Answer:

Answer 0 (score: 1)

Actually, this blog post answers your question: keras blog

But in short: you should use the new TF2 API (overriding train_step on a custom keras.Model) and run the teacher's forward pass before the tf.GradientTape() block. That way the teacher prediction is computed eagerly inside each training step, instead of being captured as a symbolic graph tensor when the model is built, which is what triggers the error above.

def train_step(self, data):
    # Unpack data
    x, y = data

    # Forward pass of teacher (outside the tape; its weights stay frozen)
    teacher_predictions = self.teacher(x, training=False)

    with tf.GradientTape() as tape:
        # Forward pass of student
        student_predictions = self.student(x, training=True)

        # Compute losses
        student_loss = self.student_loss_fn(y, student_predictions)
        distillation_loss = self.distillation_loss_fn(
            tf.nn.softmax(teacher_predictions / self.temperature, axis=1),
            tf.nn.softmax(student_predictions / self.temperature, axis=1),
        )
        loss = self.alpha * student_loss + (1 - self.alpha) * distillation_loss

    # Backward pass: compute gradients of the combined loss and
    # update only the student's weights
    trainable_vars = self.student.trainable_variables
    gradients = tape.gradient(loss, trainable_vars)
    self.optimizer.apply_gradients(zip(gradients, trainable_vars))

    # Update the metrics configured in compile()
    self.compiled_metrics.update_state(y, student_predictions)

    # Return a dict mapping metric names to current values
    results = {m.name: m.result() for m in self.metrics}
    results.update({"student_loss": student_loss, "distillation_loss": distillation_loss})
    return results
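
For context, train_step above is meant to be a method on a custom keras.Model subclass (the blog post calls it Distiller) that holds both networks and the loss hyperparameters. Below is a minimal sketch of how such a class could be wired up and trained; the constructor and compile signature follow that pattern but are assumptions here, and student_model, teacher_model, x_train, y_train and the hyperparameter values are placeholders for illustration:

import tensorflow as tf

class Distiller(tf.keras.Model):
    def __init__(self, student, teacher):
        super().__init__()
        self.student = student
        self.teacher = teacher  # assumed already trained; not updated here

    def compile(self, optimizer, metrics, student_loss_fn,
                distillation_loss_fn, alpha=0.1, temperature=3):
        super().compile(optimizer=optimizer, metrics=metrics)
        self.student_loss_fn = student_loss_fn
        self.distillation_loss_fn = distillation_loss_fn
        self.alpha = alpha
        self.temperature = temperature

    # ... train_step from above goes here ...

distiller = Distiller(student=student_model, teacher=teacher_model)
distiller.compile(
    optimizer='adam',
    metrics=['accuracy'],
    student_loss_fn=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    distillation_loss_fn=tf.keras.losses.KLDivergence(),
    alpha=0.1,      # weight of the ground-truth loss vs. the distillation loss
    temperature=3,  # softens both probability distributions before comparing
)
distiller.fit(x_train, y_train, epochs=5)

For the regression setup in the question, you would drop the softmax/temperature scaling in train_step and use tf.keras.losses.MeanSquaredError() for both student_loss_fn and distillation_loss_fn.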