In TensorFlow, how can I run the inference model in a separate process to generate pseudo data into a queue for training, and synchronize the inference weights every few global steps?

Asked: 2019-01-07 22:04:06

Tags: python tensorflow asynchronous

For example, I have a model class whose loss logic is roughly what the get_loss function below shows.

import tensorflow as tf

class MyModel(object):
  def get_loss(self):
    # self.iterator is a tf.data iterator set up elsewhere
    x, t = self.iterator.get_next()

    y = self.myNet(x, mode="train")  # very fast
    loss = tf.reduce_sum(tf.abs(t - y))

    new_x = self.myNet(x, mode="predict")  # very slow
    new_x = tf.stop_gradient(new_x)
    new_y = self.myNet(new_x, mode="train")  # very fast
    new_loss = tf.reduce_sum(tf.abs(t - new_y))

    return loss + new_loss

  def myNet(self, x, mode):
    # in "train" mode the network architecture can run in parallel;
    # in "predict" mode it is autoregressive.
    # For simplicity, just return x here.
    return x

I wonder whether I could instead run the inference model in another process and have it continuously generate data into a queue (for example, a TensorFlow iterator; I think this would save a lot of time), so that I could compute the second loss from:

new_x, new_t = self.infer_model_generator.get_next()
new_y = self.myNet(new_x, mode="train") 
new_loss = tf.reduce_sum(tf.abs(new_t - new_y)) 
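One way the producer/consumer part of this idea could look: a separate process runs the slow inference pass and pushes (new_x, new_t) pairs into a multiprocessing.Queue, and the training side drains the queue through a plain Python generator, which tf.data.Dataset.from_generator can wrap. This is only a sketch of the pattern, not TensorFlow-specific machinery; slow_infer here is a hypothetical stand-in for the slow myNet(x, mode="predict") pass.

```python
import multiprocessing as mp

def slow_infer(x):
    # hypothetical stand-in for the slow autoregressive predict pass
    return [v * 2 for v in x]

def producer(queue, n_batches):
    # runs in a separate process, continuously generating (new_x, new_t) pairs
    for i in range(n_batches):
        x = [float(i), float(i + 1)]
        t = [float(i), float(i)]
        queue.put((slow_infer(x), t))
    queue.put(None)  # sentinel: no more data

def queue_generator(queue):
    # drains the queue until the sentinel; tf.data.Dataset.from_generator
    # could wrap this callable to feed the training graph
    while True:
        item = queue.get()
        if item is None:
            return
        yield item

if __name__ == "__main__":
    q = mp.Queue(maxsize=8)
    p = mp.Process(target=producer, args=(q, 3))
    p.start()
    batches = list(queue_generator(q))
    p.join()
    print(len(batches))  # -> 3
```

A bounded queue (maxsize) gives back-pressure: the inference process blocks when training falls behind, instead of growing memory without limit.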

Meanwhile, every few global steps, the inference model's weights would be synchronized with the training model's weights.
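The periodic sync itself can be sketched framework-free; all names below are illustrative. In TF 1.x graph mode, sync_weights would instead be a list of tf.assign(infer_var, train_var) ops run every few global steps (or, across processes, a checkpoint save on the training side and restore on the inference side).

```python
def sync_weights(train_vars, infer_vars):
    # stand-in for running tf.assign(infer_var, train_var) for each pair
    for name, value in train_vars.items():
        infer_vars[name] = value

def train_loop(num_steps, sync_every):
    # hypothetical loop: the inference copy lags behind the training
    # weights and catches up every `sync_every` global steps
    train_vars = {"w": 0}
    infer_vars = {"w": 0}
    seen = []
    for step in range(1, num_steps + 1):
        train_vars["w"] += 1  # stand-in for one optimizer update
        if step % sync_every == 0:
            sync_weights(train_vars, infer_vars)
        seen.append(infer_vars["w"])
    return seen

print(train_loop(4, 2))  # -> [0, 2, 2, 4]
```

The staleness of the generated data is bounded by sync_every: between syncs, the queue fills with samples produced by slightly out-of-date weights.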

Or are there other suggestions for solving this problem?

0 Answers:

There are no answers yet.