A question about tf.layers.batch_normalization

Date: 2018-08-08 15:12:16

Tags: python tensorflow neural-network deep-learning batch-normalization

I have some questions about tf.layers.batch_normalization. Here is the relevant part of my code:

    layer = tf.layers.dense(self.neighbors_placeholder, 1500,
                            activation=tf.nn.leaky_relu,
                            kernel_initializer=tf.contrib.layers.xavier_initializer())
    layer = tf.layers.batch_normalization(layer, training=self.training_placeholder)
    layer = tf.layers.dropout(layer, rate=0.5, training=self.training_placeholder)
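To show what I expect the training flag to do, here is my understanding of the batch-norm forward pass, sketched in plain NumPy (illustrative only, not TensorFlow's actual implementation): in training mode the layer normalizes with the current batch's statistics and updates the moving averages; in inference mode it normalizes with the stored moving statistics.

```python
import numpy as np

def batch_norm(x, gamma, beta, moving_mean, moving_var,
               training, momentum=0.99, eps=1e-3):
    """Sketch of a batch-norm forward pass for a 2-D input (batch, features)."""
    if training:
        mean = x.mean(axis=0)
        var = x.var(axis=0)
        # In TF these two updates live in the UPDATE_OPS collection
        moving_mean = momentum * moving_mean + (1 - momentum) * mean
        moving_var = momentum * moving_var + (1 - momentum) * var
    else:
        # Inference: use the accumulated statistics, do not update them
        mean, var = moving_mean, moving_var
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta, moving_mean, moving_var

x = np.array([[1.0, 2.0], [3.0, 4.0]])
gamma, beta = np.ones(2), np.zeros(2)
mm, mv = np.zeros(2), np.ones(2)

y_train, mm, mv = batch_norm(x, gamma, beta, mm, mv, training=True)
y_infer, mm2, mv2 = batch_norm(x, gamma, beta, mm, mv, training=False)
```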

And I update the moving statistics together with the optimizer:

    optimizer = tf.train.AdamOptimizer(lr)
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    with tf.control_dependencies(update_ops):
        self.optimizer = optimizer.minimize(self.cost, global_step=global_step)
    self.saver = tf.train.Saver()

During training I set training to True, and when evaluating the results I set it to False:

    feed = {self.centers_placeholder: np.vstack(self.train_centers[index:index+self.batch_size]),
            self.neighbors_placeholder: np.vstack(self.train_neighbors[index:index+self.batch_size]),
            self.training_placeholder: True}
    index += self.batch_size
    _, batch_cost = sess.run([self.optimizer, self.cost], feed_dict=feed)

    feed_metric_test = {self.centers_placeholder: np.vstack(self.test_centers),
                        self.neighbors_placeholder: np.vstack(self.test_neighbors),
                        self.training_placeholder: False}
    metric_test_step = sess.run(self.loss, feed_dict=feed_metric_test)

Saving the model:

    self.saver.save(sess, self.store_path + '1.ckpt')

And restoring it at inference time:

    with tf.Session() as sess:
        self.saver.restore(sess, self.store_path + '1.ckpt')
        embeddings = sess.run(self.embedding,
                              feed_dict={self.neighbors_placeholder: data,
                                         self.training_placeholder: False})

Is there anything wrong with my code?

My questions are:

1. When I save the model, restore it, and run inference, how does batch normalization behave? Are all the moving means and variances saved and restored?

2. Does the momentum argument of tf.layers.batch_normalization actually take effect here?
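On question 2, my understanding is that momentum (default 0.99) sets how fast the moving statistics track the per-batch statistics, via moving = momentum * moving + (1 - momentum) * batch. A small standalone simulation of that update rule (no TensorFlow involved, just my reading of it) shows why a momentum very close to 1 can leave the moving mean far from the true mean after a short training run:

```python
import numpy as np

np.random.seed(0)

def run(momentum, steps=200, true_mean=5.0):
    """Simulate batches around a fixed true mean and track the moving mean."""
    moving = 0.0
    for _ in range(steps):
        batch_mean = true_mean + np.random.randn() * 0.1
        moving = momentum * moving + (1 - momentum) * batch_mean
    return moving

# The closer momentum is to 1, the more slowly the moving mean adapts.
print(run(0.9))    # nearly converged to the true mean
print(run(0.999))  # still far below it after 200 steps
```

If that reading is right, a model trained for few steps would normalize with stale moving statistics at inference time, which might explain discrepancies between training and evaluation behavior.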

0 answers:

There are no answers yet.