Saver.save gets slower and slower with each fold

Date: 2019-02-02 01:15:32

Tags: python tensorflow time

I am using TensorFlow and have built a deep multilayer feedforward model. To assess the model's performance, I decided to evaluate it with 10-fold cross-validation. In each fold I create a new instance of the neural network and call its train and predict functions.

In each fold I call the following code:

for each fold:
    nn = ffNN(hidden_nodes, epochs, learning_rate, saveFrequency, save_path,
              decay, decay_step, decay_factor, stop_loss, keep_probability,
              regularization_factor, minimum_cost, activation_function,
              batch_size, shuffle, stopping_iteration)
    nn.initialize(x_size)
    nn.train(X, y)
    nn.predict(X_test)

In the ffNN file I have the initialization, training, and prediction functions, roughly as follows:

nn.train:

sess = tf.InteractiveSession()
init = tf.global_variables_initializer()
sess.run(init)
saver = tf.train.Saver()
for each epoch:
    for each batch:
        _, loss = sess.run([self.optimizer, self.loss], feed_dict={self.X: X1, self.y: y})
    # Periodically checkpoint the model
    if epoch % save_frequency == 0:
        saver.save(sess, save_path)
sess.close()

The problem is in saver.save: the save takes longer with every fold. Even though I create all the variables from scratch each time, I don't know what makes it depend on the fold and causes the save time to keep growing.
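For reference, the slowdown can be measured directly by timing each checkpoint. A small diagnostic sketch around the save call in the loop above (the timing code is an addition, not part of the original training function):

import time

# Diagnostic sketch: time each periodic checkpoint inside the
# training loop shown above, so the per-fold increase in save
# duration becomes measurable.
if epoch % save_frequency == 0:
    start = time.time()
    saver.save(sess, save_path)
    print("epoch %d: saver.save took %.3f s" % (epoch, time.time() - start))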

Thanks.

Edit:

The code that builds the model (nn.initialize) is as follows:

self.X = tf.placeholder("float", shape=[None, x_size], name='XValue')
self.y = tf.placeholder("float", shape=[None, y_size], name='yValue')
with tf.variable_scope("initialization", reuse=tf.AUTO_REUSE):
    w_in, b_in = init_weights((x_size, self.hidden_nodes))
    h_out = self.forwardprop(self.X, w_in, b_in, self.keep_prob, self.activation_function)
    l2_norm = tf.add(tf.nn.l2_loss(w_in), tf.nn.l2_loss(b_in))
    w_out, b_out = init_weights((self.hidden_nodes, y_size))
    l2_norm = tf.add(tf.nn.l2_loss(w_out), l2_norm)
    l2_norm = tf.add(tf.nn.l2_loss(b_out), l2_norm)
    self.yhat = tf.add(tf.matmul(h_out, w_out), b_out)
    self.mse = tf.losses.mean_squared_error(labels=self.y, predictions=self.yhat)
    self.loss = tf.add(self.mse, self.regularization_factor * l2_norm)
    self.optimizer = tf.train.AdamOptimizer(learning_rate=self.learning_rate).minimize(self.loss)
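init_weights and forwardprop are not shown in the question. A minimal sketch of what they might look like, given how they are called above (the initializer and the dropout handling are assumptions):

import tensorflow as tf

def init_weights(shape):
    # Hypothetical helper: a weight matrix of the given shape plus a
    # bias vector sized to its output dimension.
    weights = tf.Variable(tf.random_normal(shape, stddev=0.1))
    biases = tf.Variable(tf.zeros([shape[1]]))
    return weights, biases

def forwardprop(X, w, b, keep_prob, activation_function):
    # Hypothetical helper: a single hidden layer with dropout; assumes
    # activation_function is a callable such as tf.nn.relu.
    h = activation_function(tf.add(tf.matmul(X, w), b))
    return tf.nn.dropout(h, keep_prob=keep_prob)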

1 answer:

Answer 0 (score: 0)

Based on what you describe in the question, the problem is not in saver.save itself; it is that the computational graph keeps growing, so saving takes more and more time. Make sure to structure your code as follows:

for each fold:
    # Clear the previous computational graph
    tf.reset_default_graph()
    # Then build the graph
    nn = ffNN()
    # Create the saver
    saver = tf.train.Saver()
    # Create a session
    with tf.Session() as sess:
        # Initialize the variables in the graph
        sess.run(tf.global_variables_initializer())
        # Train the model
        for each epoch:
            for each batch:
                nn.train_on_batch()
            if epoch % save_frequency == 0:
                saver.save(sess, save_path)
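With tf.reset_default_graph() in place, the graph, and therefore the checkpoint, stays the same size in every fold. A minimal, self-contained sketch that makes the effect visible (the variable shape and checkpoint path are arbitrary):

import tensorflow as tf

# Each "fold" builds a fresh set of variables. Without the reset they
# would all accumulate in one default graph, and tf.train.Saver(),
# which collects every variable in that graph, would have more to
# write on every fold.
for fold in range(3):
    tf.reset_default_graph()  # comment this out to reproduce the slowdown
    w = tf.Variable(tf.zeros([1000, 1000]), name="w")
    saver = tf.train.Saver()
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        saver.save(sess, "/tmp/model.ckpt")
        # With the reset the op count is constant across folds; without
        # it, it grows on every iteration.
        print("fold %d: %d ops in the graph"
              % (fold, len(tf.get_default_graph().get_operations())))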