Saving and restoring a custom model in TensorFlow

Time: 2019-11-17 03:44:25

Tags: tensorflow

Here is one layer in TensorFlow:

import tensorflow as tf

def eachLayer(inputX, numberOfHiddenInputs, name, activation=tf.nn.relu):
    with tf.variable_scope(name):
        # Random-normal initial weights shaped (input_dim, numberOfHiddenInputs)
        init = tf.random_normal(shape=(int(inputX.get_shape()[1]), numberOfHiddenInputs))
        weights = tf.Variable(init, dtype="float32", name="weights")
        biases = tf.Variable(tf.zeros([numberOfHiddenInputs]), dtype="float32", name="biases")
        output = tf.matmul(inputX, weights) + biases
        if activation:
            return activation(output)
        else:
            return output

Using this function, the whole network is created like this:

def DNN(X=X, reuse=False):  # note: reuse is currently unused
    with tf.variable_scope("dnn"):
        first_layer = eachLayer(X, hidden_, name="firstLayer")
        second_layer = eachLayer(first_layer, hidden_, name="secondLayer")
        third_layer = eachLayer(second_layer, hidden_, name="thirdLayer")
        output = eachLayer(third_layer, outputSize, name="output", activation=None)
        return output

How can I save "dnn" in the session so that it can later be used for testing? I have tried saving the session, but that only restores the weights and biases, and I don't want to build the whole model again just to test an image.

with tf.Session() as sess:
    saver = tf.train.Saver()
    global_init = tf.global_variables_initializer()
    sess.run(global_init)
    for _ in range(epoch):
        k = 0
        for eachBatch in range(noOfBatch):
            batch_xs, batch_ys = x_train[k:k + batchSize], y_train[k:k + batchSize]
            sess.run(min_loss, feed_dict={X: batch_xs, Y: batch_ys})  # one training step
            theMinLossVal = mse.eval(feed_dict={X: batch_xs, Y: batch_ys})
            k = k + batchSize
        print("THE MIN LOSS IS ==> {}".format(theMinLossVal))
    saver.save(sess, "mnistModel.ckpt")  # writes the weights plus a .meta graph file
    print("Model Saved")

0 Answers:

There are no answers yet.