How to save and restore a convolutional autoencoder neural network model

Asked: 2019-01-25 03:10:51

Tags: python-3.x

I trained my model with a convolutional autoencoder neural network and then saved it, but when I restore the model to reconstruct images similar to the training images, the reconstruction is very poor and the loss is huge. I am not sure whether something goes wrong when saving or reading the files.

Train the model and save it:

import os
import numpy as np
import tensorflow as tf

#--------------------------------------------------------------------------
# dim, imgsize, weights, biases, cae, train, ntrain and PATH are defined elsewhere.
x = tf.placeholder(tf.float32, [None, dim], name = "X")
y = tf.placeholder(tf.float32, [None, dim], name = "Y")
keepprob = tf.placeholder(tf.float32, name = "K")
pred = cae(x, weights, biases, keepprob, imgsize)["out"]
# Reuse `pred` in the cost instead of building a second copy of the network.
cost = tf.reduce_sum(tf.square(pred - tf.reshape(y, shape=[-1, imgsize, imgsize, 1])))
learning_rate = 0.01
optm = tf.train.AdamOptimizer(learning_rate).minimize(cost)
#--------------------------------------------------------------------------
sess = tf.Session()
save_model = os.path.join(PATH,'temp_saved_model')
saver      = tf.train.Saver()           
tf.add_to_collection("COST",  cost)
tf.add_to_collection("PRED",  pred)    

sess.run(tf.global_variables_initializer())           
mean_img = np.zeros((dim))

batch_size = 100
n_epochs   = 1000   

for epoch_i in range(n_epochs):

    for batch_i in range(ntrain // batch_size):
        # Note: batch_i is never used to slice a mini-batch; the whole training set is fed every step.
        trainbatch = np.array(train)
        trainbatch = np.array([img - mean_img for img in trainbatch])
        sess.run(optm, feed_dict={x: trainbatch, y: trainbatch, keepprob: 1.})

save_path = saver.save(sess, save_model)
print('Model saved in file: %s' %save_path)    
sess.close()

Restore the model and try to reconstruct the images.

tf.reset_default_graph()
save_model = os.path.join(PATH + 'SaveModel/','temp_saved_model.meta')
imgsize  = 64
dim      = imgsize * imgsize
mean_img = np.zeros((dim))   

with tf.Session() as sess:
    saver  = tf.train.import_meta_graph(save_model)
    saver.restore(sess, tf.train.latest_checkpoint(PATH + 'SaveModel/'))         

    cost  = tf.get_collection("COST")[0]
    pred  = tf.get_collection("PRED")[0]       

    graph = tf.get_default_graph()
    x = graph.get_tensor_by_name("X:0")
    y = graph.get_tensor_by_name("Y:0")
    k = graph.get_tensor_by_name("K:0")        

    for i in range(10):           
        test_xs = np.array(data)             
        test    = load_image(test_xs, imgsize)
        test    = np.array([img - mean_img for img in test])       

    print ("[%02d/%02d] cost: %.4f" % (i, 10, sess.run(cost, feed_dict={x: test, y: test, K: 1.})))

The loss value during training was 1.321..., but the reconstruction loss is 16545.10441... Is there something wrong with my code?
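
One quick way to isolate a save/restore problem is to evaluate the cost on the exact same preprocessed batch immediately before saving and again right after restoring; if the checkpoint round-trips correctly, the two numbers should match. A rough sketch of that check, reusing the placeholder names and the "COST" collection from the code above (trainbatch stands for any fixed batch):

# Sanity check, assuming the training graph above has just been built and trained.
before = sess.run(cost, feed_dict={x: trainbatch, y: trainbatch, keepprob: 1.})
save_path = saver.save(sess, save_model)      # e.g. .../temp_saved_model
sess.close()

# Fresh graph/session: restore from the exact prefix returned by saver.save().
tf.reset_default_graph()
with tf.Session() as sess2:
    restorer = tf.train.import_meta_graph(save_path + '.meta')
    restorer.restore(sess2, save_path)
    graph = tf.get_default_graph()
    x_r = graph.get_tensor_by_name("X:0")
    y_r = graph.get_tensor_by_name("Y:0")
    k_r = graph.get_tensor_by_name("K:0")
    cost_r = tf.get_collection("COST")[0]
    after = sess2.run(cost_r, feed_dict={x_r: trainbatch, y_r: trainbatch, k_r: 1.})

print("cost before save: %.4f, after restore: %.4f" % (before, after))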

1 answer:

Answer 0 (score: 0)

First, make sure the "restore" and "save" functions are located in different files.

So far I have debugged a few issues:

When building the graph after restoring:
  1. keepprob changes from 'K' to 'k' (see the sketch after this list).
  2. You use the same image as both logits and labels (which only makes sense because you are trying to learn the identity function).
  3. You compute the training cost before saving the model, but the validation/test cost after restoring it.
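
For point 1, a minimal illustration using the restored graph from the question (graph, sess, x, y, pred, cost and test as in your restore code): the placeholder was created with name="K", so its tensor name is "K:0"; whatever Python variable you bind it to (here k) is the one that must appear in every feed_dict.

k = graph.get_tensor_by_name("K:0")                        # tensor "K:0" bound to the Python variable `k`
recon    = sess.run(pred, feed_dict={x: test, k: 1.})      # feed through `k`; an undefined `K` would raise a NameError
cost_val = sess.run(cost, feed_dict={x: test, y: test, k: 1.})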

Your code in the saver:

recon = sess.run(pred, feed_dict={x: testbatch, keepprob: 1.})

fig, axs = plt.subplots(2, n_examples, figsize=(15, 4))
for example_i in range(5):
    axs[0][example_i].matshow(np.reshape(testbatch[example_i, :], (imgsize, imgsize)), cmap=plt.get_cmap('gray'))
    axs[1][example_i].matshow(np.reshape(np.reshape(recon[example_i, ...], (dim,)) + mean_img, (imgsize, imgsize)), cmap=plt.get_cmap('gray'))
plt.show()

Your code in the restore function:

recon = sess.run(pred, feed_dict={x: test, k: 1.})
cost  = sess.run(cost, feed_dict={x: test, y: test, k: 1.})

if (i % 2) == 0:
    fig, axs = plt.subplots(2, n_examples, figsize=(15, 4))
    for example_i in range(n_examples):
        axs[0][example_i].matshow(np.reshape(test[example_i, :], (imgsize, imgsize)), cmap=plt.get_cmap('gray'))
        axs[1][example_i].matshow(np.reshape(np.reshape(recon[example_i, ...], (dim,)) + mean_img, (imgsize, imgsize)), cmap=plt.get_cmap('gray'))
    plt.show()

Printing/plotting the cost is scattered throughout the code, even in the restore module where what you really plot is the recon variable.

If you want to test the encoder/decoder pair at reproducing the original images, your model is a bit too small (shallow); try making it deeper if you can, and if you get stuck, have a look at this link: https://pgaleone.eu/neural-networks/deep-learning/2016/12/13/convolutional-autoencoders-in-tensorflow/
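
As an illustration only (the layer widths and the tf.layers calls below are my own choice, not taken from your cae function), a somewhat deeper encoder/decoder stack in TF 1.x could look like this:

def deeper_cae(x, imgsize, keepprob):
    """Sketch of a deeper convolutional autoencoder (TF 1.x layers API)."""
    img = tf.reshape(x, [-1, imgsize, imgsize, 1])
    # Encoder: three strided convolutions, e.g. 64x64 -> 32x32 -> 16x16 -> 8x8
    h = tf.layers.conv2d(img, 16, 3, strides=2, padding='same', activation=tf.nn.relu)
    h = tf.layers.conv2d(h,   32, 3, strides=2, padding='same', activation=tf.nn.relu)
    h = tf.layers.conv2d(h,   64, 3, strides=2, padding='same', activation=tf.nn.relu)
    h = tf.nn.dropout(h, keepprob)
    # Decoder: mirror the encoder with transposed convolutions back to 64x64x1
    h = tf.layers.conv2d_transpose(h, 32, 3, strides=2, padding='same', activation=tf.nn.relu)
    h = tf.layers.conv2d_transpose(h, 16, 3, strides=2, padding='same', activation=tf.nn.relu)
    out = tf.layers.conv2d_transpose(h, 1, 3, strides=2, padding='same')
    return {"out": out}

Each strided convolution halves the spatial resolution and the transposed convolutions mirror it back, so a 64x64 input is compressed to an 8x8 feature map before reconstruction.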

In any case, feel free to add a comment if you need further clarification.