Restoring a neural network in TensorFlow does not work

Posted: 2016-07-13 01:37:12

Tags: neural-network tensorflow

I am struggling to restore the values of a neural network in TensorFlow. I tried to follow the examples found online; here is my code:

import tensorflow as tf
import numpy as np
import math, random
import matplotlib.pyplot as plt


np.random.seed(1000) # for repro
function_to_learn = lambda x: np.sin(x) + 0.1*np.random.randn(*x.shape)

NUM_HIDDEN_NODES = 2 
NUM_EXAMPLES = 1000 
TRAIN_SPLIT = .8
MINI_BATCH_SIZE = 100 
NUM_EPOCHS = 500  


all_x = np.float32(np.random.uniform(-2*math.pi, 2*math.pi, (1, NUM_EXAMPLES))).T
np.random.shuffle(all_x)
train_size = int(NUM_EXAMPLES*TRAIN_SPLIT)
trainx = all_x[:train_size]
validx = all_x[train_size:]
trainy = function_to_learn(trainx)
validy = function_to_learn(validx)



plt.figure()
plt.scatter(trainx, trainy, c='green', label='train')
plt.scatter(validx, validy, c='red', label='validation')
plt.legend()


X = tf.placeholder(tf.float32, [None, 1], name="X")
Y = tf.placeholder(tf.float32, [None, 1], name="Y")


w_h = tf.Variable(tf.zeros([1, NUM_HIDDEN_NODES],name="w_h"))
b_h = tf.Variable(tf.zeros([1, NUM_HIDDEN_NODES],name="b_h"))
w_o = tf.Variable(tf.zeros([NUM_HIDDEN_NODES,1],name="w_o"))
b_o = tf.Variable(tf.zeros([1, 1],name="b_o"))



def init_weights(shape, init_method='xavier', xavier_params = (None, None)):
    if init_method == 'zeros':
        return tf.Variable(tf.zeros(shape, dtype=tf.float32))
    elif init_method == 'uniform':
        return tf.Variable(tf.random_normal(shape, stddev=0.01, dtype=tf.float32))



def model(X, num_hidden = NUM_HIDDEN_NODES):
    w_h = init_weights([1, num_hidden], 'uniform' )
    b_h = init_weights([1, num_hidden], 'zeros')
    h = tf.nn.sigmoid(tf.matmul(X, w_h) + b_h)

    w_o = init_weights([num_hidden, 1], 'xavier', xavier_params=(num_hidden, 1))
    b_o = init_weights([1, 1], 'zeros')
    return tf.matmul(h, w_o) + b_o



yhat = model(X, NUM_HIDDEN_NODES)

train_op = tf.train.AdamOptimizer().minimize(tf.nn.l2_loss(yhat - Y))


plt.figure()


with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())

    for v in tf.all_variables():
        print v.name



saver = tf.train.Saver()

errors = []

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    for i in range(NUM_EPOCHS):
        for start, end in zip(range(0, len(trainx), MINI_BATCH_SIZE), range(MINI_BATCH_SIZE, len(trainx), MINI_BATCH_SIZE)):
            sess.run(train_op, feed_dict={X: trainx[start:end], Y: trainy[start:end]})

        mse = sess.run(tf.nn.l2_loss(yhat - validy),  feed_dict={X:validx})
        errors.append(mse)
        if i%100 == 0:
            print "epoch %d, validation MSE %g" % (i, mse)
            print sess.run(w_h)
            saver.save(sess,"/Python/tensorflow/res/save_net.ckpt", global_step = i)



    print " ******* AFTR *******"
    for v in tf.all_variables():
        print v.name
    plt.plot(errors)
    plt.xlabel('#epochs')
    plt.ylabel('MSE')

To retrieve the restored values, I tried:

import tensorflow as tf
import numpy as np
import math, random
import matplotlib.pyplot as plt


NUM_HIDDEN_NODES = 2 



#SECOND PART TO GET THE STORED VALUES

w_h = tf.Variable(np.arange(NUM_HIDDEN_NODES).reshape(1, NUM_HIDDEN_NODES), dtype=tf.float32, name='w_h')
b_h = tf.Variable(np.arange(NUM_HIDDEN_NODES).reshape(1, NUM_HIDDEN_NODES), dtype=tf.float32, name='b_h')

w_o = tf.Variable(np.arange(NUM_HIDDEN_NODES).reshape(NUM_HIDDEN_NODES, 1), dtype=tf.float32, name='w_o')
b_o = tf.Variable(np.arange(1).reshape(1, 1), dtype=tf.float32, name='b_o')



saver = tf.train.Saver()
with tf.Session() as sess:
    ckpt = tf.train.get_checkpoint_state("/Python/tensorflow/res/")
    if ckpt and ckpt.model_checkpoint_path:
        # Restores from checkpoint
        saver.restore(sess, "/Python/tensorflow/res/save_net.ckpt-400")
        print "Model loaded"
    else:
        print "No checkpoint file found"

    print("weights:", sess.run(w_h))
    print("biases:", sess.run(b_h))

Any help would be greatly appreciated; I am close to giving up on this.

Thanks again.

2 Answers:

Answer 0 (score: 2):

The checkpoint file you are trying to restore variables from appears to contain different variables/shapes than the ones in your current code.

Saving (if you substitute the constants from your definitions above):

w_h = tf.Variable(tf.zeros([1, 5],name="w_h"))
b_h = tf.Variable(tf.zeros([1, 5],name="b_h"))
w_o = tf.Variable(tf.zeros([5,1],name="w_o"))
b_o = tf.Variable(tf.zeros([1, 1],name="b_o"))

Restoring:

w_h = tf.Variable(np.arange(10).reshape(1, 10), dtype=tf.float32, name='w_h')
b_h = tf.Variable(np.arange(10).reshape(1, 10), dtype=tf.float32, name='b_h')

w_o = tf.Variable(np.arange(10).reshape(10, 1), dtype=tf.float32, name='w_o')
b_o = tf.Variable(np.arange(1).reshape(1, 1), dtype=tf.float32, name='b_o')
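
To see exactly which variable names and shapes the checkpoint actually holds, you can read it directly. A minimal sketch, assuming the checkpoint written by the training script exists at the path used in the question and that your TensorFlow version provides tf.train.NewCheckpointReader:

import tensorflow as tf

# Open the checkpoint file produced by saver.save(..., global_step=400).
reader = tf.train.NewCheckpointReader("/Python/tensorflow/res/save_net.ckpt-400")

# List every saved variable name together with its shape, so it can be compared
# against the variables declared in the restoring script.
for name, shape in reader.get_variable_to_shape_map().items():
    print name, shape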

To prevent these kinds of problems, try using functions for training and inference, so that all of your code builds the same variables and constants.
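
As an illustration, a minimal sketch of such a shared function (the name build_model, the initializers, and the shapes are illustrative, not taken from the original code):

import tensorflow as tf

NUM_HIDDEN_NODES = 5

def build_model(X, num_hidden=NUM_HIDDEN_NODES):
    # The same variables, with explicit names, are created here for both the
    # training script and the eval script, so the Saver sees an identical set.
    w_h = tf.Variable(tf.random_normal([1, num_hidden], stddev=0.01), name="w_h")
    b_h = tf.Variable(tf.zeros([1, num_hidden]), name="b_h")
    h = tf.nn.sigmoid(tf.matmul(X, w_h) + b_h)
    w_o = tf.Variable(tf.random_normal([num_hidden, 1], stddev=0.01), name="w_o")
    b_o = tf.Variable(tf.zeros([1, 1]), name="b_o")
    return tf.matmul(h, w_o) + b_o

X = tf.placeholder(tf.float32, [None, 1], name="X")
yhat = build_model(X)
# Created after the model, so it covers exactly the variables defined above.
saver = tf.train.Saver()

Both the training script and the restoring script would call build_model in the same way and then use saver.save(...) and saver.restore(...) on the resulting graph.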

Answer 1 (score: 0):

You are creating two sets of weights: once at the global level, and a second time when you call init_weights. The second set of variables is the one that gets optimized, but both sets are saved.

In your eval code, you create only one such set of variables, so your restore only restores the first set, which has not been modified after initialization.
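
One way to make the correspondence between the two scripts explicit is to give the Saver a name-to-variable mapping, so only the intended variables are written to and read from the checkpoint. A sketch, with illustrative shapes:

import tensorflow as tf

NUM_HIDDEN_NODES = 5

# Keep references to the variables that are actually optimized.
w_h = tf.Variable(tf.random_normal([1, NUM_HIDDEN_NODES], stddev=0.01))
b_h = tf.Variable(tf.zeros([1, NUM_HIDDEN_NODES]))
w_o = tf.Variable(tf.random_normal([NUM_HIDDEN_NODES, 1], stddev=0.01))
b_o = tf.Variable(tf.zeros([1, 1]))

# Save under explicit checkpoint names; the restoring script only has to
# create variables with matching shapes and pass them under the same keys.
saver = tf.train.Saver({"w_h": w_h, "b_h": b_h, "w_o": w_o, "b_o": b_o})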

The solution is either to factor out the model-creation code so that exactly the same graph is built during training and during eval, or to use a meta_graph, which re-creates the graph structure during restore.
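
A sketch of the meta_graph approach, assuming the training run above left both save_net.ckpt-400 and the matching save_net.ckpt-400.meta file in the directory, and that your TensorFlow version provides tf.train.import_meta_graph:

import tensorflow as tf

with tf.Session() as sess:
    # import_meta_graph re-creates the saved graph structure and returns a Saver
    # for it, so the eval script does not redeclare any variables itself.
    saver = tf.train.import_meta_graph("/Python/tensorflow/res/save_net.ckpt-400.meta")
    saver.restore(sess, "/Python/tensorflow/res/save_net.ckpt-400")

    # Tensors can then be looked up by name in the restored graph; "w_h:0" is
    # illustrative and depends on how the variable was actually named when created.
    w_h = tf.get_default_graph().get_tensor_by_name("w_h:0")
    print sess.run(w_h)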