I am using tf.data.Dataset to feed data into my model. I put together a simple reproducible example to show the idea. I save the trained model (please see the code below), and as soon as I restore the model and run it on the test data, I get an error that the iterator has not been initialized. See the error below for details:
FailedPreconditionError (see above for traceback): GetNext() failed
because the iterator has not been initialized. Ensure that you have
run the initializer operation for this iterator before getting the
next element.
[[Node: IteratorGetNext = IteratorGetNext[output_shapes=[[?,10],
[?,1]], output_types=[DT_FLOAT, DT_FLOAT],
_device="/job:localhost/replica:0/task:0/device:CPU:0"](Iterator)]]
[[Node: IteratorGetNext/_39 = _Recv[client_terminated=false,
recv_device="/job:localhost/replica:0/task:0/device:GPU:0",
send_device="/job:localhost/replica:0/task:0/device:CPU:0",
send_device_incarnation=1, tensor_name="edge_7_IteratorGetNext",
tensor_type=DT_FLOAT,
_device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]
How can I fix this? Here is the reproducible code:
import tensorflow as tf
import os
import numpy as np
import math
features=np.random.randn(100,10)
features_test=np.random.randn(10,10)
y=np.random.randn(100,1)
y_test=np.random.randn(10,1)
feature_size=features.shape[1]
state_size=5
learning_rate=0.001
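# build the graph: placeholders feed a batched, repeating tf.data pipeline into a small two-layer network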
graph = tf.Graph()
with graph.as_default():
    batch_size_tensor = tf.placeholder(tf.int64, name="Batch_tensor")
    X = tf.placeholder(tf.float32, [None, feature_size], name="X")
    Y = tf.placeholder(tf.float32, [None, 1], name="Y")
    dataset = tf.data.Dataset.from_tensor_slices((X, Y)).batch(batch_size_tensor).repeat()
    iter = dataset.make_initializable_iterator()
    x_inputs, y_outputs = iter.get_next()
    Wx = tf.Variable(tf.truncated_normal([feature_size, state_size], stddev=2.0 / math.sqrt(state_size)), name="Visual_weights_layer1")
    bx = tf.Variable(tf.zeros([state_size]), name="Visual_bias_layer1")
    x_hidden_state = tf.matmul(x_inputs, Wx) + bx
    x_hidden_state = tf.contrib.layers.batch_norm(x_hidden_state, epsilon=1e-5)
    vx = tf.nn.relu(x_hidden_state)
    W_final = tf.Variable(tf.truncated_normal([state_size, 1], stddev=2.0 / math.sqrt(state_size)), name="FinalLayer_weights")
    by = tf.Variable(tf.zeros([1]), name="FinalLayer_bias")
    predictions = tf.add(tf.matmul(vx, W_final), by, name="preds")
    loss = tf.losses.mean_squared_error(y_outputs, predictions)
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss)
    init = tf.global_variables_initializer()
    saver = tf.train.Saver()
num_steps=100
batch_size=1
saver_path_model='tmp/testmodel'
export_path_model='tmp/testmodel.meta'
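# train for num_steps steps, then save the checkpoint and export the meta graph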
with tf.Session(graph=graph) as sess:
    sess.run(init)
    sess.run(iter.initializer, feed_dict={X: features, Y: y,
                                          batch_size_tensor: batch_size})
    print('initialized.')
    for step in range(num_steps):
        _, loss_val = sess.run([optimizer, loss])
        print(loss_val)
    saver.save(sess, saver_path_model)
    saver.export_meta_graph(filename=export_path_model)
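# restore the saved model in a fresh session and run it on the test data (this is where the reported error is raised)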
sess = tf.Session()
new_saver = tf.train.import_meta_graph(export_path_model)
new_saver.restore(sess, saver_path_model)
graph = tf.get_default_graph()
feed = {"X:0": features_test,"Y:0": y_test}
predictions_test = sess.run(["preds:0"], feed_dict=feed)
Answer 0 (score: 0)
I saved my model as follows:
saver = tf.train.Saver()
with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    ...
    # after all training
    save_path = saver.save(session, "logs/trained_model.ckpt")
    print("Model saved: {}".format(save_path))

Then load it back.
The official documentation has more examples: https://www.tensorflow.org/api_docs/python/tf/train/Saver