I created a simple 3-layer MNIST model with TensorFlow and am trying to save it as a .pb file so that I can extract the weights of the final layer and the hidden layer. However, I cannot get the final hidden layer or its weights. Please guide me on how to save the weights.
I added the following lines to my code:
with tf.Graph().as_default():
    saver = tf.train.Saver()
    sess = tf.Session()
    saver.restore(sess, '/tmp/model.ckpt')
but I get ValueError: No variables to save.
Where exactly do I need to add the lines above?
from __future__ import print_function
# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=False)
import tensorflow as tf
path_to_model_pb="/u"
# Parameters
learning_rate = 0.1
num_steps = 20
batch_size = 128
display_step = 100
#saver = tf.train.Saver()
# Network Parameters
n_hidden_1 = 15 # 1st layer number of neurons
#n_hidden_2 = 256 # 2nd layer number of neurons
num_input = 784 # MNIST data input (img shape: 28*28)
num_classes = 10 # MNIST total classes (0-9 digits)
# Define the neural network
def neural_net(x_dict):
    # TF Estimator input is a dict, in case of multiple inputs
    x = x_dict['images']
    # Hidden fully connected layer with n_hidden_1 neurons
    layer_1 = tf.layers.dense(x, n_hidden_1)
    # Second hidden fully connected layer (disabled)
    #layer_2 = tf.layers.dense(layer_1, n_hidden_2)
    # Output fully connected layer with a neuron for each class
    out_layer = tf.layers.dense(layer_1, num_classes)
    return out_layer
# Define the model function (following TF Estimator Template)
def model_fn(features, labels, mode):
    # Build the neural network
    logits = neural_net(features)
    # Predictions
    pred_classes = tf.argmax(logits, axis=1)
    pred_probas = tf.nn.softmax(logits)
    # If prediction mode, early return
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode, predictions=pred_classes)
    # Define loss and optimizer
    loss_op = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
        logits=logits, labels=tf.cast(labels, dtype=tf.int32)))
    optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
    train_op = optimizer.minimize(loss_op,
                                  global_step=tf.train.get_global_step())
    # Evaluate the accuracy of the model
    acc_op = tf.metrics.accuracy(labels=labels, predictions=pred_classes)
    # TF Estimators require returning an EstimatorSpec that specifies
    # the different ops for training, evaluating, ...
    estim_specs = tf.estimator.EstimatorSpec(
        mode=mode,
        predictions=pred_classes,
        loss=loss_op,
        train_op=train_op,
        eval_metric_ops={'accuracy': acc_op})
    saver = tf.train.Saver()
    return estim_specs
# Build the Estimator
model = tf.estimator.Estimator(model_fn)
# Define the input function for training
input_fn = tf.estimator.inputs.numpy_input_fn(
    x={'images': mnist.train.images}, y=mnist.train.labels,
    batch_size=batch_size, num_epochs=None, shuffle=True)
# Train the Model
model.train(input_fn, steps=num_steps)
# Evaluate the Model
# Define the input function for evaluating
with tf.Graph().as_default():
    saver = tf.train.Saver()
    sess = tf.Session()
    input_fn = tf.estimator.inputs.numpy_input_fn(
        x={'images': mnist.test.images}, y=mnist.test.labels,
        batch_size=batch_size, shuffle=False)
    # Use the Estimator 'evaluate' method
    e = model.evaluate(input_fn)
    print("Testing Accuracy:", e['accuracy'])
    saver.restore(sess, '/tmp/model.ckpt')
    # save_path = saver.save(sess, "/tmp/model.ckpt")
    tf.train.write_graph(tf.get_default_graph(), path_to_model_pb,
                         'saved_model.pb', as_text=False)
Answer 0 (score: 0):
You haven't built your network in the graph you are using. The Estimator builds and uses its own graph, and it should handle session management for you. If you want the model to save its weights, pass a model_dir to the constructor.
model = tf.estimator.Estimator(model_fn, '/tmp/my_model_dir')
model.train(train_input_fn, steps=num_steps)
e = model.evaluate(eval_input_fn)
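Since the original goal was to extract the hidden-layer and output-layer weights, note that once the Estimator has written a checkpoint into model_dir you can read the weights directly, without rebuilding the graph or managing a session. Below is a minimal sketch, not from the original answer, assuming the checkpoint lives in /tmp/my_model_dir and the two dense layers kept the default tf.layers names (dense, dense_1); list the variables first to confirm the actual names in your checkpoint.

model_dir = '/tmp/my_model_dir'  # assumed: the model_dir passed to the Estimator

# Print every variable stored in the latest checkpoint, with its shape
for name, shape in tf.train.list_variables(model_dir):
    print(name, shape)

# Load individual weight matrices as plain numpy arrays
# (layer names below are assumptions; use the listing above to verify)
hidden_weights = tf.train.load_variable(model_dir, 'dense/kernel')    # 784 x 15
hidden_biases = tf.train.load_variable(model_dir, 'dense/bias')
output_weights = tf.train.load_variable(model_dir, 'dense_1/kernel')  # 15 x 10
output_biases = tf.train.load_variable(model_dir, 'dense_1/bias')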
If you want to work with the network outside of the Estimator framework, just call model_fn yourself and load the variables from the model_dir you passed to the Estimator. For example, to run inference on some numpy feature_data:
graph = tf.Graph()
with graph.as_default():
    features = tf.placeholder(shape=feature_shape, dtype=tf.float32)
    labels = None
    mode = tf.estimator.ModeKeys.PREDICT
    # model_fn above expects the features as a dict keyed by 'images'
    spec = model_fn({'images': features}, labels, mode)
    saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
    saver.restore(sess, tf.train.latest_checkpoint('/tmp/my_model_dir'))
    sess.run(spec.predictions, feed_dict={features: feature_data})
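If the end goal is still a standalone .pb with the weights baked in, one option (a sketch added here, not part of the original answer; the paths and output node are assumptions to adjust) is to freeze the restored graph, continuing with the graph, saver and spec built in the snippet above:

with tf.Session(graph=graph) as sess:
    saver.restore(sess, tf.train.latest_checkpoint('/tmp/my_model_dir'))
    # Convert the restored variables into constants so the GraphDef is self-contained
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess, graph.as_graph_def(), output_node_names=[spec.predictions.op.name])
    # Write the frozen graph; it can later be reloaded with tf.import_graph_def
    tf.train.write_graph(frozen_graph_def, '/tmp', 'frozen_model.pb', as_text=False)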