I trained a model on a GPU and saved it like this (export_path is my output directory):

    builder = tf.saved_model.builder.SavedModelBuilder(export_path)
    tensor_info_x = tf.saved_model.utils.build_tensor_info(self.Xph)
    tensor_info_y = tf.saved_model.utils.build_tensor_info(self.predprob)
    tensor_info_it = tf.saved_model.utils.build_tensor_info(self.istraining)
    tensor_info_do = tf.saved_model.utils.build_tensor_info(self.dropout)
    prediction_signature = (
        tf.saved_model.signature_def_utils.build_signature_def(
            inputs={'myx': tensor_info_x, 'istraining': tensor_info_it, 'dropout': tensor_info_do},
            outputs={'ypred': tensor_info_y},
            method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME))
    builder.add_meta_graph_and_variables(
        net, [tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
                prediction_signature})
    builder.save()

Now I'm trying to load it and run predictions. This works fine if I'm on a GPU, but I don't have a GPU available here:

    tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for operation 'rnn/while/rnn/multi_rnn_cell/cell_0/cell_0/layer_norm_basic_lstm_cell/dropout/add/Enter': Operation was explicitly assigned to /device:GPU:0 but available devices are [ /job:localhost/replica:0/task:0/device:CPU:0 ]. Make sure the device specification refers to a valid device.

Now, I have read about tf.train.import_meta_graph and its clear_devices option, but I could not make it work. I'm loading my model like this:
The error above is raised at that point. modelname is the full filename of the pb file. Is there a way to iterate over the graph's nodes and set the device manually (or something along those lines)? I'm using TensorFlow 1.8.0.
I have seen Can a model trained on gpu used on cpu for inference and vice versa?, and I don't think this is a duplicate. The difference from that question is that I want to know what to do after training has already finished.
Answer 0 (score: 2)
I ended up re-saving the model on my GPU machine with clear_devices=True and then moving the saved model over to the CPU-only machine. I couldn't find a concrete solution anywhere, so I'm posting my script below:
    import tensorflow as tf

    # m is the path to the original SavedModel directory
    with tf.Session(graph=tf.Graph()) as sess:
        tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING], m)
        loaded_graph = tf.get_default_graph()
        x = loaded_graph.get_tensor_by_name('myx:0')
        dropout = loaded_graph.get_tensor_by_name('mydropout:0')
        y = loaded_graph.get_tensor_by_name('myy:0')

        export_path = 'somedirectory'
        builder = tf.saved_model.builder.SavedModelBuilder(export_path + '/mymodel')
        tensor_info_x = tf.saved_model.utils.build_tensor_info(x)
        tensor_info_y = tf.saved_model.utils.build_tensor_info(y)
        tensor_info_do = tf.saved_model.utils.build_tensor_info(dropout)
        prediction_signature = (
            tf.saved_model.signature_def_utils.build_signature_def(
                inputs={'myx': tensor_info_x, 'mydropout': tensor_info_do},
                outputs={'myy': tensor_info_y},
                method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME))
        builder.add_meta_graph_and_variables(
            sess, [tf.saved_model.tag_constants.SERVING],
            signature_def_map={
                tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
                    prediction_signature},
            clear_devices=True)
        builder.save()