I am using a DNNLinearCombinedClassifier whose inputs partly come from the output of a modified Transformer. The model is trained with an input_fn that pulls data from a batched Dataset.
When I save the model (as shown below), it always picks up the training batch_size (128 in this example) in the input and output tensor shapes. As a result, inference on a single record fails with:

ValueError: Cannot feed value of shape (1,) for Tensor u'input_fn/xxxx:0', which has shape '(128,)'
Here is the model-saving function:
    import tensorflow as tf

    def export_model():
        with tf.Graph().as_default() as graph:
            # Restore the latest checkpoint into a fresh graph
            chk_pt_path = tf.train.latest_checkpoint(output_path)
            with tf.Session() as session:
                saver_for_restore = tf.train.import_meta_graph(chk_pt_path + '.meta')
                saver_for_restore.restore(session, chk_pt_path)

                # Build TensorInfo protos for the input and output tensors
                input_tensors = [graph.get_tensor_by_name(x) for x in input_tensor_names]
                input_tensor_infos = [tf.saved_model.utils.build_tensor_info(x) for x in input_tensors]
                inputs = dict(zip(input_tensor_names, input_tensor_infos))
                output_tensor = graph.get_tensor_by_name(output_tensor_name)
                output_tensor_info = tf.saved_model.utils.build_tensor_info(output_tensor)

                # Assemble the prediction signature and write the SavedModel
                prediction_signature = tf.saved_model.signature_def_utils.build_signature_def(
                    inputs=inputs,
                    outputs={'prediction': output_tensor_info},
                    method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)
                builder = tf.saved_model.builder.SavedModelBuilder(output_path)
                builder.add_meta_graph_and_variables(
                    session, [tf.saved_model.tag_constants.SERVING],
                    signature_def_map={
                        tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
                            prediction_signature})
                builder.save()
Even when I pass tensor shape (1,) to the input and output TensorInfo objects, the saved model still ends up with input and output tensors of shape (train_batch_size,). The only workaround I have found is to retrain the model with batch size 1, which is very inconvenient. How can I save the model so that its input and output tensors have size 1 (or a dynamic batch size)?
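For context on the shape mismatch: a tensor whose batch dimension is baked in as 128 can only be fed exactly 128 values, whereas a placeholder whose batch dimension is left as None accepts any batch size, including 1. A minimal TF 1.x-style sketch of that difference (the tensor name 'xxxx' mirrors the error message above; the doubling op is just a stand-in for the real model):

```python
import tensorflow as tf

# TF 1.x-style graph mode (required for placeholders under TF 2.x)
tf.compat.v1.disable_eager_execution()

graph = tf.Graph()
with graph.as_default():
    # Batch dimension left as None, so the same graph accepts a single
    # record or a batch of 128 (name 'xxxx' is illustrative only)
    x = tf.compat.v1.placeholder(tf.float32, shape=[None], name='xxxx')
    # Stand-in for the model's prediction op
    y = tf.identity(x * 2.0, name='prediction')

with tf.compat.v1.Session(graph=graph) as sess:
    single = sess.run(y, feed_dict={x: [3.0]})       # batch of 1 works
    batch = sess.run(y, feed_dict={x: [1.0] * 128})  # batch of 128 works too
```

With shape=[128] instead of shape=[None], the first sess.run call would raise exactly the ValueError quoted above.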