I am currently trying to serve a trained "textsum" model with TensorFlow Serving. I am using TF 0.11, which, after some reading, appears to call export_meta_graph automatically, creating the exported ckpt and ckpt.meta files.
Under the textsum/log_root directory I have several files. One is model.ckpt-230381 and another is model.ckpt-230381.meta.
So my understanding is that this is the location I should point to when setting up the model for serving. I issued the following commands:
bazel build //tensorflow_serving/model_servers:tensorflow_model_server
bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9000 --model_name=model --model_base_path=tf_models/textsum/log_root/
After running the above, I receive the following message:
W tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:204] No versions of servable model found under base path tf_models/textsum/log_root/
After running inspect_checkpoint on the checkpoint file, I see:
> I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcublas.so locally
> I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcudnn.so locally
> I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcufft.so locally
> I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcuda.so.1 locally
> I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcurand.so locally
> seq2seq/output_projection/w (DT_FLOAT) [256,335906]
> seq2seq/output_projection/v (DT_FLOAT) [335906]
> seq2seq/encoder3/BiRNN/FW/LSTMCell/B (DT_FLOAT) [1024]
> seq2seq/encoder3/BiRNN/BW/LSTMCell/W_0 (DT_FLOAT) [768,1024]
> seq2seq/encoder3/BiRNN/BW/LSTMCell/B (DT_FLOAT) [1024]
> seq2seq/encoder2/BiRNN/FW/LSTMCell/B (DT_FLOAT) [1024]
> seq2seq/decoder/attention_decoder/Linear/Bias (DT_FLOAT) [128]
> seq2seq/decoder/attention_decoder/AttnW_0 (DT_FLOAT) [1,1,512,512]
> seq2seq/decoder/attention_decoder/AttnV_0 (DT_FLOAT) [512]
> seq2seq/encoder0/BiRNN/FW/LSTMCell/W_0 (DT_FLOAT) [384,1024]
> seq2seq/decoder/attention_decoder/LSTMCell/W_0 (DT_FLOAT) [384,1024]
> seq2seq/encoder1/BiRNN/BW/LSTMCell/W_0 (DT_FLOAT) [768,1024]
> global_step (DT_INT32) []
> seq2seq/encoder1/BiRNN/BW/LSTMCell/B (DT_FLOAT) [1024]
> seq2seq/decoder/attention_decoder/AttnOutputProjection/Linear/Bias (DT_FLOAT) [256]
> seq2seq/decoder/attention_decoder/Attention_0/Linear/Matrix (DT_FLOAT) [512,512]
> seq2seq/decoder/attention_decoder/Attention_0/Linear/Bias (DT_FLOAT) [512]
> seq2seq/encoder2/BiRNN/BW/LSTMCell/B (DT_FLOAT) [1024]
> seq2seq/decoder/attention_decoder/Linear/Matrix (DT_FLOAT) [640,128]
> seq2seq/decoder/attention_decoder/AttnOutputProjection/Linear/Matrix (DT_FLOAT) [768,256]
> seq2seq/embedding/embedding (DT_FLOAT) [335906,128]
> seq2seq/encoder0/BiRNN/BW/LSTMCell/B (DT_FLOAT) [1024]
> seq2seq/encoder3/BiRNN/FW/LSTMCell/W_0 (DT_FLOAT) [768,1024]
> seq2seq/encoder0/BiRNN/BW/LSTMCell/W_0 (DT_FLOAT) [384,1024]
> seq2seq/encoder0/BiRNN/FW/LSTMCell/B (DT_FLOAT) [1024]
> seq2seq/decoder/attention_decoder/LSTMCell/B (DT_FLOAT) [1024]
> seq2seq/encoder1/BiRNN/FW/LSTMCell/B (DT_FLOAT) [1024]
> seq2seq/encoder2/BiRNN/FW/LSTMCell/W_0 (DT_FLOAT) [768,1024]
> seq2seq/encoder1/BiRNN/FW/LSTMCell/W_0 (DT_FLOAT) [768,1024]
> seq2seq/encoder2/BiRNN/BW/LSTMCell/W_0 (DT_FLOAT) [768,1024]
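(For completeness, a listing like the one above can also be produced by reading the checkpoint directly in Python; the snippet below is only a rough sketch, with the checkpoint path assumed from the files mentioned earlier.)
from tensorflow.python import pywrap_tensorflow

# Read the checkpoint written under log_root and dump variable names/shapes.
ckpt_path = "tf_models/textsum/log_root/model.ckpt-230381"
reader = pywrap_tensorflow.NewCheckpointReader(ckpt_path)
for name, shape in sorted(reader.get_variable_to_shape_map().items()):
    print(name, shape)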
Am I misunderstanding what needs to happen for the export? Any ideas as to why the model is not being found?
Answer 0: (score: 0)
Although I am still working on getting the textsum model exported for TensorFlow Serving, my problem seems to have been that I assumed the files the model saves during training (mentioned above) were the same files created when exporting the model. Based on the answer I received on git, that does not appear to be the case; I actually have to run an export on the model itself. At that point, TF Serving should be able to see the model.
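As an illustration only (not the actual textsum export; the session, saver, and input/output tensor names below are placeholders), an export for this era of TF Serving typically went through the session_bundle Exporter, roughly like this:
import tensorflow as tf
from tensorflow.contrib.session_bundle import exporter

# Sketch only: `sess`, `saver`, `article_tensor`, and `summary_tensor` stand in
# for the real objects built from the textsum graph.
model_exporter = exporter.Exporter(saver)
model_exporter.init(
    sess.graph.as_graph_def(),
    named_graph_signatures={
        'inputs': exporter.generic_signature({'articles': article_tensor}),
        'outputs': exporter.generic_signature({'summaries': summary_tensor})})
# Writes a numbered version subdirectory (e.g. 00000001/) under the export path,
# which is what tensorflow_model_server scans for under --model_base_path.
model_exporter.export('tf_models/textsum/log_root', tf.constant(1), sess)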