TensorFlow OOM after freezing the graph

Date: 2017-03-07 09:35:24

Tags: tensorflow

I am running a seq2seq model with TensorFlow. Inference works fine when the parameters are loaded from checkpoint files with tf.train.Saver. But after exporting the graph with freeze_graph.py (which uses tf.framework.graph_util.convert_variables_to_constants()) and importing it with tf.import_graph_def in the inference program, it runs out of memory (OOM).
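For reference, the freeze-and-reimport flow described above looks roughly like this (a minimal toy graph standing in for the actual seq2seq model; all names here are illustrative, and `tf.compat.v1` is used so the sketch runs on both TF 1.x and 2.x):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Toy stand-in for the checkpointed model; the real seq2seq graph is
# far larger, but the freeze/import mechanics are the same.
g = tf.Graph()
with g.as_default():
    x = tf.placeholder(tf.float32, shape=[None, 4], name="x")
    w = tf.Variable(tf.ones([4, 2]), name="w")
    y = tf.matmul(x, w, name="y")
    with tf.Session(graph=g) as sess:
        sess.run(tf.global_variables_initializer())
        # This is what freeze_graph.py does under the hood: variable
        # values get baked into Const nodes of the GraphDef.
        frozen = tf.graph_util.convert_variables_to_constants(
            sess, g.as_graph_def(), ["y"])

# Re-import the frozen GraphDef into a fresh graph for inference.
g2 = tf.Graph()
with g2.as_default():
    tf.import_graph_def(frozen, name="")
    with tf.Session(graph=g2) as sess:
        out = sess.run("y:0", feed_dict={"x:0": [[1.0, 2.0, 3.0, 4.0]]})
        print(out)  # each output element sums the inputs: [[10. 10.]]
```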

Here is part of the error log:

W tensorflow/core/common_runtime/bfc_allocator.cc:274] ****************************************************************************************************
W tensorflow/core/common_runtime/bfc_allocator.cc:275] Ran out of memory trying to allocate 4.0KiB.  See logs for memory state.
W tensorflow/core/framework/op_kernel.cc:983] Internal: Dst tensor is not initialized.
E tensorflow/core/common_runtime/executor.cc:594] Executor failed to create kernel. Internal: Dst tensor is not initialized.
     [[Node: embedding_attention_seq2seq/embedding_attention_decoder/attention_decoder/AttnV_0 = Const[dtype=DT_FLOAT, value=Tensor<type: float shape: [1024] values: -0.016628871 -0.2054652 -0.045054652...>, _device="/job:localhost/replica:0/task:0/gpu:0"]()]]
Traceback (most recent call last):
  File "inference.py", line 88, in console_main
    result = list(inference(source_sentence))
  File "inference.py", line 54, in inference
    for sequence in result:
  File "/data/experiment/decoder.py", line 115, in search_best_sequence
    State.batch_predict(self.session, self.model, self.context, beam)
  File "/data/experiment/decoder.py", line 82, in batch_predict
    state_list[0].depth)
  File "/data/experiment/seq2seq_model.py", line 452, in batch_feed_decoder
    log_softmax, attns, state = session.run(output_fetch, input_feed)
  File "/home/.conda/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 767, in run
    run_metadata_ptr)
  File "/home/.conda/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 966, in _run
    feed_dict_string, options, run_metadata)
  File "/home/.conda/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1016, in _do_run
    target_list, options, run_metadata)
  File "/home/.conda/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1036, in _do_call
    raise type(e)(node_def, op, message)
InternalError: Dst tensor is not initialized.
     [[Node: embedding_attention_seq2seq/embedding_attention_decoder/attention_decoder/AttnV_0 = Const[dtype=DT_FLOAT, value=Tensor<type: float shape: [1024] values: -0.016628871 -0.2054652 -0.045054652...>, _device="/job:localhost/replica:0/task:0/gpu:0"]()]]

Caused by op u'embedding_attention_seq2seq/embedding_attention_decoder/attention_decoder/AttnV_0', defined at:
  File "inference.py", line 169, in <module>
    tf.app.run()
  File "/home/.conda/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 44, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "inference.py", line 165, in main
    console_main(session)
  File "inference.py", line 66, in console_main
    model = create_model(session, False)
  File "/data/experiment/model.py", line 145, in create_model
    tensor_name_pickle=tensor_name_pickle)
  File "/data/experiment/seq2seq_model.py", line 106, in __init__
    tf.import_graph_def(graph_def, name="")
  File "/home/.conda/lib/python2.7/site-packages/tensorflow/python/framework/importer.py", line 287, in import_graph_def
    op_def=op_def)
  File "/home/.conda/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2395, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/home/.conda/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1264, in __init__
    self._traceback = _extract_stack()

InternalError (see above for traceback): Dst tensor is not initialized.
     [[Node: embedding_attention_seq2seq/embedding_attention_decoder/attention_decoder/AttnV_0 = Const[dtype=DT_FLOAT, value=Tensor<type: float shape: [1024] values: -0.016628871 -0.2054652 -0.045054652...>, _device="/job:localhost/replica:0/task:0/gpu:0"]()]]

I suspect this is caused by the memory footprint of the tf.Constant nodes. Has anyone run into this problem?
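One detail in the log worth noting: the failing Const node carries a hard device assignment (`_device="/job:localhost/replica:0/task:0/gpu:0"`), which freezing preserves. A hypothetical thing to try (a sketch of the general technique, not a confirmed fix for this exact error) is clearing those pinned devices before importing the frozen GraphDef; freeze_graph.py also exposes a `clear_devices` option for the same purpose:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Build a graph with a constant explicitly pinned to gpu:0, mimicking
# the pinned AttnV_0 node in the log above (names are illustrative).
g = tf.Graph()
with g.as_default():
    with tf.device("/gpu:0"):
        tf.constant([1.0, 2.0], name="pinned_const")

graph_def = g.as_graph_def()

# Clear the hard-coded device fields so the importer is free to place
# the constants wherever memory is available.
for node in graph_def.node:
    node.device = ""

g2 = tf.Graph()
with g2.as_default():
    tf.import_graph_def(graph_def, name="")

# The re-imported op no longer carries a device constraint.
print(g2.get_operation_by_name("pinned_const").device)
```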

1 answer:

Answer 0 (score: 0)

I hit the same problem, though while trying to load the model and run inference from a C++ application via the C API. After a lot of churning and testing, the culprit appears to be the frozen graph and freeze_graph.py itself. It is probably some kind of bug. There are actually multiple issue reports on the TF GitHub repo, but they were closed due to inactivity, e.g. here and here. I guess obscure model-freezing bugs don't get much priority.

In my case the model's .pb file was about 500 MB, and it took about 10 GB of RAM while running a session. Not only did it consume an exorbitant amount of RAM, it was also orders of magnitude slower.

When I switched to just loading a SavedModel directory, everything worked fine. I'm not sure how to achieve that in Python, but for the C code I replaced the TF_GraphImportGraphDef() call with TF_LoadSessionFromSavedModel().
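For the Python side left open above, loading a SavedModel instead of a frozen GraphDef looks roughly like this in the TF 1.x-style API (the toy model, names, and export directory are illustrative assumptions, not the asker's actual seq2seq model):

```python
import os
import tempfile

import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

export_dir = os.path.join(tempfile.mkdtemp(), "toy_model")

# Export a toy model as a SavedModel (stands in for the real export step).
g = tf.Graph()
with g.as_default():
    x = tf.placeholder(tf.float32, shape=[None, 3], name="x")
    w = tf.Variable(tf.ones([3, 1]), name="w")
    y = tf.matmul(x, w, name="y")
    with tf.Session(graph=g) as sess:
        sess.run(tf.global_variables_initializer())
        tf.saved_model.simple_save(sess, export_dir,
                                   inputs={"x": x}, outputs={"y": y})

# Load it back: the Python analogue of TF_LoadSessionFromSavedModel().
# "serve" is the default tag that simple_save attaches.
sess = tf.Session(graph=tf.Graph())
with sess.graph.as_default():
    tf.saved_model.loader.load(sess, ["serve"], export_dir)
    out = sess.run("y:0", feed_dict={"x:0": [[1.0, 2.0, 3.0]]})
    print(out)  # [[6.]]
```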

I used TF v1.14.0. I built the library myself with Bazel rather than using the stock build. I can provide a few details here and there if anyone is interested; I'm just not sure where to start, since it involved a lot of trial and error for me.