Why does TensorFlow emit a warning instead of an error when it cannot allocate memory?

Time: 2018-03-31 22:03:13

Tags: tensorflow

Sometimes TensorFlow prints warnings along the following lines:

W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.

What is the source of such warnings, and why does it not produce an error?

1 Answer:

Answer 0 (score: 3)

The TensorFlow code from where the warning originates, as shown at line 206:

void* BFCAllocator::AllocateRaw(size_t unused_alignment, size_t num_bytes,
                                const AllocationAttributes& allocation_attr) {
  if (allocation_attr.no_retry_on_failure) {
    // Return immediately upon the first failure if this is for allocating an
    // optional scratch space.
    // etc.

According to the comment, the allocator can be called with the no_retry_on_failure flag set (as in your case) when it is only trying to allocate an optional "scratch space". That extra memory would presumably let some operations run faster, but TensorFlow can do without it if necessary, which is why the failed allocation is reported as a warning rather than an error.
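To illustrate that two-path behavior, here is a minimal, self-contained C++ sketch. It is not the actual TensorFlow implementation: ScratchAllocator, TryAllocate, and the log messages are made up for this example. The point is only that a request marked no_retry_on_failure returns nullptr and logs a warning so the caller can fall back, while a mandatory request is treated as a real failure.

#include <cstddef>
#include <cstdio>
#include <cstdlib>

// Hypothetical allocation attributes mirroring the flag discussed above.
struct AllocationAttributes {
  bool no_retry_on_failure = false;  // caller marks the request as optional
};

class ScratchAllocator {
 public:
  void* AllocateRaw(std::size_t num_bytes, const AllocationAttributes& attr) {
    void* ptr = TryAllocate(num_bytes);
    if (ptr != nullptr) return ptr;

    if (attr.no_retry_on_failure) {
      // Optional scratch space: emit a warning and let the caller fall back
      // to a slower code path that needs less memory.
      std::fprintf(stderr,
                   "W: ran out of memory trying to allocate %zu bytes; "
                   "the caller indicates this is not a failure\n",
                   num_bytes);
      return nullptr;
    }

    // Mandatory allocation: the real allocator would retry (waiting for other
    // tensors to be freed) and eventually report a hard error; here we simply
    // abort to show the contrast.
    std::fprintf(stderr, "E: allocation of %zu bytes failed\n", num_bytes);
    std::abort();
  }

 private:
  void* TryAllocate(std::size_t num_bytes) {
    // Stand-in for carving a chunk out of a fixed GPU memory pool.
    return std::malloc(num_bytes);
  }
};

int main() {
  ScratchAllocator alloc;
  AllocationAttributes optional_request;
  optional_request.no_retry_on_failure = true;

  // A request for optional scratch space: a nullptr result only produces a
  // warning, and the operation proceeds with a less memory-hungry algorithm.
  void* scratch = alloc.AllocateRaw(std::size_t{1} << 20, optional_request);
  if (scratch == nullptr) {
    std::puts("falling back to the slower, low-memory code path");
  } else {
    std::puts("scratch space available, using the faster code path");
    std::free(scratch);
  }
  return 0;
}

In the real BFC allocator the non-optional path retries instead of aborting, but the split sketched above is why the message you quoted is only a warning: the caller asked for memory it can live without.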