Does the BFCAllocator on GPU really support host-side deallocation before GPU execution takes place?

Date: 2019-04-08 15:08:33

Tags: c++ tensorflow tensorflow-xla

While reading the XLA code in TF, I came across the following code snippet:

https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/jit/xla_launch_util.h#L119

// Adapter class that wraps a Tensorflow allocator as an XLA allocator.
// Assumes that the Tensorflow allocator permits asynchronous deallocation:
// see comment on `AllowsAsynchronousDeallocation()`.
class XlaAllocator : public xla::DeviceMemoryAllocator {
 public:
  XlaAllocator(const se::Platform* platform, Allocator* wrapped);
  ~XlaAllocator() override;
  xla::StatusOr<xla::OwningDeviceMemory> Allocate(
      int device_ordinal, uint64 size, bool retry_on_failure) override;
  Status Deallocate(int device_ordinal, se::DeviceMemoryBase mem) override;

  // The Tensorflow BFC allocator used on GPU allows host-side deallocation
  // before GPU execution takes place. Tensorflow uses the ordering of the main
  // compute stream to enforce a happens-before relationship between a memory
  // allocation and code that reuses the same memory. If Tensorflow adds
  // support for multiple GPU streams or allocators with different ordering
  // requirements, this code may need to change.
  // (This attribute has no effect on CPU.)
  bool AllowsAsynchronousDeallocation() const override { return true; }

 private:
  Allocator* wrapped_;
};

As the comment says, the Tensorflow BFC allocator used on GPU allows host-side deallocation before GPU execution takes place.

This really confuses me. After digging into the BFCAllocator code, I couldn't find anything that supports this.

So my questions are:

  • Does the Tensorflow BFC allocator really support asynchronous deallocation from the host side?
  • If so, what am I missing?

Thanks!

0 Answers