TensorFlow uses too much memory when initializing variables from constants

Asked: 2017-10-16 15:43:46

Tags: python numpy memory-management tensorflow

I recently noticed something strange: TensorFlow seems to use far too much memory when initializing a variable from a constant. Can someone help me understand the example below?

$ python -m memory_profiler test.py 
[0 1 2 3 4 5 6 7 8 9]
Filename: test.py

Line #    Mem usage    Increment   Line Contents
================================================
 4  144.531 MiB    0.000 MiB   @profile
 5                             def go():
 6  907.312 MiB  762.781 MiB    a = np.arange(100000000)
 7  910.980 MiB    3.668 MiB    s = tf.Session()
 8 1674.133 MiB  763.152 MiB    b = tf.Variable(a)
 9 3963.000 MiB 2288.867 MiB    s.run(tf.variables_initializer([b]))
10 3963.145 MiB    0.145 MiB    print(s.run(b)[:10])

1 Answer:

Answer 0 (score: 6):

  • You have 800MB stored in the numpy array.
  • tf.Variable(a) is equivalent to tf.Variable(tf.constant(a)). To create this constant, the Python client appends a 900MB constant to the Graph object in the Python runtime.
  • Session.run triggers TF_ExtendGraph, which transfers the graph to the TensorFlow C runtime: another 900MB.
  • The session allocates a further 900MB for the variable b in the TensorFlow runtime.

That adds up to 3600MB of memory allocations. To save memory, you can do something like the following instead of passing the array directly to tf.Variable.

TL;DR: avoid creating large constants.