TensorFlow: "GraphDef cannot be larger than 2GB." Error when saving the model after assigning variables

Date: 2017-02-22 10:10:50

Tags: python tensorflow deep-learning word-embedding

I want to use a pre-trained model to warm-start another, slightly different model. Simply put, I create a new model and assign the variables with the same names using the pre-trained model's weights. However, when saving the model, an error occurred.

Traceback (most recent call last):
  File "tf_test.py", line 23, in <module>
    save_path = saver.save(sess, "./model.ckpt")
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 1308, in save
    self.export_meta_graph(meta_graph_filename)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 1331, in export_meta_graph
    graph_def=ops.get_default_graph().as_graph_def(add_shapes=True),
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2268, in as_graph_def
    result, _ = self._as_graph_def(from_version, add_shapes)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2231, in _as_graph_def
    raise ValueError("GraphDef cannot be larger than 2GB.")
ValueError: GraphDef cannot be larger than 2GB.

The sample code is as follows:

import tensorflow as tf
import numpy as np

v1 = tf.get_variable("L_enc", [400000, 1024])
v2 = tf.get_variable("L_dec", [400000, 1024])

init_op = tf.initialize_all_variables()

saver = tf.train.Saver(tf.all_variables())

with tf.Session() as sess:
  sess.run(init_op)
  for v in tf.trainable_variables():
    embedding = np.random.uniform(-1, 1, (400000, 1024))
    sess.run(v.assign(embedding))
  # Save the variables to disk.
  save_path = saver.save(sess, "./model.ckpt")
  print("Model saved in file: %s" % save_path)

2 answers:

Answer 0 (score: 6)

Fabrizio correctly points out that there is a hard 2GB limit on the size of a protocol buffer, but you might be wondering why your program hits that limit. The problem stems from these lines:

for v in tf.trainable_variables():
  embedding = np.random.uniform(-1, 1, (400000, 1024))
  sess.run(v.assign(embedding))

When execution hits sess.run(v.assign(embedding)), new nodes are added to the TensorFlow graph. In particular, each embedding array is converted to a tf.constant() tensor, which is very large (approximately 3.28 GB, since np.random.uniform returns float64, i.e. 8 bytes per element).
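As a back-of-the-envelope check (a sketch, not part of the original answer): each 400000 × 1024 float64 array baked into the graph as a constant already exceeds the 2 GB protobuf limit on its own, so the graph cannot be serialized even once the first assignment is added.

```python
# Size of one embedding constant embedded in the GraphDef.
# np.random.uniform returns float64: 8 bytes per element.
rows, cols, bytes_per_elem = 400000, 1024, 8
const_bytes = rows * cols * bytes_per_elem
print(const_bytes)             # 3276800000 bytes, i.e. ~3.28 GB
print(const_bytes > 2**31 - 1) # True: exceeds the 2 GB protobuf limit
```

This is why loading the weights via the feed mechanism (placeholders) or a Saver, which keeps them out of the GraphDef, avoids the error.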

The best way to avoid this is to load the variables from the previous model directly into your new model using a tf.train.Saver. Since the models may have different structures, you may need to specify a mapping from the variable names in the old model to the tf.Variable objects in your new model.
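A minimal sketch of that mapping, using TensorFlow 1.x APIs as in the question (the checkpoint path "./old_model.ckpt" and the old variable names "old/L_enc" and "old/L_dec" are hypothetical placeholders, not from the original post):

```python
import tensorflow as tf

# Variables in the NEW model, as in the question.
v1 = tf.get_variable("L_enc", [400000, 1024])
v2 = tf.get_variable("L_dec", [400000, 1024])

# Saver with a dict var_list: keys are the names under which the variables
# were saved in the OLD checkpoint, values are the new tf.Variable objects.
restore_saver = tf.train.Saver({"old/L_enc": v1, "old/L_dec": v2})

with tf.Session() as sess:
  restore_saver.restore(sess, "./old_model.ckpt")  # hypothetical path
```

Because restore() feeds the saved values into the existing variables rather than adding constants to the graph, the GraphDef stays small.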

Another way to solve the problem is to pre-create a tf.placeholder() op for assigning a value to each variable. This may require more restructuring of your actual code, but the following worked for me:

import tensorflow as tf
import numpy as np

v1 = tf.get_variable("L_enc", [400000, 1024])
v2 = tf.get_variable("L_dec", [400000, 1024])

# Define a separate placeholder and assign op for each variable, so
# that we can feed the initial value without adding it to the graph.
all_vars = [v1, v2]
placeholders = [tf.placeholder(tf.float32, shape=[400000, 1024]) for v in all_vars]
assign_ops = [v.assign(p) for (v, p) in zip(all_vars, placeholders)]

init_op = tf.initialize_all_variables()

saver = tf.train.Saver(tf.all_variables())

with tf.Session() as sess:
  sess.run(init_op)
  for p, assign_op in zip(placeholders, assign_ops):
    embedding = np.random.uniform(-1, 1, (400000, 1024))
    sess.run(assign_op, {p: embedding})
  # Save the variables to disk.
  save_path = saver.save(sess, "./model.ckpt")
  print("Model saved in file: %s" % save_path)

Answer 1 (score: 0)

There is a hard limit of 2GB for serializing individual tensors because of the 32-bit signed size in protobuf.

https://github.com/tensorflow/tensorflow/issues/4291