TensorFlow: ValueError("GraphDef cannot be larger than 2GB.") raised when doing data augmentation with tf.image and Keras

Date: 2017-04-27 01:42:09

Tags: python tensorflow keras

I am using TensorFlow and keras.preprocessing.image.ImageDataGenerator() to generate synthetic data, so that all classes have a balanced number of samples before training. I get the following error message:

Traceback (most recent call last):
  File "data_augmentation.py", line 100, in <module>
    run(fish_class_aug_fold[i])
  File "data_augmentation.py", line 93, in run
    data_augmentation(img_handle, fish_class, aug_fold)
  File "data_augmentation.py", line 52, in data_augmentation
    img = session.run(img)
  File "/Users/local/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 778, in run
    run_metadata_ptr)
  File "/Users/local/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 982, in _run
    feed_dict_string, options, run_metadata)
  File "/Users/local/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1032, in _do_run
    target_list, options, run_metadata)
  File "/Users/local/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1039, in _do_call
    return fn(*args)
  File "/Users/local/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1017, in _run_fn
    self._extend_graph()
  File "/Users/local/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1061, in _extend_graph
    add_shapes=self._add_shapes)
  File "/Users/local/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2154, in _as_graph_def
    raise ValueError("GraphDef cannot be larger than 2GB.")
ValueError: GraphDef cannot be larger than 2GB.

Here is my code:

import cv2
import tensorflow as tf
from keras.preprocessing.image import ImageDataGenerator

def data_augmentation(img_handle, fish_class, nb_fold):
    """
    Generate synthetic pictures for one class.

    Parameters:
        img_handle: path to one input image
        fish_class: name of the class in this problem
        nb_fold: an integer giving the number of folds to run for each class,
                 so that every class ends up with the same number of images.
    """
    img = cv2.imread(img_handle)

    # randomly adjust the hue of the image
    img = tf.image.random_hue(img, max_delta=0.3)

    # randomly adjust the contrast
    img = tf.image.random_contrast(img, lower=0.3, upper=1.0)

    # randomly adjust the brightness
    img = tf.image.random_brightness(img, max_delta=0.2)

    # randomly adjust the saturation
    img = tf.image.random_saturation(img, lower=0.0, upper=2.0)

    with tf.Session() as session:
        # the output is an np.ndarray
        img = session.run(img)

    datagen = ImageDataGenerator(
        rotation_range=45,
        width_shift_range=0.2,
        height_shift_range=0.2,
        rescale=1./255,
        shear_range=0.2,
        zoom_range=0.2,
        horizontal_flip=True,
        fill_mode='nearest')

    x = img.reshape((1,) + img.shape)  # a NumPy array with shape (1, height, width, 3)

    i = 0
    # data_dir is assumed to be defined elsewhere in the script
    for batch in datagen.flow(x, batch_size=1, save_to_dir=data_dir + fish_class,
                              save_prefix=fish_class, save_format='jpg'):
        i += 1
        if i > nb_fold - 1:
            break

My idea is to randomly alter each input image with the tf.image functions, and then use the output of tf.image as the input to keras.preprocessing.image.ImageDataGenerator() to generate synthetic images before training.

I think the problem comes from session.run(img). I don't understand why this happens or how to fix it.

Any ideas?

Thanks a lot!

1 answer:

Answer 0 (score: 0)

1280 x 720 is probably too large. I ran into the same problem before when doing image/video recognition at similar sizes. Try shrinking your images by a factor of 4 and run it again:

columns = 1280/4
rows = 720/4

img = cv2.imread(img_handle)
img = cv2.resize(img, (columns, rows))
# add the rest of your code here

Also, try using a fresh default graph for each session, to keep the graph from growing past the 2 GB limit:

with tf.Graph().as_default(), tf.Session() as session:
    # note: the ops you run must also be created inside this graph,
    # otherwise session.run() will not find them
    img = session.run(img)
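
More generally, the error appears because every call to data_augmentation() converts the NumPy image into a tf.constant and adds a fresh set of tf.image ops to the same default graph, so the graph grows with every image you process until its serialized GraphDef exceeds 2 GB. A minimal sketch of an alternative, assuming TF 1.x as in the question (image_paths is a stand-in for your own list of input files): build the ops once with a placeholder and feed each image at run time.

import cv2
import tensorflow as tf

# build the augmentation ops once; the pixels are fed in at run time,
# so the image bytes are never baked into the GraphDef
image_ph = tf.placeholder(tf.uint8, shape=(None, None, 3))
aug = tf.image.random_hue(image_ph, max_delta=0.3)
aug = tf.image.random_contrast(aug, lower=0.3, upper=1.0)
aug = tf.image.random_brightness(aug, max_delta=0.2)
aug = tf.image.random_saturation(aug, lower=0.0, upper=2.0)

with tf.Session() as session:
    for path in image_paths:
        # note: cv2.imread returns BGR, while the hue/saturation ops assume RGB
        img = session.run(aug, feed_dict={image_ph: cv2.imread(path)})

Built this way, the graph stays at a constant size no matter how many images you push through it.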

Finally, you may also be interested in visualizing the graph with TensorBoard: https://www.tensorflow.org/get_started/graph_viz
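
If you want to check how large the graph has actually grown, here is a minimal sketch of writing it out for TensorBoard (the log directory name is arbitrary):

import tensorflow as tf

with tf.Session() as session:
    # write the session's graph so TensorBoard can render it
    writer = tf.summary.FileWriter('/tmp/graph_logs', graph=session.graph)
    writer.close()

Then start TensorBoard with: tensorboard --logdir=/tmp/graph_logs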