tf.gfile.GFile causes a memory leak inside a Docker container

Date: 2019-03-19 16:46:17

Tags: docker tensorflow file-io memory-leaks bucket

I am using this function to read image data from a GCS bucket:

import logging

import cv2
import numpy as np
import tensorflow as tf
from tqdm import tqdm


def _load_image(path, height, width):
    try:
        with tf.gfile.GFile(path, 'rb') as fl:
            image_bytes = fl.read()
            # Decode the raw bytes into an image array and convert to float32
            image = cv2.imdecode(np.frombuffer(image_bytes, np.uint8), -1)
            return image.astype(np.float32)
    except Exception as e:
        logging.exception('Error processing image %s: %s', path, str(e))
        return None

It is called inside a loop that reads the image data and writes it out to a .tfrecord file:

def _write_tf_records(examples, output_filename):
    writer = tf.python_io.TFRecordWriter(output_filename)
    for example in tqdm(examples):
        try:
            image = _load_image(example['path'], height=HEIGHT, width=WIDTH)
            if image is not None:
                # Re-encode the decoded image as JPEG bytes for the TFRecord
                encoded_img_string = cv2.imencode('.jpg', image)[1].tostring()
                feature = {
                    'train/label': _bytes_feature(tf.compat.as_bytes(example['classname'])),
                    'train/image': _bytes_feature(tf.compat.as_bytes(encoded_img_string))
                }
                tf_example = tf.train.Example(features=tf.train.Features(feature=feature))
                writer.write(tf_example.SerializeToString())
        except Exception as e:
            print(e)
    writer.close()
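
For completeness, _bytes_feature is the usual TFRecord bytes helper, and the function is driven by a list of dicts with 'path' and 'classname' keys. Roughly like this (the sizes, paths, and example call below are only placeholders, not my real values):

    HEIGHT, WIDTH = 224, 224  # placeholder sizes

    def _bytes_feature(value):
        # Standard TFRecord helper: wrap raw bytes in a tf.train.Feature
        return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

    # Hypothetical call:
    # examples = [{'path': 'gs://my-bucket/images/img_0001.jpg', 'classname': 'cat'}, ...]
    # _write_tf_records(examples, 'train.tfrecord')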

When I execute this function on my local machine, it reads the images from my GCS bucket without problems. However, when I run the same Python script inside a Docker container, it starts consuming RAM rapidly and eventually crashes (apparently a memory leak).

I have ruled out other possible causes and am fairly certain that GFile.read() is what triggers the problem.
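
A minimal way to isolate it is to call GFile.read() in a tight loop on its own and watch resident memory, for example like this (the psutil reporting and the path below are placeholders for illustration, not part of the actual pipeline):

    import psutil
    import tensorflow as tf

    def _leak_probe(path, iterations=1000):
        # Repeatedly read the same GCS object and print resident memory (RSS),
        # so any growth attributable to GFile.read() alone becomes visible.
        process = psutil.Process()
        for i in range(iterations):
            with tf.gfile.GFile(path, 'rb') as fl:
                _ = fl.read()
            if i % 100 == 0:
                print('iteration %d: RSS %.1f MB' % (i, process.memory_info().rss / 1e6))

    # _leak_probe('gs://my-bucket/images/img_0001.jpg')  # placeholder path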

Any suggestions on how to resolve this?

0 Answers:

No answers yet.