TL;DR: My question is how to load compressed video frames from TFRecords.
I am setting up a data pipeline for training a deep learning model on a large video dataset (Kinetics). For this I am using TensorFlow, more specifically the tf.data.Dataset and TFRecordDataset structures. Since the dataset contains roughly 300,000 videos of 10 seconds each, there is a lot of data to deal with. During training I want to randomly sample 64 consecutive frames from a video, so fast random sampling is important. Several data-loading schemes are possible during training:
- Load the videos with ffmpeg or OpenCV and sample frames from them. Seeking inside a video is not ideal, and decoding a video stream is much slower than decoding JPGs.
- Store the data in TFRecords or HDF5 files. This requires more work to get the pipeline ready, but is most likely the fastest of these options.
I decided to go with the last option and use TFRecord files to store a preprocessed version of the dataset. However, this also turns out to be less straightforward than it seems, for example:
I wrote the following code to preprocess the video dataset and write the video frames to TFRecord files (each about 5GB in size):
def _int64_feature(value):
    """Wrapper for inserting int64 features into Example proto."""
    if not isinstance(value, list):
        value = [value]
    return tf.train.Feature(int64_list=tf.train.Int64List(value=value))

def _bytes_feature(value):
    """Wrapper for inserting bytes features into Example proto."""
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
with tf.python_io.TFRecordWriter(output_file) as writer:
    # Read and resize all video frames, np.uint8 of size [N,H,W,3]
    frames = ...

    features = {}
    features['num_frames'] = _int64_feature(frames.shape[0])
    features['height'] = _int64_feature(frames.shape[1])
    features['width'] = _int64_feature(frames.shape[2])
    features['channels'] = _int64_feature(frames.shape[3])
    features['class_label'] = _int64_feature(example['class_id'])
    features['class_text'] = _bytes_feature(tf.compat.as_bytes(example['class_label']))
    features['filename'] = _bytes_feature(tf.compat.as_bytes(example['video_id']))

    # Compress the frames using JPG and store as bytes in:
    # 'frames/000001', 'frames/000002', ...
    for i in range(len(frames)):
        ret, buffer = cv2.imencode(".jpg", frames[i])
        features["frames/{:04d}".format(i)] = _bytes_feature(tf.compat.as_bytes(buffer.tobytes()))

    tfrecord_example = tf.train.Example(features=tf.train.Features(feature=features))
    writer.write(tfrecord_example.SerializeToString())
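As an aside, the frames = ... line in the snippet above is elided. One possible way to fill it in is to read and resize the frames with OpenCV; the following is only a hypothetical sketch (the 256x256 target size and the helper name load_video_frames are assumptions, not part of my actual code):

import cv2
import numpy as np

def load_video_frames(video_path, target_height=256, target_width=256):
    """Hypothetical helper: read all frames of a video with OpenCV and resize them.

    Returns a np.uint8 array of shape [N, H, W, 3] in BGR channel order,
    matching what cv2.imencode(".jpg", ...) expects later on.
    """
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        # cv2.resize takes (width, height)
        frames.append(cv2.resize(frame, (target_width, target_height)))
    cap.release()
    return np.stack(frames).astype(np.uint8)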
The writer code works well: the dataset is written out nicely as TFRecord files with the frames stored as compressed JPG bytes. My question is how to read the TFRecord files during training, randomly sample 64 consecutive frames from a video, and decode the JPG images.
According to TensorFlow's documentation on tf.data, we need to do something like the following:
filenames = tf.placeholder(tf.string, shape=[None])
dataset = tf.data.TFRecordDataset(filenames)
dataset = dataset.map(...) # Parse the record into tensors.
dataset = dataset.repeat() # Repeat the input indefinitely.
dataset = dataset.batch(32)
iterator = dataset.make_initializable_iterator()
training_filenames = ["/var/data/file1.tfrecord", "/var/data/file2.tfrecord"]
sess.run(iterator.initializer, feed_dict={filenames: training_filenames})
There are plenty of examples of how to do this with images, and that part is quite straightforward. However, for video frames and random sampling I am stuck. The tf.train.Features object stores the frames as 'frames/000001', 'frames/000002', and so on. My first question is how to randomly sample a set of consecutive frames from within the dataset.map() function. A complicating factor is that, due to JPG compression, each frame has a variable number of bytes and needs to be decoded with tf.image.decode_jpeg.
Any help on how best to set up reading sampled video frames from TFRecord files would be greatly appreciated!
Answer 0 (score: 6)
Encoding each frame as a separate feature makes it difficult to select frames dynamically, because the signature of tf.parse_example() (and tf.parse_single_example()) requires the set of parsed feature names to be fixed at graph construction time. However, you could try encoding the frames as a single feature that contains a list of JPEG-encoded strings:
def _bytes_list_feature(values):
    """Wrapper for inserting bytes features into Example proto."""
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=values))

with tf.python_io.TFRecordWriter(output_file) as writer:
    # Read and resize all video frames, np.uint8 of size [N,H,W,3]
    frames = ...

    features = {}
    features['num_frames'] = _int64_feature(frames.shape[0])
    features['height'] = _int64_feature(frames.shape[1])
    features['width'] = _int64_feature(frames.shape[2])
    features['channels'] = _int64_feature(frames.shape[3])
    features['class_label'] = _int64_feature(example['class_id'])
    features['class_text'] = _bytes_feature(tf.compat.as_bytes(example['class_label']))
    features['filename'] = _bytes_feature(tf.compat.as_bytes(example['video_id']))

    # Compress the frames using JPG and store as a list of strings in 'frames'
    encoded_frames = [tf.compat.as_bytes(cv2.imencode(".jpg", frame)[1].tobytes())
                      for frame in frames]
    features['frames'] = _bytes_list_feature(encoded_frames)

    tfrecord_example = tf.train.Example(features=tf.train.Features(feature=features))
    writer.write(tfrecord_example.SerializeToString())
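For a quick sanity check of the rewritten writer, you could read one record back and inspect the stored features with tf.python_io.tf_record_iterator (a small sketch, assuming output_file points at one of the written TFRecord files):

# Read back the first record and inspect the stored features.
record_iterator = tf.python_io.tf_record_iterator(path=output_file)
example = tf.train.Example()
example.ParseFromString(next(record_iterator))
print(example.features.feature["num_frames"].int64_list.value[0])  # number of frames
print(len(example.features.feature["frames"].bytes_list.value))    # number of stored JPG strings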
Once you have done that, you can slice the frames feature dynamically using a modified version of your parsing code:
def decode(serialized_example, sess):
    # Prepare feature list; read encoded JPG images as bytes
    features = dict()
    features["class_label"] = tf.FixedLenFeature((), tf.int64)
    features["frames"] = tf.VarLenFeature(tf.string)
    features["num_frames"] = tf.FixedLenFeature((), tf.int64)

    # Parse into tensors
    parsed_features = tf.parse_single_example(serialized_example, features)

    # Randomly sample an offset from the valid range
    random_offset = tf.random_uniform(
        shape=(), minval=0,
        maxval=parsed_features["num_frames"] - SEQ_NUM_FRAMES, dtype=tf.int64)

    offsets = tf.range(random_offset, random_offset + SEQ_NUM_FRAMES)

    # Decode the encoded JPG images; dtype must be given because the output
    # type (uint8 image) differs from the input type (int64 offset)
    images = tf.map_fn(lambda i: tf.image.decode_jpeg(parsed_features["frames"].values[i]),
                       offsets, dtype=tf.uint8)

    label = tf.cast(parsed_features["class_label"], tf.int64)
    return images, label
(Note that I was not able to run your code, so there may be some small mistakes, but hopefully this is enough to get you started.)
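To hook this into your input pipeline, a minimal sketch along the lines of your snippet above could look as follows (the shuffle buffer, BATCH_SIZE and prefetch value are placeholders you would need to tune):

# Minimal sketch of wiring decode() into the pipeline; buffer sizes and BATCH_SIZE are placeholders.
filenames = tf.placeholder(tf.string, shape=[None])
dataset = tf.data.TFRecordDataset(filenames)
# decode() has an unused `sess` argument, so wrap it in a lambda.
dataset = dataset.map(lambda serialized: decode(serialized, None))
dataset = dataset.shuffle(buffer_size=100)  # shuffle at the video level
dataset = dataset.repeat()                  # repeat the input indefinitely
dataset = dataset.batch(BATCH_SIZE)         # [BATCH_SIZE, SEQ_NUM_FRAMES, H, W, 3]
dataset = dataset.prefetch(1)

iterator = dataset.make_initializable_iterator()
images, labels = iterator.get_next()

training_filenames = ["/var/data/file1.tfrecord", "/var/data/file2.tfrecord"]
with tf.Session() as sess:
    sess.run(iterator.initializer, feed_dict={filenames: training_filenames})
    batch_images, batch_labels = sess.run([images, labels])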
Answer 1 (score: 3)
Since you are using very similar dependencies, I suggest taking a look at the following Python package, as it addresses exactly your problem setting:
pip install video2tfrecord
or see https://github.com/ferreirafabio/video2tfrecord.
It should also be adaptable enough to work with tf.data.Dataset.