Video frames as input to a TensorFlow graph

Time: 2017-03-23 14:18:58

Tags: python opencv tensorflow video-streaming video-processing

More specifically: how do I create a custom reader that reads frames from a video and feeds them into a TensorFlow model graph?

Second, if it is possible, how would I use OpenCV to decode the frames inside such a custom reader?

Is there any code (in Python) that illustrates this?

I am mainly working on emotion recognition from facial expressions, and the inputs in my database are videos.

Finally, I tried using a Queue and a QueueRunner together with a Coordinator, hoping they would solve the problem at hand. According to the TensorFlow documentation, the QueueRunner runs the enqueue op, which in turn takes an op that creates an example (could we use OpenCV inside that op to create the example, so that a frame is returned as the example to be enqueued?).

Note that my intention is to have the enqueue and dequeue operations run concurrently on different threads.
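To make the "OpenCV inside an op" idea concrete, here is a minimal, untested sketch that wraps an OpenCV read in a graph op via tf.py_func (TF 1.x); the video path and the end-of-video handling are assumptions of mine, not working code from my project:

import cv2
import numpy as np
import tensorflow as tf

cap = cv2.VideoCapture("RECOLA-Video-recordings\\P16.mp4")

def read_frame():
    # Grab the next frame with OpenCV and hand it to the graph as float32.
    grabbed, frame = cap.read()
    if not grabbed:
        # Raising here makes the corresponding sess.run call fail,
        # which the training loop can catch to stop cleanly.
        raise RuntimeError("end of video")
    return frame.astype(np.float32)

# tf.py_func wraps the plain Python function as an op in the graph,
# so sess.run(frame_op) yields one decoded frame per call.
frame_op = tf.py_func(read_frame, [], tf.float32)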

Below is my actual code:

import cv2
import numpy as np
import tensorflow as tf

def deform_images(images):
    with tf.name_scope('current_image'):
        # Resize, convert to grayscale, and scale pixel values into [0, 1].
        frames_resized = tf.image.resize_images(images, [90, 160])
        frame_gray = tf.image.rgb_to_grayscale(frames_resized, name='rgb_to_gray')
        frame_normalized = tf.divide(frame_gray, tf.constant(255.0), name='image_normalization')

        # tf.summary.image expects a 4-D batch, so add a batch dimension.
        tf.summary.image('image_summary', tf.expand_dims(frame_gray, 0), 1)
        return frame_normalized

def queue_input(video_path, coord):
    global frame_index
    with tf.device("/cpu:0"):
        # source: http://stackoverflow.com/questions/33650974/opencv-python-read-specific-frame-using-videocapture
        cap = cv2.VideoCapture(video_path)
        cap.set(cv2.CAP_PROP_POS_FRAMES, frame_index)

        # Read the next frame from the file. Note that the frame is returned
        # as a Mat, so we need to convert it into a tensor.
        (grabbed, frame) = cap.read()

        # If the `grabbed` boolean is `False`, we have reached the end of
        # the video file.
        if not grabbed:
            coord.request_stop()
            return

        img = np.asarray(frame)
        frame_index += 1
        to_return = deform_images(img)
        print(to_return.get_shape())
        return to_return

frame_index = 0  # global frame position consumed by queue_input

with tf.Session() as sess:
    merged = tf.summary.merge_all()
    train_writer = tf.summary.FileWriter('C:\\Users\\temp_user\\Documents\\tensorboard_logs', sess.graph)
    sess.run(tf.global_variables_initializer())

    coord = tf.train.Coordinator()
    queue = tf.FIFOQueue(capacity=128, dtypes=tf.float32, shapes=[90, 160, 1])
    enqueue_op = queue.enqueue(queue_input("RECOLA-Video-recordings\\P16.mp4", coord))

    # Create a queue runner that will run 1 thread to enqueue examples.
    # In general, the QueueRunner class is used to create a number of
    # threads cooperating to enqueue tensors in the same queue.
    qr = tf.train.QueueRunner(queue, [enqueue_op] * 1)

    # Create a coordinator, launch the queue runner threads.
    # Note that the coordinator class helps multiple threads stop together and report exceptions to programs that wait
    # for them to stop.
    enqueue_threads = qr.create_threads(sess, coord=coord, start=True)

    # Run the training loop, controlling termination with the coordinator.
    # Build the dequeue op once, outside the loop, and run it each step.
    dequeue_op = queue.dequeue(name='dequeue')
    for step in range(8000):
        print(step)
        if coord.should_stop():
            break

        frames_tensor = sess.run(dequeue_op)

    coord.join(enqueue_threads)

train_writer.close()
cv2.destroyAllWindows()

Thanks!

2 Answers:

Answer 0 (score: 2):

tf.train.QueueRunner is not the most suitable mechanism here. In the code you have, the following line

enqueue_op = queue.enqueue(queue_input("RECOLA-Video-recordings\\P16.mp4", coord))

creates an enqueue_op that enqueues a constant tensor: the first frame returned by the queue_input function at graph-construction time. Even though the QueueRunner runs it repeatedly, it always enqueues that same tensor, the one supplied to the op when it was created. Instead, you can simply make the enqueue op take a tf.placeholder as its argument and run it repeatedly in a loop, feeding it the frames you grab via OpenCV. Here is some code to guide you:

import threading

frame_ph = tf.placeholder(tf.float32, shape=[90, 160, 1])
enqueue_op = queue.enqueue(frame_ph)

def enqueue():
  while not coord.should_stop():
    # For feed_dict to work, queue_input should return the raw NumPy
    # frame here rather than a graph tensor.
    frame = queue_input(video_path, coord)
    if frame is None:  # end of the video file
      break
    sess.run(enqueue_op, feed_dict={frame_ph: frame})

threads = [threading.Thread(target=enqueue)]

for t in threads:
  t.start()

# Your dequeue and training code goes here
coord.join(threads)
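For completeness, a minimal sketch of what the dequeue side could look like; `train_op` and `num_steps` are placeholder names of mine, not part of the answer:

# Build the dequeue op once, outside the loop; a model built on top of
# frames_tensor will pull one frame from the queue per sess.run call.
frames_tensor = queue.dequeue(name='dequeue')

for step in range(num_steps):
  if coord.should_stop():
    break
  # Replace this with sess.run(train_op) once a model consumes frames_tensor.
  frame = sess.run(frames_tensor)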

Answer 1 (score: 2):

pip install video2tfrecord

Explanation

In a research project I faced the challenge of generating tfrecords from raw video material in Python. Having come across many requests very similar to this one, I have made part of my code available under

https://github.com/ferreirafabio/video2tfrecords
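A minimal usage sketch, with the caveat that the entry point and keyword arguments below are taken from my reading of the project's README and may differ between versions (check the repository for the exact signature):

from video2tfrecord import convert_videos_to_tfrecord

# Convert all .mp4 files under source_path into tfrecord files written to
# destination_path; the argument names here are assumptions, not verified.
convert_videos_to_tfrecord(source_path='./videos',
                           destination_path='./tfrecords',
                           n_videos_in_record=10,
                           n_frames_per_video=5,
                           file_suffix='*.mp4')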