Running a process in parallel while capturing video with OpenCV

Time: 2019-06-20 16:09:12

Tags: python multithreading opencv

I am capturing video from a webcam with OpenCV. Every 5 seconds I process a single frame/image, which can take a few seconds. So far everything works. However, every time a frame is processed, the whole video freezes for a few seconds (until the processing finishes). I tried to get rid of this with threading. This is what I have done so far:

In the while loop that captures the video:

    while True:
        ret, image = cap.read()

        if next_time <= datetime.now():

            content_type = 'image/jpeg'
            headers = {'content-type': content_type}
            _, img_encoded = cv2.imencode('.jpg', image)

            loop = asyncio.get_event_loop()
            future = asyncio.ensure_future(self.async_faces(img_encoded, headers))
            loop.run_until_complete(future)

            next_time += period
            ...

        cv2.imshow('img', image)
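As I understand it, `loop.run_until_complete` blocks the calling thread until the future resolves, so the capture loop still ends up waiting just as it would for a plain synchronous call. A minimal standalone sketch of that blocking behaviour (not my actual code, just an illustration):

```python
import asyncio
import time

async def slow_task():
    # Stands in for the slow face-detection request.
    await asyncio.sleep(0.2)
    return "done"

loop = asyncio.new_event_loop()
start = time.monotonic()
result = loop.run_until_complete(slow_task())  # returns only after slow_task finishes
elapsed = time.monotonic() - start
loop.close()
```

Here `elapsed` is at least the full duration of the task, which is exactly the freeze I see in the display loop.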

And here are the methods:

    async def async_faces(self, img, headers):
        with ThreadPoolExecutor(max_workers=10) as executor:

            loop = asyncio.get_event_loop()

            tasks = [
                loop.run_in_executor(
                    executor,
                    self.face_detection,
                    *(img, headers)  # Allows us to pass multiple arguments to `face_detection`
                )
            ]

            for response in await asyncio.gather(*tasks):
                pass

    def face_detection(self, img, headers):
        try:
            response = requests.post(self.url, data=img.tostring(), headers=headers)
            ...
        except Exception as e:
            ...

        ...

But unfortunately it did not work.
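The behaviour I am actually after could be sketched with a plain daemon thread draining a queue, so that `cap.read()` and `cv2.imshow` never wait on the network. This is only an illustration of the pattern, not my working code; `process` is a placeholder for the blocking `face_detection` call:

```python
import queue
import threading

# A bounded queue: at most one frame waits for the worker at a time.
frame_queue = queue.Queue(maxsize=1)
results = []

def worker(process):
    # Runs in a daemon thread; `process` stands in for face_detection.
    while True:
        item = frame_queue.get()
        if item is None:  # sentinel: shut the worker down
            break
        results.append(process(item))

def submit(frame):
    # If the worker is still busy, drop this frame instead of blocking.
    try:
        frame_queue.put_nowait(frame)
    except queue.Full:
        pass
```

The capture loop would call `submit(img_encoded)` every five seconds and carry on drawing frames immediately.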

编辑1

Below I have added what the whole thing is supposed to do.

The function originally looked like this:

    import requests
    import cv2
    from datetime import datetime, timedelta

    def face_recognition(self):

        # Start camera
        cap = cv2.VideoCapture(0)

        cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
        cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)

        emotional_states = []
        font = cv2.FONT_HERSHEY_SIMPLEX

        period = timedelta(seconds=self.time_period)
        next_time = datetime.now() + period

        cv2.namedWindow('img', cv2.WND_PROP_FULLSCREEN)
        cv2.setWindowProperty('img', cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)

        while True:
            ret, image = cap.read()

            if next_time <= datetime.now():

                # Prepare headers for http request
                content_type = 'image/jpeg'
                headers = {'content-type': content_type}
                _, img_encoded = cv2.imencode('.jpg', image)

                try:
                    # Send http request with image and receive response
                    response = requests.post(self.url, data=img_encoded.tostring(), headers=headers)
                    emotional_states = response.json().get("emotions")
                    face_locations = response.json().get("locations")
                except Exception as e:
                    emotional_states = []
                    face_locations = []
                    print(e)

                next_time += period

            for i in range(0, len(emotional_states)):
                emotion = emotional_states[i]
                face_location = face_locations[i]
                cv2.putText(image, emotion, (int(face_location[0]), int(face_location[1])),
                            font, 0.8, (0, 255, 0), 2, cv2.LINE_AA)

            cv2.imshow('img', image)
            k = cv2.waitKey(1) & 0xff
            if k == 27:
                cv2.destroyAllWindows()
                cap.release()
                break
            if k == ord('a'):
                cv2.resizeWindow('img', 700,700)

I film myself with the method above. The video is shown live on my screen. In addition, every 5 seconds a frame is sent to an API, where the image is processed in such a way that the emotion of the person in the image is returned. The emotion is then displayed on the screen next to me. The problem is that the live video freezes for a few seconds until the emotion comes back from the API.
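One way to keep overlaying the last known emotions while a new request is in flight would be to submit the request to an executor and only harvest the result once it is ready. This is just a sketch of the idea; `send` is a placeholder for the HTTP call, not my actual code:

```python
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=1)
pending = None  # the Future of the request currently in flight, if any
latest = {"emotions": [], "locations": []}

def poll(send, frame):
    """Submit `frame` if no request is in flight; harvest a finished result.

    Returns the most recent response without ever blocking the caller.
    """
    global pending, latest
    if pending is None:
        pending = executor.submit(send, frame)
    elif pending.done():
        latest = pending.result()
        pending = None
    return latest
```

The drawing loop would call `poll(...)` once per displayed frame and draw whatever it returns, so the overlay simply lags by one round trip instead of freezing the video.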

My operating system is Ubuntu.

编辑2

The API runs locally. I created a Flask app, and the following method receives the request:

    from flask import Flask, request, Response
    import numpy as np
    import cv2
    import json

    app = Flask(__name__)

    @app.route('/api', methods=['POST'])
    def facial_emotion_recognition():

        # Convert the raw bytes of the image data to uint8
        nparr = np.frombuffer(request.data, np.uint8)
        # Decode image
        img = cv2.imdecode(nparr, cv2.IMREAD_COLOR)

        # Analyse the image
        emotional_state, face_locations = emotionDetection.analyze_facial_emotions(img)

        json_dump = json.dumps({'emotions': emotional_state, 'locations': face_locations}, cls=NumpyEncoder)

        return Response(json_dump, mimetype='application/json')
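The request body arrives as raw JPEG bytes; on the server they are reinterpreted as a flat `uint8` array before `cv2.imdecode` rebuilds the image (`np.frombuffer` is the non-deprecated sibling of `np.fromstring` for binary input). The byte-level step on its own looks like this:

```python
import numpy as np

def bytes_to_uint8(data: bytes) -> np.ndarray:
    # Read-only 1-D uint8 view of the raw request body,
    # ready to be handed to cv2.imdecode.
    return np.frombuffer(data, dtype=np.uint8)
```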

0 Answers