I'm trying to stream OpenCV frames to the browser. After some research I came across Miguel's tutorial: https://blog.miguelgrinberg.com/post/video-streaming-with-flask/page/10
Let me break down what I'm trying to achieve: on the home page I stream live OpenCV frames, while on another page I need to take a picture with the webcam.
The problem: with Miguel's approach to streaming to the browser, an endless thread is started, which in this case does not release the camera when I want to take the picture on the other page. When I switch back to the home page, I get this error:
VIDEOIO ERROR: V4L2: Pixel format of incoming image is unsupported by OpenCV
Unable to stop the stream: Device or resource busy
video stream started
OpenCV(3.4.1) Error: Assertion failed (scn == 3 || scn == 4) in cvtColor, file /home/eli/cv/opencv-3.4.1/modules/imgproc/src/color.cpp, line 11115
Debugging middleware caught exception in streamed response at a point where response headers were already sent.
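I believe the assertion comes from cvtColor receiving None: while the camera is held by the other page, vs.read() returns nothing, and OpenCV raises exactly this (scn == 3 || scn == 4) error on an empty frame. A minimal guard like this avoids the crash (though not the busy-device problem itself):

import cv2
from imutils.video import VideoStream

vs = VideoStream(src=1).start()  # src=1, same index as in my code below
frame = vs.read()
# read() returns None while the device is busy; cvtColor(None, ...)
# is what triggers the (scn == 3 || scn == 4) assertion
if frame is not None:
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
vs.stop()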
Here is my code:
detect_face_video.py
This is where I do the face recognition:
# import the necessary packages
from imutils.video import VideoStream
import face_recognition
import argparse
import imutils
import pickle
import time
import cv2
from flask import Flask, render_template, Response
import sys
import numpy
from app.cv_func import draw_box
import redis
import datetime
from app.base_camera import BaseCamera
import os
global red
red = redis.StrictRedis(host='localhost', port=6379, db=0, decode_responses=True)
class detect_face:
    def gen(self):
        i = 1
        while i < 10:
            # str(i) must be encoded before concatenating with bytes
            yield (b'--frame\r\n'
                   b'Content-Type: text/plain\r\n\r\n' + str(i).encode() + b'\r\n')
            i += 1

    def get_frame(self):
        dir_path = os.path.dirname(os.path.realpath(__file__))
        # load the known faces and embeddings
        print("[INFO] loading encodings...")
        data = pickle.loads(open("%s/encode.pickle" % dir_path, "rb").read())
        # initialize the video stream, then allow the camera sensor to warm up
        print("[INFO] starting video stream...")
        try:
            vs = VideoStream(src=1).start()
        except Exception as ex:
            # VideoStream has no release(); just report the failure
            print(ex)
        print("video stream started")
        # loop over frames from the video stream
        i = 1
        counter = 1
        while True:
            # grab the frame from the threaded video stream
            try:
                frame = vs.read()
            except Exception as ex:
                print("an error occurred here")
                print(ex)
                continue
            # convert the input frame from BGR to RGB then resize it to a
            # width of 450px (to speed up processing)
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            rgb = imutils.resize(rgb, width=450)
            r = frame.shape[1] / float(rgb.shape[1])
            # detect the (x, y)-coordinates of the bounding boxes
            # corresponding to each face in the input frame, then compute
            # the facial embeddings for each face
            boxes = face_recognition.face_locations(rgb, model="hog")
            encodings = face_recognition.face_encodings(rgb, boxes)
            names = []
            # loop over the facial embeddings
            for encoding in encodings:
                print(encoding)
                # attempt to match each face in the input image to our
                # known encodings
                matches = face_recognition.compare_faces(data["encodings"],
                                                         encoding)
                name = "Unknown"
                # check to see if we have found a match
                if True in matches:
                    # find the indexes of all matched faces then initialize
                    # a dictionary to count the total number of times each
                    # face was matched
                    matchedIdxs = [i for (i, b) in enumerate(matches) if b]
                    counts = {}
                    # loop over the matched indexes and maintain a count
                    # for each recognized face
                    for i in matchedIdxs:
                        name = data["names"][i]
                        counts[name] = counts.get(name, 0) + 1
                    # determine the recognized face with the largest number
                    # of votes (note: in the event of an unlikely tie Python
                    # will select the first entry in the dictionary)
                    name = max(counts, key=counts.get)
                # update the list of names
                names.append(name)
                red.set('currentName', name)
                key = 'StudentName%d' % counter
                if name != 'Unknown':
                    red.set(key, name)
                    red.set('counter', counter)
                    counter += 1
            # loop over the recognized faces
            for ((top, right, bottom, left), name) in zip(boxes, names):
                # rescale the face coordinates
                top = int(top * r)
                right = int(right * r)
                bottom = int(bottom * r)
                left = int(left * r)
                # draw the predicted face name on the image
                cv2.rectangle(frame, (left, top), (right, bottom),
                              (0, 255, 0), 2)
                y = top - 15 if top - 15 > 15 else top + 15
                cv2.putText(frame, name, (left, y), cv2.FONT_HERSHEY_SIMPLEX,
                            0.75, (0, 255, 0), 2)
            # encode the frame as JPEG and yield it as one MJPEG part
            imgencode = cv2.imencode('.jpg', frame)[1]
            stringData = imgencode.tobytes()
            yield (b'--frame\r\n'
                   b'Content-Type: image/jpeg\r\n\r\n' + stringData + b'\r\n')
            i += 1
        # never reached while the loop above runs forever
        vs.stop()
        cv2.destroyAllWindows()
And the routes file (I've only pasted the important parts): route.py
from flask import Flask, render_template, request, Response, jsonify, make_response
from app.detect_face_video import detect_face

detect = detect_face()

@app.route('/')
def index():
    return render_template('index.html')

def get_frame_():
    # note: these calls only create the generators; nothing runs
    # until something iterates over them
    detect.gen()
    detect.get_frame()

@app.route('/calc')
def calc():
    # this view serves the video stream to the webpage
    # detect.vs.stop()
    return Response(detect.get_frame(),
                    mimetype='multipart/x-mixed-replace; boundary=frame')
How can I stop (or at least pause) the streaming when I leave that page (the home page)?
Answer (score: 1):
If you're looking for a faster, more robust, and simpler way to stream frames to the browser, you can use WebGear from the VidGear Python library, a powerful ASGI video-streaming API built on Starlette, a lightweight ASGI async framework/toolkit.
As of now this API is only available in the testing branch, so install it with the following commands:
Requirement: works with Python 3.6+ only.
git clone https://github.com/abhiTronix/vidgear.git
cd vidgear
git checkout testing
sudo pip3 install .
sudo pip3 install uvicorn  # additional dependency
cd
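You can quickly verify that the testing-branch install worked with a short import check (just a sanity snippet of mine, not from the official docs):

# sanity check: this import only succeeds if the install worked
from vidgear.gears import WebGear
print("WebGear imported OK")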
Then you can use this complete Python example to run a video server at http://0.0.0.0:8000/, reachable from any browser on the network, with just a few lines of code:
# import libs
import uvicorn
from vidgear.gears import WebGear

# various performance tweaks
options = {"frame_size_reduction": 40, "frame_jpeg_quality": 80,
           "frame_jpeg_optimize": True, "frame_jpeg_progressive": False}

# initialize the WebGear app with a suitable video file (e.g. `foo.mp4`)
web = WebGear(source="foo.mp4", logging=True, **options)

# run this app on a Uvicorn server at address http://0.0.0.0:8000/
uvicorn.run(web(), host='0.0.0.0', port=8000)

# close app safely
web.shutdown()
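Since you're streaming a webcam rather than a video file, the same example should also work if you point WebGear at the camera index instead; this variant (untested, reusing the src=1 index from your code) would look like:

import uvicorn
from vidgear.gears import WebGear

# camera index instead of a file; 1 matches src=1 in the question
web = WebGear(source=1, logging=True)
uvicorn.run(web(), host='0.0.0.0', port=8000)
web.shutdown()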
If you still hit an error, feel free to raise an issue in its GitHub repository.