The main purpose of my system (real-time HD video) is that, once a face is detected, we can easily detect the eyes. The system needs to know how many times per minute each person closes their eyes, and it should also track their eyes. Any ideas on how to implement this?
Maybe something like this video, but on a real live stream: https://www.youtube.com/watch?v=JL3Gbb9aY0c
Answer 0 (score: 2)
If you are detecting the eyes, then you have some metric value that describes the quality of the detection (for example, a correlation coefficient if you use template matching, or the strength of the Hough transform peak if you search for circles).
You can run some experiments and plot this value to find out how far it drops when a blink occurs.
A high-low-high transition (dipping below some level), such as 10-10-11-9-11-10-5-2-6-11-10, would therefore most likely indicate a blink.
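A minimal Python sketch of that dip idea, assuming you already have one detection-quality score per frame; the count_blinks helper and the threshold of 6 are made up for illustration and would need tuning on real data:

def count_blinks(scores, low_threshold=6):
    """Count high-low-high dips: runs of scores below low_threshold."""
    blinks = 0
    in_dip = False
    for s in scores:
        if s < low_threshold and not in_dip:
            in_dip = True   # quality dropped: the eye is probably closed
        elif s >= low_threshold and in_dip:
            blinks += 1     # quality recovered: one complete blink
            in_dip = False
    return blinks

print(count_blinks([10, 10, 11, 9, 11, 10, 5, 2, 6, 11, 10]))  # -> 1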
Answer 1 (score: 0)
This Python code should help you get started with blink detection using OpenCV and Python. To run it, you need to download the facial-landmarks file and install dlib with the following command:
pip install https://pypi.python.org/packages/da/06/bd3e241c4eb0a662914b3b4875fc52dd176a9db0d4a2c915ac2ad8800e9e/dlib-19.7.0-cp36-cp36m-win_amd64.whl
Now you are all set! Put the landmarks (.dat) file in the same folder as the code below and run the script from your terminal:
import time
from math import hypot

import cv2  # missing in the original snippet
import dlib
import numpy as np

cap = cv2.VideoCapture(0)  # camera port 0
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
font = cv2.FONT_HERSHEY_SIMPLEX


def midpoint(p1, p2):
    return int((p1.x + p2.x) / 2), int((p1.y + p2.y) / 2)


def get_blinking_ratio(eye_points, facial_landmarks):
    # Corners of the eye (first and fourth of the six eye landmarks)
    left_point = (facial_landmarks.part(eye_points[0]).x, facial_landmarks.part(eye_points[0]).y)
    right_point = (facial_landmarks.part(eye_points[3]).x, facial_landmarks.part(eye_points[3]).y)
    cv2.line(frame, left_point, right_point, (0, 255, 0), 1)

    # Midpoints of the upper and lower eyelid landmarks
    center_top = midpoint(facial_landmarks.part(eye_points[1]), facial_landmarks.part(eye_points[2]))
    center_bottom = midpoint(facial_landmarks.part(eye_points[5]), facial_landmarks.part(eye_points[4]))
    cv2.line(frame, center_top, center_bottom, (0, 255, 0), 1)

    # Lengths of the horizontal and vertical eye lines
    hor_line_length = hypot(left_point[0] - right_point[0], left_point[1] - right_point[1])
    ver_line_length = hypot(center_top[0] - center_bottom[0], center_top[1] - center_bottom[1])

    # Return the horizontal/vertical ratio together with the vertical length itself
    return hor_line_length / ver_line_length, ver_line_length


blink = 1
TOTAL = 0
thres = 5.1

while True:
    _, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # grayscale is lighter to process
    faces = detector(gray)
    for face in faces:
        # x, y = face.left(), face.top()
        # x1, y1 = face.right(), face.bottom()
        # cv2.rectangle(frame, (x, y), (x1, y1), (0, 255, 0), 3)  # green box, thickness 3
        landmarks = predictor(gray, face)
        left_eye_ratio, _ = get_blinking_ratio([36, 37, 38, 39, 40, 41], landmarks)
        right_eye_ratio, myVerti = get_blinking_ratio([42, 43, 44, 45, 46, 47], landmarks)
        blinking_ratio = (left_eye_ratio + right_eye_ratio) / 2
        personal_threshold = 0.67 * myVerti  # 0.67 is just the best constant I found experimentally
        cv2.putText(frame, "left ratio: {:.2f}".format(left_eye_ratio), (300, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
        cv2.putText(frame, "right ratio: {:.2f}".format(right_eye_ratio), (500, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
        if (left_eye_ratio > personal_threshold or right_eye_ratio > personal_threshold) and blink == 1:
            TOTAL += 1
            time.sleep(0.3)  # roughly an average person's blink duration
        if left_eye_ratio > personal_threshold or right_eye_ratio > personal_threshold:
            blink = 0
        else:
            blink = 1
    cv2.putText(frame, "Blinks: {}".format(TOTAL), (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(5)
    if key == 27:  # Esc key
        break

cap.release()
cv2.destroyAllWindows()
It can detect and count blinks for people with any eye size. Unfortunately, I have had trouble detecting blinks for people wearing glasses.
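Since the question asks for blinks per minute, one possible addition (not part of the answer above) is to record a timestamp whenever a blink is counted and report how many fall inside the last 60 seconds:

import time
from collections import deque

blink_times = deque()

def register_blink():
    blink_times.append(time.time())

def blinks_per_minute():
    now = time.time()
    while blink_times and now - blink_times[0] > 60:
        blink_times.popleft()  # drop blinks older than one minute
    return len(blink_times)

In the loop above you would call register_blink() right where TOTAL is incremented and overlay blinks_per_minute() with cv2.putText, the same way the total is drawn.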
Answer 2 (score: 0)
# This fragment belongs inside a frame-reading loop where `ear` (the eye aspect
# ratio), EYE_AR_THRESH, EYE_AR_CONSEC_FRAMES, COUNTER, TOTAL, img and cap
# have already been defined.
if ear > EYE_AR_THRESH:
    COUNTER += 1
else:
    if COUNTER >= EYE_AR_CONSEC_FRAMES:
        TOTAL = TOTAL + 1
        print(TOTAL)
    COUNTER = 0  # reset the consecutive-frame counter, otherwise TOTAL keeps growing
    # print(COUNTER)

cv2.putText(img, "Blinks: {}".format(TOTAL), (10, 30),
            cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
cv2.putText(img, "EAR: {:.2f}".format(ear), (300, 30),
            cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
cv2.imshow('detect eyes blink', img)
if cv2.waitKey(1) == ord('q'):
    break

cap.release()
cv2.destroyAllWindows()
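The fragment above assumes an `ear` value has already been computed. One common definition is the eye aspect ratio (EAR) computed from the six dlib eye landmarks; the helper below is a possible sketch, not part of the original answer. Note that with this definition EAR drops when the eye closes, so you would choose the threshold comparison accordingly.

import numpy as np

def eye_aspect_ratio(eye):
    # eye: (6, 2) array of the six dlib landmark points of one eye
    A = np.linalg.norm(eye[1] - eye[5])  # first vertical distance
    B = np.linalg.norm(eye[2] - eye[4])  # second vertical distance
    C = np.linalg.norm(eye[0] - eye[3])  # horizontal distance between the corners
    return (A + B) / (2.0 * C)           # EAR = (A + B) / (2 * C)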