How to detect person movement in a ROI

Asked: 2019-05-17 19:02:04

Tags: opencv tensorflow python-3.6

I have a video and want to know when a person enters and stays in a specific region of it. I then want to record the video time (playback position) at which he enters and when he leaves, so I can use those times to clip the video.

I only have basic knowledge of OpenCV, and currently no experience with TensorFlow or Keras.

This is for video analysis. I have already tried things like BackgroundSubtractorMOG, using different resolutions, etc.

https://s18.directupload.net/images/190517/wym8r59b.png

https://s18.directupload.net/images/190517/pi52vgv7.png

import cv2

background = None
accumulated_weight = 0.5  # weight for the running average of the background


def calc_accum_avg(frame, accumulated_weight):
    """Update the running average of the background."""
    global background

    if background is None:
        background = frame.copy().astype("float")
        return None

    cv2.accumulateWeighted(frame, background, accumulated_weight)


def segment(frame, threshold=25):
    """Threshold the difference to the background and return the largest contour."""
    global background

    diff = cv2.absdiff(background.astype("uint8"), frame)

    _, thresholded = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)

    # OpenCV 4 returns (contours, hierarchy); OpenCV 3 returns three values.
    contours, hierarchy = cv2.findContours(thresholded.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

    if len(contours) == 0:
        return None

    move_segment = max(contours, key=cv2.contourArea)
    return (thresholded, move_segment)


def main():
    # Note: the ROI coordinates (roi_visualiser_*, roi_board_*) and the
    # count_moves() helper are not shown in the question and must be
    # defined elsewhere for this to run.
    video = cv2.VideoCapture("/home/felix/Schreibtisch/OpenCVPython/large_video.mp4")
    video.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
    video.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
    length = int(video.get(cv2.CAP_PROP_FRAME_COUNT))
    print(length)
    num_frames = 0

    fgbg = cv2.bgsegm.createBackgroundSubtractorMOG()

    while True:
        ret, frame = video.read()
        if frame is None:  # check before apply(), otherwise the last read crashes
            break
        fgmask = fgbg.apply(frame)
        frame_copy = fgmask.copy()
        #frame2_copy = frame.copy()
        roi_visualiser = frame[roi_visualiser_top:roi_visualiser_bottom, roi_visualiser_right:roi_visualiser_left]
        roi_board = frame[roi_board_top:roi_board_bottom, roi_board_right:roi_board_left]
        gray = cv2.cvtColor(roi_visualiser, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (9, 9), 0)
        #gray = cv2.cvtColor(roi_board, cv2.COLOR_BGR2GRAY)
        #gray = cv2.GaussianBlur(gray, (9, 9), 0)
        if num_frames < 2:
            calc_accum_avg(gray, accumulated_weight)
            #calc_accum_avg(gray2, accumulated_weight)
            if num_frames <= 1:
                cv2.imshow("Finger Count", frame_copy)
        else:
            hand = segment(gray)
            if hand is not None:
                thresholded, move_segment = hand
                cv2.drawContours(frame_copy, [move_segment + (roi_visualiser_right, roi_visualiser_top)], -1, (255, 0, 0), 1)
                #cv2.drawContours(frame_copy2, [move_segment + (roi_board_right, roi_board_top)], -1, (255, 0, 0), 1)
                fingers = count_moves(thresholded, move_segment)
                if fingers > 0:
                    print("ja")  # test works
                else:
                    print("Nein")
                cv2.putText(frame_copy, str(fingers), (70, 45), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)  # no need
                cv2.imshow("Thresholded", thresholded)  # no need
        cv2.rectangle(frame_copy, (roi_visualiser_left, roi_visualiser_top), (roi_visualiser_right, roi_visualiser_bottom), (255, 0, 0), 1)
        cv2.rectangle(frame_copy, (roi_board_left, roi_board_top), (roi_board_right, roi_board_bottom), (255, 0, 0), 1)
        num_frames += 1

        cv2.imshow("Finger Count", frame_copy)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # needed so the imshow windows actually update
            break

    video.release()
    cv2.destroyAllWindows()

I get no error messages and everything runs fine, but I don't get the correct result I need.



1 Answer:

Answer 0 (score: 0)

Have you tried BackgroundSubtractorMOG2? It can distinguish shadows, which helps prevent false positives.
To make processing more efficient, first create a sub-image of the area the person enters and leaves through, and apply the background subtraction to that sub-image. Also, if the frames are noisy, applying a blur before background subtraction can improve the results.

Check the resulting mask for a reasonably large white object. If one is detected, use video.get(CV_CAP_PROP_POS_FRAMES) to store the frame number in an array, and stop recording frame numbers until the mask turns black again.
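The frame-number logging described above could look like this minimal sketch; the `log_presence` name and the `area_threshold` value are assumptions, not part of the answer:

```python
import numpy as np

def log_presence(masks, area_threshold=0.05):
    """Given an iterable of binary foreground masks (0/255), return a list of
    (enter_frame, leave_frame) pairs where the white area exceeds the threshold."""
    events = []
    inside = False
    enter_frame = None
    for frame_no, mask in enumerate(masks):
        white_ratio = np.count_nonzero(mask) / mask.size
        if not inside and white_ratio >= area_threshold:
            inside = True          # person entered the ROI
            enter_frame = frame_no
        elif inside and white_ratio < area_threshold:
            inside = False         # mask went black again: person left
            events.append((enter_frame, frame_no))
    if inside:                     # still present at the end of the video
        events.append((enter_frame, frame_no))
    return events

# Example with synthetic masks: frames 2-4 contain a large white object.
empty = np.zeros((10, 10), dtype=np.uint8)
full = np.full((10, 10), 255, dtype=np.uint8)
print(log_presence([empty, empty, full, full, full, empty]))  # [(2, 5)]
```

With a real video you would feed in the masks from the background subtractor frame by frame; the indices then correspond to the frame numbers from video.get(cv2.CAP_PROP_POS_FRAMES), which can be converted to timestamps via the video's FPS for clipping.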