How to specify the video frame size of a video with OpenCV

Date: 2015-07-23 19:16:51

Tags: python opencv

This program detects significant changes in the pixels of a video and prints out the frame number and the millisecond position at which each change occurs. I use the millisecond value in another program to save an image from the video at that instant, and then analyze that image to determine the height of the person in the video. The problem is that the video resolution is very small: when played in a media player the video spans the full width of the screen but is only about 3 inches tall. When I save an image based on the millisecond value, the image I get is black, because it comes from the area above and below the small video frame. I want to change the video size so that the video fills the whole screen, and so that the images I save based on the millisecond values no longer come out black. This is an important part of my project. Please help me. Thank you very much.

I am getting this error: TypeError: Expected CvCapture for argument 'capture'

This is how I am changing the video height and width:

width = cv.SetCaptureProperty(capfile, cv.CV_CAP_PROP_FRAME_WIDTH, 1280)

height = cv.SetCaptureProperty(capfile, cv.CV_CAP_PROP_FRAME_HEIGHT, 720)



import sys
import cv2
import cv
import numpy as np


# Advanced Scene Detection Parameters
INTENSITY_THRESHOLD = 16  # Pixel intensity threshold (0-255), default 16
MINIMUM_PERCENT     = 95  # The minimum amount of pixels allowed to be below threshold.
BLOCK_SIZE          = 32    # Number of rows to sum per iteration.


def main():

    capfile = 'camera20.h264'
    cap = cv2.VideoCapture()
    cap.open(capfile)

    if not cap.isOpened():
        print "Fatal error - could not open video %s." % capfile
        return
    else:
        print "Parsing video %s..." % capfile

    # Do stuff with cap here.

    width  = cv.SetCaptureProperty(capfile, cv.CV_CAP_PROP_FRAME_WIDTH, 1280)
    height = cv.SetCaptureProperty(capfile, cv.CV_CAP_PROP_FRAME_HEIGHT, 720)
    print "Video Resolution: %d x %d" % (width, height)

    # Allow the threshold to be passed as an optional, second argument to the script.
    threshold = 50

    print "Detecting scenes with threshold = %d" % threshold
    print "Min. pixels under threshold = %d %%" % MINIMUM_PERCENT
    print "Block/row size = %d" % BLOCK_SIZE
    print ""

    min_percent = MINIMUM_PERCENT / 100.0
    num_rows    = BLOCK_SIZE
    last_amt    = 0     # Number of pixel values above threshold in last frame.
    start_time  = cv2.getTickCount()  # Used for statistics after loop.

    while True:
        # Get next frame from video.
        (rv, im) = cap.read()
        if not rv:   # im is a valid image if and only if rv is true
            break

        # Compute # of pixel values and minimum amount to trigger fade.
        num_pixel_vals = float(im.shape[0] * im.shape[1] * im.shape[2])
        min_pixels     = int(num_pixel_vals * (1.0 - min_percent))

        # Loop through frame block-by-block, updating current sum.
        frame_amt = 0
        curr_row  = 0
        while curr_row < im.shape[0]:
            # Add # of pixel values in current block above the threshold.
            frame_amt += np.sum(
                im[curr_row : curr_row + num_rows, :, :] > threshold)
            if frame_amt > min_pixels:  # We can avoid checking the rest of the
                break                   # frame since we crossed the boundary.
            curr_row += num_rows

        # Detect fade in from black.
        if frame_amt >= min_pixels and last_amt < min_pixels:
            print "Detected fade in at %dms (frame %d)." % (
                cap.get(cv2.cv.CV_CAP_PROP_POS_MSEC),
                cap.get(cv2.cv.CV_CAP_PROP_POS_FRAMES))

        # Detect fade out to black.
        elif frame_amt < min_pixels and last_amt >= min_pixels:
            print "Detected fade out at %dms (frame %d)." % (
                cap.get(cv2.cv.CV_CAP_PROP_POS_MSEC),
                cap.get(cv2.cv.CV_CAP_PROP_POS_FRAMES))

        last_amt = frame_amt  # Store current amount to compare in next iteration.

    # Get # of frames in video based on the position of the last frame we read.
    frame_count = cap.get(cv2.cv.CV_CAP_PROP_POS_FRAMES)
    # Compute runtime and average framerate.
    total_runtime = float(cv2.getTickCount() - start_time) / cv2.getTickFrequency()
    avg_framerate = float(frame_count) / total_runtime

    print "Read %d frames from video in %4.2f seconds (avg. %4.1f FPS)." % (
        frame_count, total_runtime, avg_framerate)

    cap.release()


if __name__ == "__main__":
    main()

1 Answer:

Answer 0 (score: 0):

You seem to be mixing the old and the new syntax. Try this:

cv2.VideoCapture.set(propId, value) → retval

instead of:

cv.SetCaptureProperty(capture, property_id, value) → retval
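Applied to the script in the question, a minimal sketch of the new-style call might look like the code below. It assumes the OpenCV 2.4 Python bindings used in the question (hence cv2.cv.CV_CAP_PROP_* constants and Python 2 print statements). Note that for file-based captures some backends may ignore a request to change the frame width/height, in which case scaling each decoded frame with cv2.resize is a common fallback; the 1280x720 target size here is just the value from the question.

import cv2

capfile = 'camera20.h264'
cap = cv2.VideoCapture(capfile)

# Request 1280x720 frames via the new-style API (cap.set on the capture object,
# not cv.SetCaptureProperty on the filename string).
cap.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 720)

# Read the properties back to check whether the request took effect.
width  = cap.get(cv2.cv.CV_CAP_PROP_FRAME_WIDTH)
height = cap.get(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT)
print "Video Resolution: %d x %d" % (width, height)

rv, im = cap.read()
if rv and (im.shape[1], im.shape[0]) != (1280, 720):
    # Fallback: the backend kept the original size, so scale the frame manually.
    im = cv2.resize(im, (1280, 720))

cap.release()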