I recently set up a Raspberry Pi camera and am streaming the frames over RTSP. While it may not be strictly necessary, here is the command I am using to broadcast the video:
raspivid -o - -t 0 -w 1280 -h 800 |cvlc -vvv stream:///dev/stdin --sout '#rtp{sdp=rtsp://:8554/output.h264}' :demux=h264
This plays the video back perfectly.
What I would now like to do is parse this stream with Python and read each frame individually. I would like to do some motion detection for surveillance purposes.
I am completely lost on where to start with this task. Can anyone point me to a good tutorial? If this is not achievable via Python, what tools/languages can I use to accomplish it?
Answer 0 (score: 10)
A somewhat hacky solution, but you can use the VLC Python bindings (you can install them with pip install python-vlc) and play the stream:
import time
import vlc

player = vlc.MediaPlayer('rtsp://:8554/output.h264')
player.play()
and then take a snapshot every second:
while True:
    time.sleep(1)
    player.video_take_snapshot(0, '.snapshot.tmp.png', 0, 0)
You can then use SimpleCV or something similar for the processing (just load the image file '.snapshot.tmp.png' into your processing library).
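For instance, here is a minimal sketch that extends the snapshot loop above with simple frame differencing in OpenCV; the pixel-count threshold (5000) is an arbitrary placeholder you would tune for your scene:

import time
import cv2
import vlc

player = vlc.MediaPlayer('rtsp://:8554/output.h264')
player.play()

prev = None
while True:
    time.sleep(1)
    player.video_take_snapshot(0, '.snapshot.tmp.png', 0, 0)
    # load the latest snapshot as a grayscale image
    frame = cv2.imread('.snapshot.tmp.png', cv2.IMREAD_GRAYSCALE)
    if frame is None:
        continue  # snapshot not written yet
    if prev is not None and prev.shape == frame.shape:
        # count pixels that changed noticeably between consecutive snapshots
        diff = cv2.absdiff(prev, frame)
        changed = cv2.countNonZero(cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1])
        if changed > 5000:  # arbitrary threshold, tune for your scene
            print("motion detected")
    prev = frame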
Answer 1 (score: 4)
Use OpenCV:
import cv2
video = cv2.VideoCapture("rtsp url")
Then you can capture frames. For more, read the OpenCV documentation: https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_gui/py_video_display/py_video_display.html
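Applied to the stream from the question, a minimal read loop might look like the sketch below (the Pi's IP address 192.168.1.2 is a placeholder; substitute your own):

import cv2

# connect to the RTSP stream published by cvlc on the Pi (placeholder IP address)
cap = cv2.VideoCapture("rtsp://192.168.1.2:8554/output.h264")

while True:
    ret, frame = cap.read()  # ret is False when no frame could be read
    if not ret:
        break
    # "frame" is a BGR NumPy array - run your per-frame processing here
    print(frame.shape)

cap.release()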
Answer 2 (score: 3)
Using the same method listed by 'depu' worked perfectly for me. I just replaced the 'video file' with the 'RTSP URL' of an actual camera. The example below works with an AXIS IP camera. (It did not work for a while in earlier OpenCV versions.) Works on OpenCV 3.4.1, Windows 10.
import cv2

cap = cv2.VideoCapture("rtsp://root:pass@192.168.0.91:554/axis-media/media.amp")

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow('frame', frame)
    if cv2.waitKey(20) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
Answer 3 (score: 2)
Depending on the stream type, you may want to take a look at this project for some ideas:
https://code.google.com/p/python-mjpeg-over-rtsp-client/
If you want to go mega-pro, you could use http://opencv.org/ (Python modules are available, I believe) to handle the motion detection.
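As a rough illustration of that idea, here is a sketch of motion detection with OpenCV's built-in MOG2 background subtractor, assuming the frames are read with cv2.VideoCapture (the RTSP URL and the pixel-count threshold are placeholders to adapt):

import cv2

cap = cv2.VideoCapture("rtsp://192.168.1.2:8554/output.h264")  # placeholder URL
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # foreground mask: moving pixels become white, the background stays black
    mask = subtractor.apply(frame)
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]  # drop shadow pixels
    if cv2.countNonZero(mask) > 5000:  # arbitrary threshold, tune for your scene
        print("motion detected")

cap.release()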
Answer 4 (score: 0)
You can read video frames with Python and OpenCV. Sample code is below; it works with Python and OpenCV 2.
import cv2
import os

# The code below captures the video frames and saves them to a folder (in the current working directory)
dirname = 'myfolder'
if not os.path.isdir(dirname):
    os.makedirs(dirname)

# video path
cap = cv2.VideoCapture("TestVideo.mp4")
count = 0

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    else:
        cv2.imshow('frame', frame)
        # The received "frame" is saved here; you can also manipulate "frame" as per your needs.
        name = "rec_frame" + str(count) + ".jpg"
        cv2.imwrite(os.path.join(dirname, name), frame)
        count += 1
    if cv2.waitKey(20) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
Answer 5 (score: 0)
Here is yet another option.
It's much more complicated than the other answers. :-O
But this way, with just a single connection to the camera, you can "fork" the same stream simultaneously to several multiprocesses, to the screen, recast it as multicast, write it to disk, etc.
... of course, only in the case you need something like that (otherwise you'd prefer the earlier answers).
Let's create two independent Python programs:
(1) a server program (RTSP connection, decoding): server.py
(2) a client program (reads frames from shared memory): client.py
The server must be started before the client, i.e.
python3 server.py
and then, in another terminal:
python3 client.py
Here is the code:
(1) server.py
import time
from valkka.core import *

# YUV => RGB interpolation to the small size is done every 1000 milliseconds and passed on to the shmem ringbuffer
image_interval = 1000

# define rgb image dimensions
width = 1920 // 4
height = 1080 // 4

# posix shared memory: identification tag and size of the ring buffer
shmem_name = "cam_example"
shmem_buffers = 10

shmem_filter = RGBShmemFrameFilter(shmem_name, shmem_buffers, width, height)
sws_filter = SwScaleFrameFilter("sws_filter", width, height, shmem_filter)
interval_filter = TimeIntervalFrameFilter("interval_filter", image_interval, sws_filter)

avthread = AVThread("avthread", interval_filter)
av_in_filter = avthread.getFrameFilter()

livethread = LiveThread("livethread")

ctx = LiveConnectionContext(LiveConnectionType_rtsp, "rtsp://user:password@192.168.x.x", 1, av_in_filter)

avthread.startCall()
livethread.startCall()

avthread.decodingOnCall()

livethread.registerStreamCall(ctx)
livethread.playStreamCall(ctx)

# All these threads are written in C++ and run in the background.
# Sleep for 20 seconds - or do something else while
# the C++ threads are running and streaming video
time.sleep(20)

# stop threads
livethread.stopCall()
avthread.stopCall()

print("bye")
(2) client.py
import cv2
from valkka.api2 import ShmemRGBClient

width = 1920 // 4
height = 1080 // 4

# This identifies the posix shared memory - must be the same as on the server side
shmem_name = "cam_example"
# Size of the shmem ringbuffer - must be the same as on the server side
shmem_buffers = 10

client = ShmemRGBClient(
    name=shmem_name,
    n_ringbuffer=shmem_buffers,
    width=width,
    height=height,
    mstimeout=1000,  # client times out if nothing has been received in 1000 milliseconds
    verbose=False
)

while True:
    index, isize = client.pull()
    if index is None:
        print("timeout")
    else:
        data = client.shmem_list[index][0:isize]
        img = data.reshape((height, width, 3))
        img = cv2.GaussianBlur(img, (21, 21), 0)
        cv2.imshow("valkka_opencv_demo", img)
        cv2.waitKey(1)
If you're interested, check out more at https://elsampsa.github.io/valkka-examples/