I'm trying to do some rapid capture and processing on a Raspberry Pi using the picamera and opencv Python modules. Currently I'm using the recipe from http://picamera.readthedocs.org/en/latest/recipes2.html#rapid-capture-and-processing to capture each image to a BytesIO stream. I then added code to the ImageProcessor class to convert each stream to an opencv object and do some analysis "on the fly".
My current code looks like this:
import io
import time
import threading
import picamera
import cv2
import picamera.array
import numpy as np

# Create a pool of image processors
done = False
lock = threading.Lock()
pool = []

class ImageProcessor(threading.Thread):
    def __init__(self):
        super(ImageProcessor, self).__init__()
        self.stream = io.BytesIO()
        self.event = threading.Event()
        self.terminated = False
        self.start()

    def run(self):
        # This method runs in a separate thread
        global done
        while not self.terminated:
            # Wait for an image to be written to the stream
            if self.event.wait(1):
                try:
                    self.stream.seek(0)
                    # Read the image and do some processing on it
                    # Construct a numpy array from the stream
                    # (np.frombuffer replaces the deprecated np.fromstring)
                    data = np.frombuffer(self.stream.getvalue(), dtype=np.uint8)
                    # "Decode" the image from the array, preserving colour
                    image = cv2.imdecode(data, 1)
                    # Here goes more opencv code doing image processing
                    # Set done to True if you want the script to terminate
                    # at some point
                    #done = True
                finally:
                    # Reset the stream and event
                    self.stream.seek(0)
                    self.stream.truncate()
                    self.event.clear()
                    # Return ourselves to the pool
                    with lock:
                        pool.append(self)

def streams():
    while not done:
        with lock:
            if pool:
                processor = pool.pop()
            else:
                processor = None
        if processor:
            yield processor.stream
            processor.event.set()
        else:
            # When the pool is starved, wait a while for it to refill
            print("Waiting")
            time.sleep(0.1)

with picamera.PiCamera() as camera:
    pool = [ImageProcessor() for i in range(4)]
    camera.resolution = (640, 480)
    camera.framerate = 30
    camera.start_preview()
    time.sleep(2)
    camera.capture_sequence(streams(), use_video_port=True)

# Shut down the processors in an orderly fashion
while pool:
    with lock:
        processor = pool.pop()
    processor.terminated = True
    processor.join()
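Incidentally, the pool/generator handoff in this recipe can be exercised without any camera hardware, which helps when reasoning about the threading. Below is a minimal, self-contained sketch of the same pattern: capture_sequence() is replaced by a loop writing dummy frames, and the opencv analysis is stubbed out (the names Processor, workers and results are illustrative, not part of picamera):

```python
import io
import threading
import time

lock = threading.Lock()
pool = []
results = []

class Processor(threading.Thread):
    """Same pool pattern as the recipe, with the processing stubbed out."""
    def __init__(self):
        super(Processor, self).__init__()
        self.stream = io.BytesIO()
        self.event = threading.Event()
        self.terminated = False
        self.start()

    def run(self):
        while not self.terminated:
            if self.event.wait(0.1):
                try:
                    # Stand-in for the opencv analysis: record the frame
                    results.append(self.stream.getvalue())
                finally:
                    # Reset the stream and event, return to the pool
                    self.stream.seek(0)
                    self.stream.truncate()
                    self.event.clear()
                    with lock:
                        pool.append(self)

workers = [Processor() for _ in range(2)]
with lock:
    pool.extend(workers)

# Stand-in for camera.capture_sequence(): write five dummy frames
for i in range(5):
    processor = None
    while processor is None:
        with lock:
            if pool:
                processor = pool.pop()
        if processor is None:
            time.sleep(0.01)  # pool starved, wait for it to refill
    processor.stream.write(('frame-%d' % i).encode())
    processor.event.set()

# Wait until every worker has returned to the pool, then shut down
while True:
    with lock:
        if len(pool) == len(workers):
            break
    time.sleep(0.01)
for w in workers:
    w.terminated = True
    w.join()

print(sorted(results))  # all five frames were processed exactly once
```

The key property the sketch demonstrates is that a processor is only ever handed out while idle: it re-enters the pool inside the finally block, after its stream has been reset, so the producer never writes into a stream that is still being analysed.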
The problem is that this involves JPEG encoding and decoding of every image, which is lossy and time-consuming. The suggested alternative is to capture to a picamera.array: http://picamera.readthedocs.org/en/latest/recipes1.html#capturing-to-an-opencv-object. For a single image the code is:
import time
import picamera
import picamera.array
import cv2

with picamera.PiCamera() as camera:
    camera.start_preview()
    time.sleep(2)
    with picamera.array.PiRGBArray(camera) as stream:
        camera.capture(stream, format='bgr')
        # At this point the image is available as stream.array
        image = stream.array
This works well, but I don't know how to combine the two pieces of code so that the ImageProcessor class uses a picamera.array instead of a BytesIO stream. The need for a "with" statement to create the stream for picamera.array confuses me (I'm new to Python... ;). Thanks for any pointers. Angel
Answer 0 (score: 2)
I found you can refer to the source of the picamera module:
def raw_resolution(resolution):
    """
    Round a (width, height) tuple up to the nearest multiple of 32 horizontally
    and 16 vertically (as this is what the Pi's camera module does for
    unencoded output).
    """
    width, height = resolution
    fwidth = (width + 31) // 32 * 32
    fheight = (height + 15) // 16 * 16
    return fwidth, fheight

def bytes_to_rgb(data, resolution):
    """
    Converts a bytes object containing RGB/BGR data to a `numpy`_ array.
    """
    width, height = resolution
    fwidth, fheight = raw_resolution(resolution)
    if len(data) != (fwidth * fheight * 3):
        raise PiCameraValueError(
            'Incorrect buffer length for resolution %dx%d' % (width, height))
    # Crop to the actual resolution
    return np.frombuffer(data, dtype=np.uint8).\
        reshape((fheight, fwidth, 3))[:height, :width, :]
You can do the conversion by calling

image = bytes_to_rgb(self.stream.getvalue(), resolution)

where resolution is (width, height). The reason camera is passed to PiRGBArray is so that it can refer to the camera's resolution.
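As a camera-free check of the cropping logic, here is a minimal sketch that re-implements the two helpers from the picamera source (with the standard ValueError substituted for picamera's PiCameraValueError, so it runs without picamera installed) and feeds bytes_to_rgb a synthetic buffer:

```python
import numpy as np

def raw_resolution(resolution):
    # Round up to multiples of 32 (width) and 16 (height),
    # matching the padding the Pi's camera applies to unencoded output
    width, height = resolution
    return (width + 31) // 32 * 32, (height + 15) // 16 * 16

def bytes_to_rgb(data, resolution):
    # Reshape the padded buffer, then crop to the requested resolution
    width, height = resolution
    fwidth, fheight = raw_resolution(resolution)
    if len(data) != fwidth * fheight * 3:
        raise ValueError('Incorrect buffer length for resolution %dx%d'
                         % (width, height))
    return np.frombuffer(data, dtype=np.uint8).reshape(
        (fheight, fwidth, 3))[:height, :width, :]

# 100x100 is padded to 128x112 by the firmware
assert raw_resolution((100, 100)) == (128, 112)
# 640x480 is already aligned, so it is unchanged
assert raw_resolution((640, 480)) == (640, 480)

# Simulate a raw BGR capture at 100x100: the buffer holds 128*112*3 bytes,
# and the helper crops it back down to 100x100
buf = bytes(128 * 112 * 3)
image = bytes_to_rgb(buf, (100, 100))
print(image.shape)  # (100, 100, 3)
```

So in the ImageProcessor's run() method, the cv2.imdecode() call can be replaced by this reshape-and-crop, provided capture_sequence() is given format='bgr' so that raw (unencoded) frames are written to each stream.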