I'm trying to build a neural network that processes multiple cameras at the same time (or at least nearly so). As a first step, I'm trying to stream 2 cameras simultaneously with OpenCV and the Python threading module.
I came up with this code:
import cv2
import threading
import queue

def multistream(stream, q):
    # read a single frame from this camera and hand it to the shared queue
    ret, frame = stream.read()
    q.put(frame)

if __name__ == "__main__":
    camlink1 = "rtsp://......link1"
    camlink2 = "rtsp://......link2"
    stream1 = cv2.VideoCapture(camlink1)
    stream2 = cv2.VideoCapture(camlink2)
    print("stream is opened")
    while True:
        q = queue.Queue()
        cam1 = threading.Thread(target=multistream, args=(stream1, q))
        cam2 = threading.Thread(target=multistream, args=(stream2, q))
        cam1.start()
        cam2.start()
        cam1.join()
        cam2.join()
        while not q.empty():
            cv2.imshow("video", q.get())
The problem is that cv2.imshow displays an empty window instead of the frame, whereas if I add print(q.get()) to the code, it prints the Mat that makes up the frame, so the frames are correctly returned from the threaded function to the main thread. What is the right way to fix this?
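For reference, the print check I mention is just a minimal change to the display loop at the end of my code above (same q as above, nothing else modified):

        # instead of cv2.imshow(...), dump each frame pulled from the queue;
        # this prints the frame's numpy array (the Mat), which shows the
        # threads do hand the frames back to the main thread
        while not q.empty():
            print(q.get())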