I currently have two scripts: one captures footage and publishes the numpy array values (servant.py), and the other (master.py) then processes those values with OpenCV for a later face-recognition implementation. The problem is that it is now very slow, because the packets sent over the Internet arrive at the master script with a long delay. Is there a better way to do this? Right now only one frame per second is being sent, and I need the scripts to be able to handle 24 frames per second.
Here are the two scripts:
master.py
import paho.mqtt.client as mqtt
import numpy as np
from PIL import Image
MQTT_SERVER = "iot.eclipse.org"
MQTT_PATH = "test_channel"
def on_connect(client, userdata, flags, rc):
    print("connected with result code " + str(rc))
    client.subscribe(MQTT_PATH)

def on_message(client, userdata, msg):
    # The payload is the raw bytes of one 480x640 BGR frame
    dtype = "uint8"
    dshape = (480, 640, 3)
    data = msg.payload
    # np.fromstring is deprecated; np.frombuffer reads the bytes directly
    img_array = np.frombuffer(data, dtype=dtype).reshape(dshape)
    img = Image.fromarray(img_array)
    img.save("img.png")
    img.show()
client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(MQTT_SERVER, 1883, 60)
client.loop_forever()
servant.py
import paho.mqtt.client as mqtt
import time
import cv2
MQTT_SERVER = "iot.eclipse.org"
MQTT_PATH = "test_channel"
mqttc = mqtt.Client()
mqttc.connect(MQTT_SERVER, 1883, 60)
cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        continue
    # Send the raw frame bytes (~900 KB per uncompressed 640x480 BGR frame)
    mqttc.publish(MQTT_PATH, frame.tobytes())
    mqttc.loop()
    time.sleep(1)
Answer 0 (score: 0)
I think your best option is to stream from the camera with something like cvlc or gstreamer instead, and then simply connect to it from OpenCV (cap = cv2.VideoCapture("http://some_url:some_port")). You should then always get reasonable throughput, depending on bandwidth, codec, and so on.
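For instance, the receiving side on the processing machine could look like the minimal sketch below. It assumes the camera machine is already serving an HTTP/MJPEG stream (started with cvlc, gstreamer, or similar); the URL and port are placeholders:

import cv2

# Open the network stream instead of a local device index; the URL is a
# placeholder for wherever cvlc/gstreamer serves the camera feed.
cap = cv2.VideoCapture("http://192.168.1.10:8080/stream.mjpg")

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # frame is an ordinary numpy array here, ready for face detection
    cv2.imshow("stream", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()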
Answer 1 (score: 0)
Use WebcamVideoStream from imutils, compress each image to JPEG, and publish it as base64:
import base64
import cv2
import paho.mqtt.client as mqtt
from imutils.video import WebcamVideoStream

MQTT_SERVER = "iot.eclipse.org"
MQTT_PATH = "test_channel"

mqttc = mqtt.Client()
mqttc.connect(MQTT_SERVER, 1883, 60)

# Threaded capture: frames are grabbed in a background thread
cap = WebcamVideoStream(src=0)
cap.stream.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.stream.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
cap.start()

while True:
    frame = cap.read()
    if frame is None:
        print("Continue")
        continue
    # JPEG-compress the frame (much smaller than the raw array), then base64-encode it
    retval, buffer = cv2.imencode('.jpg', frame)
    jpg_as_text = base64.b64encode(buffer).decode('utf-8')
    # Publish over MQTT, reusing the broker and topic from the question
    mqttc.publish(MQTT_PATH, jpg_as_text)
    mqttc.loop()
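On the master side the payload then has to be base64-decoded and the JPEG decompressed back into a numpy array. A minimal sketch of such a receiver, reusing the broker and topic from the question (the handler body is where the face-recognition step would go):

import base64
import cv2
import numpy as np
import paho.mqtt.client as mqtt

MQTT_SERVER = "iot.eclipse.org"
MQTT_PATH = "test_channel"

def on_message(client, userdata, msg):
    # Undo the base64 encoding, then decompress the JPEG back into a BGR array
    jpg_bytes = base64.b64decode(msg.payload)
    jpg_as_np = np.frombuffer(jpg_bytes, dtype=np.uint8)
    frame = cv2.imdecode(jpg_as_np, cv2.IMREAD_COLOR)
    # frame is now an ordinary numpy array, ready for face recognition

client = mqtt.Client()
client.on_message = on_message
client.connect(MQTT_SERVER, 1883, 60)
client.subscribe(MQTT_PATH)
client.loop_forever()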