Sending live video frames over the network in Python OpenCV

Asked: 2015-06-22 19:15:31

Tags: python opencv numpy

I'm trying to send live video frames captured with my camera to a server and process them there. I'm using OpenCV for the image processing and Python as the language. Here is my code

client_cv.py

import cv2
import numpy as np
import socket
import sys
import pickle
cap=cv2.VideoCapture(0)
clientsocket=socket.socket(socket.AF_INET,socket.SOCK_STREAM)
clientsocket.connect(('localhost',8089))
while True:
    ret,frame=cap.read()
    print sys.getsizeof(frame)
    print frame
    clientsocket.send(pickle.dumps(frame))

server_cv.py

import socket
import sys
import cv2
import pickle
import numpy as np
HOST=''
PORT=8089

s=socket.socket(socket.AF_INET,socket.SOCK_STREAM)
print 'Socket created'

s.bind((HOST,PORT))
print 'Socket bind complete'
s.listen(10)
print 'Socket now listening'

conn,addr=s.accept()

while True:
    data=conn.recv(80)
    print sys.getsizeof(data)
    frame=pickle.loads(data)
    print frame
    cv2.imshow('frame',frame)

This code gives me an end-of-file error, which makes sense, because data keeps arriving at the server continuously and pickle doesn't know when a message is complete. My searching on the internet led me to pickle, but it hasn't worked so far.

Note: I set conn.recv to 80 because that is the number I got when I did print sys.getsizeof(frame).

7 Answers:

Answer 0: (score: 15)

A few things:

  • Use sendall instead of send, since you are not guaranteed that everything will be sent in one go
  • pickle is fine for data serialization, but you have to define a protocol for the messages exchanged between client and server, so that you know in advance how much data to read for unpickling (see below)
  • For recv you will get better performance if you receive in big chunks, so replace 80 with 4096 or even more
  • Beware of sys.getsizeof: it returns the size of the object in memory, which is not the same as the size (length) of the bytes sent over the network; for a Python string the two values are not the same at all (see the quick check after this list)
  • Be mindful of the size of the frame you are sending. The code below supports frames up to 65535 bytes; if you have larger frames, change "H" to "L".
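
To make the sys.getsizeof point concrete, here is a quick check you can run yourself (my own addition, not part of the original answer; the exact numbers depend on your camera):

import pickle
import sys

import cv2

cap = cv2.VideoCapture(0)
ret, frame = cap.read()

print(sys.getsizeof(frame))      # in-memory size Python reports for the object
print(len(pickle.dumps(frame)))  # actual number of bytes that go over the socket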

An example protocol:

client_cv.py

import cv2
import numpy as np
import socket
import sys
import pickle
import struct ### new code
cap=cv2.VideoCapture(0)
clientsocket=socket.socket(socket.AF_INET,socket.SOCK_STREAM)
clientsocket.connect(('localhost',8089))
while True:
    ret,frame=cap.read()
    data = pickle.dumps(frame) ### new code
    clientsocket.sendall(struct.pack("H", len(data))+data) ### new code

server_cv.py

import socket
import sys
import cv2
import pickle
import numpy as np
import struct ## new

HOST=''
PORT=8089

s=socket.socket(socket.AF_INET,socket.SOCK_STREAM)
print 'Socket created'

s.bind((HOST,PORT))
print 'Socket bind complete'
s.listen(10)
print 'Socket now listening'

conn,addr=s.accept()

### new
data = ""
payload_size = struct.calcsize("H") 
while True:
    while len(data) < payload_size:
        data += conn.recv(4096)
    packed_msg_size = data[:payload_size]
    data = data[payload_size:]
    msg_size = struct.unpack("H", packed_msg_size)[0]
    while len(data) < msg_size:
        data += conn.recv(4096)
    frame_data = data[:msg_size]
    data = data[msg_size:]
    ###

    frame=pickle.loads(frame_data)
    print frame
    cv2.imshow('frame',frame)

You can optimize all of this a lot (less copying, using the buffer interface, etc.), but at least you get the idea.
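
As one concrete illustration of the "less copying" hint, here is a minimal sketch of a receive helper based on recv_into and a preallocated buffer (my own addition, assuming the same length-prefixed "H" protocol as above):

import pickle
import struct

def recv_exact(conn, n):
    # Read exactly n bytes into a preallocated buffer, avoiding the
    # repeated concatenation used in the simple version above.
    buf = bytearray(n)
    view = memoryview(buf)
    while n > 0:
        nbytes = conn.recv_into(view, n)
        if nbytes == 0:
            raise EOFError('socket closed before the full message arrived')
        view = view[nbytes:]
        n -= nbytes
    return bytes(buf)

def recv_frame(conn):
    # Read the length prefix first, then the pickled frame itself.
    header = recv_exact(conn, struct.calcsize("H"))
    msg_size = struct.unpack("H", header)[0]
    return pickle.loads(recv_exact(conn, msg_size))

The server loop then reduces to calling frame = recv_frame(conn) followed by cv2.imshow.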

Answer 1: (score: 5)

It took me months of searching the internet to come up with this. I have packaged it neatly into classes, with unit tests and documentation, as SmoothStream; it was the only simple, working version of streaming I could find anywhere.

I used this code and wrapped mine around it.

Viewer.py

import cv2
import zmq
import base64
import numpy as np

context = zmq.Context()
footage_socket = context.socket(zmq.SUB)
footage_socket.bind('tcp://*:5555')
footage_socket.setsockopt_string(zmq.SUBSCRIBE, np.unicode(''))

while True:
    try:
        frame = footage_socket.recv_string()        # base64-encoded JPEG as text
        img = base64.b64decode(frame)               # back to raw JPEG bytes
        npimg = np.fromstring(img, dtype=np.uint8)  # JPEG bytes as a 1-D uint8 array
        source = cv2.imdecode(npimg, 1)             # decode the JPEG into a BGR image
        cv2.imshow("Stream", source)
        cv2.waitKey(1)

    except KeyboardInterrupt:
        cv2.destroyAllWindows()
        break

Streamer.py

import base64
import cv2
import zmq

context = zmq.Context()
footage_socket = context.socket(zmq.PUB)
footage_socket.connect('tcp://localhost:5555')

camera = cv2.VideoCapture(0)  # init the camera

while True:
    try:
        grabbed, frame = camera.read()  # grab the current frame
        frame = cv2.resize(frame, (640, 480))  # resize the frame
        encoded, buffer = cv2.imencode('.jpg', frame)
        jpg_as_text = base64.b64encode(buffer)
        footage_socket.send(jpg_as_text)

    except KeyboardInterrupt:
        camera.release()
        cv2.destroyAllWindows()
        break

Answer 2: (score: 2)

I changed the code from @mguijarr to work with Python 3. Changes made to the code:

  • data is now a byte literal instead of a string literal
  • Changed "H" to "L" to send larger frame sizes. Based on the documentation, we can now send frames of up to 2^32 - 1 bytes instead of just 2^16 - 1. (A small caveat on the "L" format follows right after this list.)
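
One caveat worth adding (my own note, not from the original answer): the native "L" format has a platform-dependent size, so if the client and server run on different platforms the length prefixes may not line up. A quick way to check, and a fixed-size alternative:

import struct

print(struct.calcsize("H"))   # 2 bytes everywhere -> frames up to 65535 bytes
print(struct.calcsize("L"))   # native unsigned long: 4 or 8 bytes depending on the platform
print(struct.calcsize("!L"))  # network byte order: always 4 bytes -> frames up to 2**32 - 1

If you switch to "!L" (or "=L"), use the same format string on both the client and the server.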

Server.py

import pickle
import socket
import struct

import cv2

HOST = ''
PORT = 8089

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print('Socket created')

s.bind((HOST, PORT))
print('Socket bind complete')
s.listen(10)
print('Socket now listening')

conn, addr = s.accept()

data = b'' ### CHANGED
payload_size = struct.calcsize("L") ### CHANGED

while True:

    # Retrieve message size
    while len(data) < payload_size:
        data += conn.recv(4096)

    packed_msg_size = data[:payload_size]
    data = data[payload_size:]
    msg_size = struct.unpack("L", packed_msg_size)[0] ### CHANGED

    # Retrieve all data based on message size
    while len(data) < msg_size:
        data += conn.recv(4096)

    frame_data = data[:msg_size]
    data = data[msg_size:]

    # Extract frame
    frame = pickle.loads(frame_data)

    # Display
    cv2.imshow('frame', frame)
    cv2.waitKey(1)

Client.py

import cv2
import numpy as np
import socket
import sys
import pickle
import struct

cap=cv2.VideoCapture(0)
clientsocket=socket.socket(socket.AF_INET,socket.SOCK_STREAM)
clientsocket.connect(('localhost',8089))

while True:

    # Capture frame
    ret, frame = cap.read()

    # Serialize frame
    data = pickle.dumps(frame)

    # Send message length first
    message_size = struct.pack("L", len(data)) ### CHANGED

    # Then data
    clientsocket.sendall(message_size + data)

Answer 3: (score: 1)

I got this working on macOS.

I used the code from @mguijarr and changed struct.pack from "H" to "L".

Server.py:
==========
import socket
import sys
import cv2
import pickle
import numpy as np
import struct ## new

HOST=''
PORT=8089

s=socket.socket(socket.AF_INET,socket.SOCK_STREAM)
print 'Socket created'

s.bind((HOST,PORT))
print 'Socket bind complete'
s.listen(10)
print 'Socket now listening'

conn,addr=s.accept()

### new
data = ""
payload_size = struct.calcsize("L") 
while True:
    while len(data) < payload_size:
        data += conn.recv(4096)
    packed_msg_size = data[:payload_size]
    data = data[payload_size:]
    msg_size = struct.unpack("L", packed_msg_size)[0]
    while len(data) < msg_size:
        data += conn.recv(4096)
    frame_data = data[:msg_size]
    data = data[msg_size:]
    ###

    frame=pickle.loads(frame_data)
    print frame
    cv2.imshow('frame',frame)

    key = cv2.waitKey(10)
    if (key == 27) or (key == 113):
        break

cv2.destroyAllWindows()



Client.py:
==========
import cv2
import numpy as np
import socket
import sys
import pickle
import struct ### new code
cap=cv2.VideoCapture(0)
clientsocket=socket.socket(socket.AF_INET,socket.SOCK_STREAM)
clientsocket.connect(('localhost',8089))
while True:
    ret,frame=cap.read()
    data = pickle.dumps(frame) ### new code
    clientsocket.sendall(struct.pack("L", len(data))+data) ### new code

Answer 4: (score: 1)

I recently published the imagiz package for fast, non-blocking live video streaming over the network using OpenCV and ZMQ.

https://pypi.org/project/imagiz/

Client:

import imagiz
import cv2


client=imagiz.Client("cc1",server_ip="localhost")
vid=cv2.VideoCapture(0)
encode_param = [int(cv2.IMWRITE_JPEG_QUALITY), 90]

while True:
    r,frame=vid.read()
    if r:
        r, image = cv2.imencode('.jpg', frame, encode_param)
        client.send(image)
    else:
        break

Server:

import imagiz
import cv2

server=imagiz.Server()
while True:
    message=server.recive()
    frame=cv2.imdecode(message.image,1)
    cv2.imshow("",frame)
    cv2.waitKey(1)

Answer 5: (score: 1)

As @Rohan Sawant said, I used the zmq library, but without base64 encoding. Here is the new code

Streamer.py

import base64
import cv2
import zmq
import numpy as np
import time

context = zmq.Context()
footage_socket = context.socket(zmq.PUB)
footage_socket.connect('tcp://192.168.1.3:5555')

camera = cv2.VideoCapture(0)  # init the camera

while True:
    try:
        grabbed, frame = camera.read()  # grab the current frame
        frame = cv2.resize(frame, (640, 480))  # resize the frame
        encoded, buffer = cv2.imencode('.jpg', frame)
        footage_socket.send(buffer)

    except KeyboardInterrupt:
        camera.release()
        cv2.destroyAllWindows()
        break

Viewer.py

import cv2
import zmq
import base64
import numpy as np

context = zmq.Context()
footage_socket = context.socket(zmq.SUB)
footage_socket.bind('tcp://*:5555')
footage_socket.setsockopt_string(zmq.SUBSCRIBE, np.unicode(''))

while True:
    try:
        frame = footage_socket.recv()
        npimg = np.frombuffer(frame, dtype=np.uint8)
        #npimg = npimg.reshape(480,640,3)
        source = cv2.imdecode(npimg, 1)
        cv2.imshow("Stream", source)
        cv2.waitKey(1)

    except KeyboardInterrupt:
        cv2.destroyAllWindows()
        break

Answer 6: (score: 0)

I'm a bit late, but my powerful, threaded VidGear video-processing Python library now provides the NetGear API, which is designed exclusively for transferring video frames in real time between interconnected systems over the network. Here is an example:

A. Server side: (bare-minimum example)

Open your favorite terminal and execute the following Python code:

Note: You can end streaming on both the server and the client at any time by pressing [Ctrl+C] on your keyboard on the server side!

# import libraries
from vidgear.gears import VideoGear
from vidgear.gears import NetGear

stream = VideoGear(source='test.mp4').start() #Open any video stream
server = NetGear() #Define netgear server with default settings

# infinite loop until [Ctrl+C] is pressed
while True:
    try: 
        frame = stream.read()
        # read frames

        # check if frame is None
        if frame is None:
            #if True break the infinite loop
            break

        # do something with frame here

        # send frame to server
        server.send(frame)

    except KeyboardInterrupt:
        #break the infinite loop
        break

# safely close video stream
stream.stop()
# safely close server
server.close()

B. Client side: (bare-minimum example)

Then open another terminal on the same system, execute the following Python code, and watch the output:

# import libraries
from vidgear.gears import NetGear
import cv2

#define netgear client with `receive_mode = True` and default settings
client = NetGear(receive_mode = True)

# infinite loop
while True:
    # receive frames from network
    frame = client.recv()

    # check if frame is None
    if frame is None:
        #if True break the infinite loop
        break

    # do something with frame here

    # Show output window
    cv2.imshow("Output Frame", frame)

    key = cv2.waitKey(1) & 0xFF
    # check for 'q' key-press
    if key == ord("q"):
        #if 'q' key-pressed break out
        break

# close output window
cv2.destroyAllWindows()
# safely close client
client.close()

NetGear currently supports two ZeroMQ messaging patterns, zmq.PAIR and zmq.REQ/zmq.REP, and the supported transport protocols are: 'tcp', 'udp', 'pgm', 'inproc', 'ipc'.
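
For illustration only, here is a sketch of what a non-default configuration might look like, based on my reading of the NetGear wiki linked below; the address and port are hypothetical, and you should verify the parameter names against the docs for your vidgear version:

# Hedged sketch: explicit address/port, transport protocol and messaging pattern.
# Parameter names follow the NetGear wiki; double-check them for your version.
from vidgear.gears import NetGear

server = NetGear(
    address='192.168.1.10',  # hypothetical IP of the receiving machine
    port='5454',             # hypothetical port
    protocol='tcp',          # one of the transports listed above
    pattern=1,               # 0 = zmq.PAIR, 1 = zmq.REQ/zmq.REP
)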

More advanced usage can be found here: https://github.com/abhiTronix/vidgear/wiki/NetGear