How do I stream live video frames from the client to a Flask server and back to the client?

Asked: 2019-11-19 10:28:24

Tags: javascript python flask webrtc flask-socketio

I am trying to build a client-server architecture in which I capture live video from the user's webcam using getUserMedia(). Instead of displaying the video directly in a <video> tag, I want to send it to my Flask server, do some processing on the frames, and stream the result back to my web page.

I have already created the client-server connection using SocketIO. Here is the script from my index.html. Please excuse any mistakes or bad code.

<div id="container">
    <video autoplay="true" id="videoElement">

    </video>
</div>
<script type="text/javascript" charset="utf-8">

    var socket = io('http://127.0.0.1:5000');

    // checking for connection
    socket.on('connect', function(){
      console.log("Connected... ", socket.connected)
    });

    var video = document.querySelector("#videoElement");


    // asking permission to access the system camera of user, capturing live 
    // video on getting true.

    if (navigator.mediaDevices.getUserMedia) {
      navigator.mediaDevices.getUserMedia({ video: true })
        .then(function (stream) {

          // instead of showing it directly in <video>, I want to send these frame to server

          //video.srcObject = stream

          //this code might be wrong, but this is what I want to do.
          socket.emit('catch-frame', { image: true, buffer: getFrame() });
        })
        .catch(function (err) {
          console.log(err);
          console.log("Something went wrong!");
        });
    }

    // returns a frame encoded as a base64 data URL
    const getFrame = () => {
        const canvas = document.createElement('canvas');
        canvas.width = video.videoWidth;
        canvas.height = video.videoHeight;
        canvas.getContext('2d').drawImage(video, 0, 0);
        return canvas.toDataURL('image/png');
    }


    // receive the frame from the server after processed and now I want display them in either 
    // <video> or <img>
    socket.on('response_back', function(frame){

      // this code here is wrong, but again this is what something I want to do.
      video.srcObject = frame;
    });

</script>

And in my app.py -

from flask import Flask, render_template
from flask_socketio import SocketIO, emit

app = Flask(__name__)
socketio = SocketIO(app)

@app.route('/', methods=['POST', 'GET'])
def index():
    return render_template('index.html')

@socketio.on('catch-frame')
def catch_frame(data):

    ## getting the data frames

    ## do some processing 

    ## send it back to client
    emit('response_back', data)  ## ??


if __name__ == '__main__':
    socketio.run(app, host='127.0.0.1')

I have also thought about doing this with WebRTC, but I was only able to find peer-to-peer code.

So, can someone help me with this? Thanks in advance.

2 Answers:

Answer 0 (score: 5)

So what I wanted to do was take the live video stream captured by the client's webcam and process it on the backend.

My backend code is written in Python, and I am using SocketIO to send the frames from the frontend to the backend. You can take a look at this design to get a better picture of what is going on - image

  1. My server (app.py) runs on the backend, and the client loads index.html.
  2. A SocketIO connection is established, and the video stream captured by the webcam is sent to the server frame by frame.
  3. The frames are processed on the backend and sent back to the client.
  4. The processed frames from the server can be displayed in an img tag.
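Steps 2-4 boil down to a base64 round trip on the server. A minimal sketch using only the standard library (the `handle_frame` name and the pass-through processing step are placeholders for illustration; the working code below does the real processing with OpenCV):

```python
import base64

def handle_frame(data_url: str) -> str:
    """Decode a base64 data-URL frame, process it, and re-encode it."""
    # only the first comma separates the header from the payload
    header, payload = data_url.split(',', 1)
    raw = base64.b64decode(payload)   # raw encoded-image bytes
    processed = raw                   # placeholder for the real processing step
    return header + ',' + base64.b64encode(processed).decode('utf-8')
```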

Here is the working code -

app.py

import base64
import io

import cv2
import imutils
import numpy as np
from PIL import Image
from flask_socketio import emit

@socketio.on('image')
def image(data_image):
    # decode the base64 payload and convert it into a PIL image
    b = io.BytesIO(base64.b64decode(data_image))
    pimg = Image.open(b)

    ## converting RGB to BGR, as per opencv standards
    frame = cv2.cvtColor(np.array(pimg), cv2.COLOR_RGB2BGR)

    # Process the image frame
    frame = imutils.resize(frame, width=700)
    frame = cv2.flip(frame, 1)
    imgencode = cv2.imencode('.jpg', frame)[1]

    # base64 encode
    stringData = base64.b64encode(imgencode).decode('utf-8')
    b64_src = 'data:image/jpg;base64,'
    stringData = b64_src + stringData

    # emit the frame back
    emit('response_back', stringData)
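One detail worth noting: the cv2.cvtColor call above only reverses the channel order of each pixel. A pure-Python equivalent (the `rgb_to_bgr` helper is hypothetical, for illustration only):

```python
def rgb_to_bgr(pixels):
    """Reverse the channel order of every pixel: (R, G, B) -> (B, G, R)."""
    return [[tuple(reversed(px)) for px in row] for row in pixels]
```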

index.html

<div id="container">
    <canvas id="canvasOutput"></canvas>
    <video autoplay="true" id="videoElement"></video>
</div>

<div class = 'video'>
    <img id="image">
</div>

<script>
    var socket = io('http://localhost:5000');

    socket.on('connect', function(){
        console.log("Connected...!", socket.connected)
    });

    const video = document.querySelector("#videoElement");

    video.width = 500;
    video.height = 375;

    if (navigator.mediaDevices.getUserMedia) {
        navigator.mediaDevices.getUserMedia({ video: true })
        .then(function (stream) {
            video.srcObject = stream;
            video.play();
        })
        .catch(function (err) {
            console.log(err);
            console.log("Something went wrong!");
        });
    }

    let src = new cv.Mat(video.height, video.width, cv.CV_8UC4);
    let dst = new cv.Mat(video.height, video.width, cv.CV_8UC1);
    let cap = new cv.VideoCapture(video);

    const FPS = 22;

    setInterval(() => {
        cap.read(src);
        cv.imshow('canvasOutput', src); // draw the captured frame onto the canvas

        var type = "image/png";
        var data = document.getElementById("canvasOutput").toDataURL(type);
        data = data.replace('data:' + type + ';base64,', ''); // split off the junk at the beginning

        socket.emit('image', data);
    }, 1000 / FPS);


    socket.on('response_back', function(image){
        const image_id = document.getElementById('image');
        image_id.src = image;
    });

</script>

Also, note that websockets only run on secure origins.

Answer 1 (score: 0)

I had to tweak your solution slightly:

I commented out the three cv variables and the cap.read(src) statement, and replaced this line:

var data = document.getElementById("canvasOutput").toDataURL(type);

with:

        var video_element = document.getElementById("videoElement")
        var frame = capture(video_element, 1)
        var data = frame.toDataURL(type);

using the capture function from here: http://appcropolis.com/blog/web-technology/using-html5-canvas-to-capture-frames-from-a-video/

I'm not sure whether this is the right way to do it, but it worked for me.

As I said, I'm not very comfortable with javascript, so instead of manipulating the base64 string in javascript, I'd rather send the whole data URL from javascript and parse it on the python side:

# Important to only split once
headers, image = base64_image.split(',', 1) 
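To see why splitting only once matters, consider a data URL whose payload itself contains a comma. (Standard base64 payloads never contain commas, but splitting once is safe either way; the plain-text URL below is a contrived example.)

```python
url = 'data:text/plain,hello,world'
headers, payload = url.split(',', 1)  # maxsplit=1 keeps the payload intact
# headers -> 'data:text/plain', payload -> 'hello,world'
```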

My take on this is that you probably can't pull the image string directly out of a canvas that merely contains the video element; you need to create a new canvas and draw onto it a 2D image of the frame captured from the video element, which may sound a bit roundabout.