Converting the YUVj420p pixel format to RGB888 with GStreamer

Date: 2014-07-04 12:44:36

Tags: opencv rgb gstreamer yuv

I am using GStreamer 1.2 to feed frames from my IP camera to an OpenCV program.

The stream is 640×368 in YUVj420p format, and I want to convert it to RGB888 so I can use it in my OpenCV program.

So, is there a way to do this conversion with GStreamer?

Or do I have to do it myself?

If so, please tell me the equations for this conversion.

2 answers:

Answer 0 (score: 3):

After some trials with GStreamer I decided to do the conversion myself, and it worked.

First we have to understand the YUVj420p pixel format:

(Image: memory layout of a Y'UV420 frame, with the Y', U and V planes color-coded.)

As shown in the image above, the Y', U and V components in Y'UV420 are encoded in separate sequential blocks. A Y' value is stored for every pixel, followed by a U value for each 2×2 square block of pixels, and finally a V value for each 2×2 block. The corresponding Y', U and V values are shown in the same color in the diagram above. Read line by line as a byte stream from a device, the Y' block is found at position 0, the U block at position x×y (6×4 = 24 in this example), and the V block at position x×y + (x×y)/4 (here, 6×4 + (6×4)/4 = 30). (copied)
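The plane offsets above can be computed directly; this small sketch (illustrative only, not part of the original answer) reproduces the worked 6×4 numbers and applies the same formulas to the 640×368 stream from the question:

```python
# Offsets of the Y, U and V planes in a packed YUV420p byte stream.
def plane_offsets(w, h):
    y_off = 0                       # Y plane starts at the beginning
    u_off = w * h                   # U plane follows the full-size Y plane
    v_off = w * h + (w * h) // 4    # V plane follows the quarter-size U plane
    frame = w * h * 3 // 2          # total bytes per frame: 1.5 bytes/pixel
    return y_off, u_off, v_off, frame

print(plane_offsets(6, 4))      # (0, 24, 30, 36)
print(plane_offsets(640, 368))  # (0, 235520, 294400, 353280)
```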

Here is the code that does it (in Python).

It also shows how to feed frames into OpenCV with GStreamer and how to do the conversion:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import GObject, Gst
import numpy as np
import cv2

GObject.threads_init()
Gst.init(None)

def YUV_stream2RGB_frame(data):

    w=640
    h=368
    size=w*h

    stream=np.frombuffer(data,np.uint8) #convert the data from a byte string to a numpy array (np.fromstring is deprecated)

    #Y bytes will start from 0 and end at size-1
    y=stream[0:size].reshape(h,w) # create the y channel, same size as the image

    #U bytes will start from size and end at size+size/4, as its size = framesize/4
    u=stream[size:size+(size//4)].reshape(h//2,w//2) # create the u channel, its size = framesize/4

    #up-sample the u channel to be the same size as the y channel and frame using pyrUp func in opencv2
    u_upsize=cv2.pyrUp(u)

    #do the same for v channel 
    v=stream[size+(size//4):].reshape(h//2,w//2)
    v_upsize=cv2.pyrUp(v)

    #create the 3-channel frame using cv2.merge -- watch the order:
    #OpenCV's YCrCb conversion expects (Y, Cr, Cb), i.e. (Y, V, U)
    ycrcb=cv2.merge((y,v_upsize,u_upsize))

    #convert to RGB (cv2.COLOR_YCrCb2RGB is the modern name of cv2.cv.CV_YCrCb2RGB)
    rgb=cv2.cvtColor(ycrcb,cv2.COLOR_YCrCb2RGB)

    #show frame
    cv2.imshow("show",rgb)
    cv2.waitKey(5)

def on_new_buffer(appsink):

   sample = appsink.emit('pull-sample')
   #get the buffer
   buf=sample.get_buffer()
   #extract data stream as string
   data=buf.extract_dup(0,buf.get_size())
   YUV_stream2RGB_frame(data)
   return Gst.FlowReturn.OK #the new-sample handler must return a Gst.FlowReturn

def Init():

   global pipeline

   CLI="rtspsrc name=src location=rtsp://192.168.1.20:554/live/ch01_0 latency=10 ! decodebin ! appsink name=sink"

   #the simplest way to create a pipeline
   pipeline=Gst.parse_launch(CLI)

   #get the sink by the name set in CLI
   appsink=pipeline.get_by_name("sink")

   #set some important appsink properties
   appsink.set_property("max-buffers",20) # prevent the app from consuming huge amounts of memory
   appsink.set_property('emit-signals',True) #tell the sink to emit signals
   appsink.set_property('sync',False) #no sync, to make decoding as fast as possible

   appsink.connect('new-sample', on_new_buffer) #connect the signal to the callback

def run():
    pipeline.set_state(Gst.State.PLAYING)
    GObject.MainLoop().run()


Init()
run()
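As a sanity check on the slicing arithmetic in `YUV_stream2RGB_frame`, the same plane splitting can be run on a blank synthetic buffer with nothing but numpy (the 640×368 size is taken from the question; the buffer contents are dummy zeros):

```python
import numpy as np

# Build a dummy 640x368 I420 frame and verify each plane's shape.
w, h = 640, 368
size = w * h
data = bytes(size * 3 // 2)  # one blank I420 frame, 1.5 bytes per pixel

stream = np.frombuffer(data, np.uint8)
y = stream[0:size].reshape(h, w)
u = stream[size:size + size // 4].reshape(h // 2, w // 2)
v = stream[size + size // 4:].reshape(h // 2, w // 2)

print(y.shape, u.shape, v.shape)  # (368, 640) (184, 320) (184, 320)
```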

Answer 1 (score: 1):

How are you getting the frames from the camera? And how do you inject them into your OpenCV application?

Assuming you get your frames outside of GStreamer, you should use a pipeline like:

appsrc caps="video/x-raw,format=I420,width=640,height=368" ! videoconvert ! capsfilter caps="video/x-raw,format=RGB" ! appsink

Then inject your data with appsrc and receive it back with appsink. If you get the data from the camera over HTTP or v4l2, you can replace appsrc with souphttpsrc or v4l2src.
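Assuming the GStreamer 1.x command-line tools are installed, the conversion stage of this pipeline can be tried out with videotestsrc standing in for appsrc before wiring it into an application (autovideosink is used here only for display; bare caps strings are shorthand for the capsfilter elements above):

```shell
gst-launch-1.0 videotestsrc num-buffers=100 \
  ! video/x-raw,format=I420,width=640,height=368 \
  ! videoconvert \
  ! video/x-raw,format=RGB \
  ! autovideosink
```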