Python + GStreamer: rescaling the video to the window

Date: 2013-11-04 09:44:34

Tags: python pygame gstreamer python-gstreamer

I'm having some trouble rescaling GStreamer's video output to the dimensions of the window the video is displayed in (while preserving the video's aspect ratio). The problem is that I first need to preroll the video so I can determine its dimensions by retrieving the negotiated caps, and then calculate the size at which it should be displayed to fit the window. However, once I have prerolled the video and obtained the size from the caps, I can no longer change the video's dimensions: setting new caps still results in the video being output at its original size. What can I do to fix this?

To be complete: in my current implementation I cannot render to an OpenGL texture, which would make this easy to solve, since you could simply render the output to a texture and scale it to fit the screen. I have to draw the output to a pygame surface, which therefore needs to have the correct dimensions. pygame does offer functionality to scale its surfaces, but I suspect such an implementation (which is what I have now) is slower than retrieving frames at the correct size directly from GStreamer (am I right about this?).
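
As an aside, one way to get frames at the display size directly from GStreamer would be to do the scaling inside the pipeline: wrap a videoscale element, a capsfilter and the appsink in a gst.Bin and hand that bin to playbin2 as its video-sink. The sketch below is a rough, untested illustration assuming gst-python 0.10 (as used in the code that follows); the helper name _make_scaling_sink is hypothetical, and whether renegotiation after preroll succeeds may still depend on the elements involved.

import gst

def _make_scaling_sink(self, width, height):
    """Hypothetical helper: build a sink bin whose appsink receives
    frames already scaled to (width, height) by videoscale."""
    sinkbin = gst.Bin('videosink-bin')

    scaler = gst.element_factory_make('videoscale', 'scaler')
    capsfilter = gst.element_factory_make('capsfilter', 'filter')
    # The RGB masks from _VIDEO_CAPS below would be appended here as well
    capsfilter.set_property('caps', gst.Caps(
        'video/x-raw-rgb, width=%d, height=%d' % (width, height)))
    appsink = gst.element_factory_make('appsink', 'videosink')
    appsink.set_property('emit-signals', True)

    sinkbin.add(scaler, capsfilter, appsink)
    gst.element_link_many(scaler, capsfilter, appsink)

    # Expose the scaler's sink pad so playbin2 can link to the bin
    sinkbin.add_pad(gst.GhostPad('sink', scaler.get_pad('sink')))
    return sinkbin, appsink

# usage (hypothetical):
#   sinkbin, self._videosink = self._make_scaling_sink(width, height)
#   self.player.set_property('video-sink', sinkbin)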

This is the code with which I load and display the video (I left out the main loop stuff):

def calcScaledRes(self, screen_res, image_res):
    """Calculate image size so it fits the screen
    Args
        screen_res (tuple)   -  Display window size/Resolution
        image_res (tuple)    -  Image width and height

    Returns
        tuple - width and height of image scaled to window/screen
    """
    rs = screen_res[0]/float(screen_res[1])
    ri = image_res[0]/float(image_res[1])

    if rs > ri:
        return (int(image_res[0] * screen_res[1]/image_res[1]), screen_res[1])
    else:
        return (screen_res[0], int(image_res[1]*screen_res[0]/image_res[0]))
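
A quick worked example of the math above (hypothetical numbers, not from the original post): for a 1920x1080 window and a 640x480 clip, rs = 1920/1080 ≈ 1.78 and ri = 640/480 ≈ 1.33, so rs > ri and the clip is scaled to the full window height:

# self.calcScaledRes((1920, 1080), (640, 480)) returns (1440, 1080),
# i.e. the video fills the window's height and is pillarboxed horizontally.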

def load(self, vfile):
    """
    Loads a videofile and makes it ready for playback

    Arguments:
    vfile -- the uri to the file to be played
    """
    # Info required for color space conversion (YUV->RGB)
    # masks are necessary for correct display on unix systems
    _VIDEO_CAPS = ','.join([
        'video/x-raw-rgb',
        'red_mask=(int)0xff0000',
        'green_mask=(int)0x00ff00',
        'blue_mask=(int)0x0000ff'
    ])

    self.caps = gst.Caps(_VIDEO_CAPS)

    # Create videoplayer and load URI
    self.player = gst.element_factory_make("playbin2", "player")        
    self.player.set_property("uri", vfile)

    # Enable deinterlacing of video if necessary
    self.player.props.flags |= (1 << 9)     

    # Reroute frame output to Python
    self._videosink = gst.element_factory_make('appsink', 'videosink')      
    self._videosink.set_property('caps', self.caps)
    self._videosink.set_property('async', True)
    self._videosink.set_property('drop', True)
    self._videosink.set_property('emit-signals', True)
    self._videosink.connect('new-buffer', self.__handle_videoframe)     
    self.player.set_property('video-sink', self._videosink)

    # Preroll movie to get dimension data
    self.player.set_state(gst.STATE_PAUSED)

    # If movie is loaded correctly, info about the clip should be available
    if self.player.get_state(gst.CLOCK_TIME_NONE)[0] == gst.STATE_CHANGE_SUCCESS:
        pads = self._videosink.pads()           
        for pad in pads:            
            caps = pad.get_negotiated_caps()[0]
            self.vidsize = caps['width'], caps['height']
    else:
        raise exceptions.runtime_error("Failed to retrieve video size")

    # Calculate size of video when fit to screen
    self.scaledVideoSize = self.calcScaledRes((self.screen_width, self.screen_height), self.vidsize)
    # Calculate the top-left corner of the video (to center it on the screen)
    self.vidPos = ((self.screen_width - self.scaledVideoSize[0]) / 2, (self.screen_height - self.scaledVideoSize[1]) / 2)

    # Add width and height info to video caps and reload caps
    _VIDEO_CAPS += ", width={0}, height={1}".format(self.scaledVideoSize[0], self.scaledVideoSize[1])
    self.caps = gst.Caps(_VIDEO_CAPS)
    self._videosink.set_property('caps', self.caps)  #??? not working, video still displayed in original size

def __handle_videoframe(self, appsink):
    """
    Callback method for handling a video frame

    Arguments:
    appsink -- the sink to which gst supplies the frame (not used)
    """     
    buffer = self._videosink.emit('pull-buffer')        

    img = pygame.image.frombuffer(buffer.data, self.vidsize, "RGB")

    # Upscale image to new surface if presented fullscreen
    # Create the surface if it doesn't exist yet and keep rendering to this surface
    # for future frames (should be faster)

    if not hasattr(self,"destSurf"):                
        self.destSurf = pygame.transform.scale(img, self.destsize)
    else:
        pygame.transform.scale(img, self.destsize, self.destSurf)
    self.screen.blit(self.destSurf, self.vidPos)

    # Swap the buffers
    pygame.display.flip()

    # Increase frame counter
    self.frameNo += 1

1 Answer:

Answer (score: 0)

I'm pretty sure your problem is (it has been a long time since you asked) that you never connected to the bus to listen for the messages it emits.
That code usually looks something like this:

    def some_function(self):
        # code defining Gplayer (the pipeline)
        #
        # here
        Gplayer.set_property('flags', self.GST_VIDEO|self.GST_AUDIO|self.GST_TEXT|self.GST_SOFT_VOLUME|self.GST_DEINTERLACE)
    # more code
    #
    # finally
        # Create the bus to listen for messages
        bus = Gplayer.get_bus()
        bus.add_signal_watch()
        bus.enable_sync_message_emission()
        bus.connect('message', self.OnBusMessage)
        bus.connect('sync-message::element', self.OnSyncMessage)
    # Listen for gstreamer bus messages
    def OnBusMessage(self, bus, message):
        t = message.type
        if t == Gst.MessageType.ERROR:
            pass
        elif t == Gst.MessageType.EOS:
            print ("End of Audio")
        return True

    def OnSyncMessage(self, bus, msg):
        if msg.get_structure() is None:
            return True
        message_name = msg.get_structure().get_name()
        if message_name == 'prepare-window-handle':
            imagesink = msg.src
            imagesink.set_property('force-aspect-ratio', True)
            imagesink.set_window_handle(self.panel1.GetHandle())

The key bit for your problem is setting up the callback for the sync message and, in that callback, setting the force-aspect-ratio property to True. This property ensures that the video always fits the window it is being displayed in.

Note that the self.panel1.GetHandle() call names the panel used to display the video.
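
Since the question draws into a pygame window rather than a wx panel, the window handle would presumably come from pygame instead. A sketch of the same handler under that assumption: it presumes a window-based sink implementing the overlay interface (e.g. xvimagesink) is used in place of the appsink (set_window_handle has no effect on an appsink), and an X11 or Windows pygame display:

    def OnSyncMessage(self, bus, msg):
        if msg.get_structure() is None:
            return True
        if msg.get_structure().get_name() == 'prepare-window-handle':
            imagesink = msg.src
            imagesink.set_property('force-aspect-ratio', True)
            # pygame exposes the native id of its display window here
            imagesink.set_window_handle(pygame.display.get_wm_info()['window'])
        return True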

I appreciate that you will have moved on by now, but hopefully this helps somebody else trawling through the archives.