Rendering QCRenderer output to a DeckLink card

Time: 2015-01-30 09:48:34

Tags: objective-c opengl quartz-graphics

I'm writing an application in Xcode on a Mac Mini (late 2012).

It's an application in which I load some Quartz Composer files (using the QCRenderer class). These files are rendered into video memory and read back with glReadPixels to get the raw pixel data. That pixel data is then pushed to the DeckLink framework (I'm using the Blackmagic DeckLink SDK) for playout on a DeckLink Quad. Everything works fine: it can even render 3 outputs in HD (1080i50) without dropping frames. But after a while (say 5 minutes), frames start to drop even when I render only 1 output.

So I see two possible causes. First: when a frame has finished playing, the SDK fires a completion callback, and I sometimes receive bmdOutputFrameDisplayedLate there, which means the frame was not played at the time it was scheduled. When that happens, I push the next frame one frame further into the future.

Second: I use a fixed frame buffer size (3 frames are rendered before playback starts). So maybe after a while the rendering falls behind the scheduling, and that causes the dropped/late frames. Maybe I'm not doing the OpenGL rendering process the way it should be done?
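To make that second suspicion concrete: with a fixed preroll depth, the schedule only survives as long as rendering keeps up. A back-of-the-envelope sketch (standalone C++ with assumed numbers, not code from the app) of how long a preroll buffer lasts when each frame takes slightly longer to render than its duration:

```cpp
#include <cassert>

// Hypothetical numbers: 1080i50 -> 25 frames/s, i.e. 40 ms per frame.
// If rendering + readback costs more than the frame duration, each frame
// eats into the preroll headroom, and frames eventually arrive late.
int framesUntilBufferEmpty(double frameDurationMs,
                           double renderCostMs,
                           int prerollFrames)
{
    double deficitPerFrame = renderCostMs - frameDurationMs;
    if (deficitPerFrame <= 0.0)
        return -1; // rendering keeps up indefinitely
    double headroomMs = prerollFrames * frameDurationMs;
    return static_cast<int>(headroomMs / deficitPerFrame);
}
```

With a 3-frame preroll at 40 ms/frame, a render cost of just 41 ms/frame drains the buffer after about 120 frames, which matches the "works for a while, then drops" symptom.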

So here is my code:

-> First I load the QCRenderer into memory:

- (id)initWithPath:(NSString*)path forResolution:(NSSize)resolution
{
    if (self = [super init])
    {        
        NSOpenGLPixelFormatAttribute    attributes[] = {
            NSOpenGLPFAPixelBuffer,
            NSOpenGLPFANoRecovery,
            NSOpenGLPFAAccelerated,
            NSOpenGLPFADepthSize, 24,
            (NSOpenGLPixelFormatAttribute) 0
        };

        NSOpenGLPixelFormat* format = [[NSOpenGLPixelFormat alloc] initWithAttributes:attributes];

        quartzPixelBuffer = nil;

        quartzPixelBuffer = [[NSOpenGLPixelBuffer alloc] initWithTextureTarget:GL_TEXTURE_2D textureInternalFormat:GL_RGBA textureMaxMipMapLevel:0 pixelsWide:resolution.width pixelsHigh:resolution.height];

        if(quartzPixelBuffer == nil)
        {
            NSLog(@"Cannot create OpenGL pixel buffer");
        }

        //Create the OpenGL context to render with (with color and depth buffers)
        quartzOpenGLContext = [[NSOpenGLContext alloc] initWithFormat:format shareContext:nil];
        if(quartzOpenGLContext == nil)
        {
            NSLog(@"Cannot create OpenGL context");
        }

        [quartzOpenGLContext setPixelBuffer:quartzPixelBuffer cubeMapFace:0 mipMapLevel:0 currentVirtualScreen:[quartzOpenGLContext currentVirtualScreen]];

        //Create the QuartzComposer Renderer with that OpenGL context and the specified composition file
        NSString* correctPath = [path substringWithRange:NSMakeRange(0, path.length - 1)];

        quartzRenderer = [[QCRenderer alloc] initWithOpenGLContext:quartzOpenGLContext pixelFormat:format file:correctPath];

        if(quartzRenderer == nil)
        {
            NSLog(@"Cannot create QCRenderer");
        }
    }
    return self;
}

-> The next step is to render 3 frames before starting playback (BUFFER_DEPTH is currently set to 3):

- (void) preRollFrames;
{
    // reset scheduled
    [self resetScheduled];
    totalFramesScheduled = 0;

    if (isRunning == TRUE)
    {
        [self stopPlayback];
    }

    @autoreleasepool
    {
        for (double i = 0.0; i < ((1.0 / framesPerSecond) * BUFFER_DEPTH); i += 1.0/framesPerSecond)
        {
            // render image at given time
            [self createVideoFrame:TRUE];
        }
    }
}
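As an aside, accumulating `1.0/framesPerSecond` in a floating-point loop condition can run one iteration long or short once rounding error creeps in. A safer preroll counts frames with an integer and derives each timestamp from the frame index (standalone C++ sketch, with the actual frame rendering stubbed out):

```cpp
#include <cassert>

// Count preroll iterations with an integer instead of accumulating
// floating-point time; the timestamp is derived from the frame index,
// so it cannot drift. The ++rendered line stands in for createVideoFrame:.
int prerollFrameCount(int bufferDepth, double framesPerSecond,
                      double* lastTimestamp)
{
    int rendered = 0;
    for (int frame = 0; frame < bufferDepth; ++frame)
    {
        double t = frame / framesPerSecond; // exact frame -> time mapping
        *lastTimestamp = t;
        ++rendered;
    }
    return rendered;
}
```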

-> This is the createVideoFrame function. When scheduleBlack is set to true, a black frame must be rendered. If it's false, the renderFrameAtTime function of my QCRenderer class is called. The return of that function is handed to the DeckLink video frame object. Next, this frame is pushed into the scheduling queue of the DeckLink card (SDK).

- (void) createVideoFrame:(BOOL)schedule
{
    @autoreleasepool
    {
        // get displaymode
        IDeckLinkDisplayMode* decklinkdisplaymode = (IDeckLinkDisplayMode*)CurrentRes;

        // create new videoframe on output
        if (deckLinkOutput->CreateVideoFrame((int)decklinkdisplaymode->GetWidth(), (int)decklinkdisplaymode->GetHeight(), (int)decklinkdisplaymode->GetWidth() * 4, bmdFormat8BitARGB, bmdFrameFlagFlipVertical, &videoFrame) != S_OK)
        {
            // failed to create new video frame on output
            // display terminal message
            sendMessageToTerminal = [[mogiTerminalMessage alloc] initWithSendNotification:@"terminalErrorMessage" forMessage:[NSString stringWithFormat:@"DeckLink: Output %d -> Failed to create new videoframe", outputID]];
        }

        // CreateVideoFrame allocates the frame's pixel storage itself;
        // GetBytes hands back a pointer to that storage, so allocating a
        // separate buffer with valloc here only leaked memory on every frame
        void* frameBufferPtr = NULL;

        // set videoframe pointer
        if (videoFrame != NULL)
        {
            videoFrame->GetBytes(&frameBufferPtr);
        }

        // fill pointer with pixel data
        if (scheduleBlack == TRUE)
        {
            [qClear renderFrameAtTime:1.0 forBuffer:frameBufferPtr forScreen:0];

            // render first frame qRenderer
            if (qRender != NULL)
            {
                [qRender renderFirstFrame];
            }
        }
        else
        {
            [qRender renderFrameAtTime:totalSecondsScheduled forBuffer:frameBufferPtr forScreen:screenID];
            schedule = TRUE;
        }

        // if playback -> schedule frame
        if (schedule == TRUE)
        {
            // schedule frame
            if (videoFrame != NULL)
            {
                if (deckLinkOutput->ScheduleVideoFrame(videoFrame, (totalFramesScheduled * frameDuration), frameDuration, frameTimescale) != S_OK)
                {
                    // failed to schedule new frame
                    // display message to terminal
                    sendMessageToTerminal = [[mogiTerminalMessage alloc] initWithSendNotification:@"terminalErrorMessage" forMessage:[NSString stringWithFormat:@"DeckLink: Output %d -> Failed to schedule new videoframe", outputID]];
                }
                else
                {
                    // increase totalFramesScheduled
                    totalFramesScheduled ++;

                    // increase totalSecondsScheduled
                    totalSecondsScheduled += 1.0/framesPerSecond;
                }

                // clear videoframe
                videoFrame->Release();
                videoFrame = NULL;
            }
        }
    }
}
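Rounding a row stride up to a multiple of 64 bytes with the `(rowBytes + 63) & ~63` idiom is a standard alignment trick when allocating host-side pixel buffers (note the DeckLink frame reports its own stride via GetRowBytes(), so the frame's stride should always be queried rather than recomputed). A standalone C++ sketch of the idiom:

```cpp
#include <cassert>
#include <cstddef>

// Round rowBytes up to the next multiple of `alignment`.
// The mask trick requires alignment to be a power of two.
std::size_t alignRowBytes(std::size_t rowBytes, std::size_t alignment)
{
    return (rowBytes + alignment - 1) & ~(alignment - 1);
}
```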

-> The renderFrameAtTime function from my QCRenderer class:

- (void) renderFrameAtTime:(double)time forBuffer:(void*)frameBuffer forScreen:(int)screen
{
    @autoreleasepool
    {
        CGLContextObj cgl_ctx = [quartzOpenGLContext CGLContextObj];

        // render frame at time
        [quartzRenderer renderAtTime:time arguments:NULL];

        glReadPixels(0, 0, [quartzPixelBuffer pixelsWide], [quartzPixelBuffer pixelsHigh], GL_BGRA, GL_UNSIGNED_INT_8_8_8_8, frameBuffer);
    }
}

-> After prerolling the frames, playback is started. Every time a frame has been displayed, this callback method (DeckLink SDK) is called. If there was a late frame, I push totalFramesScheduled 1 frame further into the future:

PlaybackDelegate::PlaybackDelegate (DecklinkDevice* owner)
{
    pDecklinkDevice = owner;
}

HRESULT     PlaybackDelegate::ScheduledFrameCompleted (IDeckLinkVideoFrame* completedFrame, BMDOutputFrameCompletionResult result)
{
    if (result == bmdOutputFrameDisplayedLate)
    {
        // if displayed late bump scheduled time further into the future by one frame
        [pDecklinkDevice increaseScheduledFrames];
        NSLog(@"bumped %d", [pDecklinkDevice getOutputID]);
    }

    if ([pDecklinkDevice getIsRunning] == TRUE)
    {
        [pDecklinkDevice createVideoFrame:TRUE];
    }

    return S_OK;
}

So my questions: am I doing the OpenGL rendering process correctly? Could that be causing the delay after a few minutes? Or am I handling the displayedLate frames incorrectly, so the timing of the scheduling queue gets messed up after a while?

Thanks! Thomas

1 answer:

Answer 0 (score: 2):

When frames are displayed late, try advancing your frame counter according to the completion result delivered by the ScheduledFrameCompleted callback. Consider advancing by an additional two.
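A minimal sketch of that idea (plain C++ with hypothetical names; the real handler would live in ScheduledFrameCompleted and switch on the SDK's actual BMDOutputFrameCompletionResult values):

```cpp
#include <cassert>

// Hypothetical completion results mirroring the DeckLink enum
enum CompletionResult { Completed, DisplayedLate, Dropped, Flushed };

// When the hardware reports a late or dropped frame, skip the counter
// further ahead so subsequent frames are scheduled far enough in the
// future to be on time again.
int advanceFrameCounter(int totalFramesScheduled, CompletionResult result)
{
    switch (result)
    {
        case DisplayedLate:
        case Dropped:
            // the normal +1 for this frame, plus 2 extra of headroom
            return totalFramesScheduled + 3;
        default:
            return totalFramesScheduled + 1;
    }
}
```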

At least on Windows, for many years only workstation boards offered unthrottled pixel readback with NVIDIA products. My iMac has a GeForce-series card, but I haven't measured its readback performance. I wouldn't be surprised if glReadPixels is throttled.

Also try using GL_BGRA_EXT and GL_UNSIGNED_INT_8_8_8_8_REV.

You should have precise timing metrics for glReadPixels and for the hardware writes. I assume you're reading back progressive or interlaced frames rather than fields. Ideally the pixel readback should take under 10 milliseconds. Clearly, the entire render cycle needs to run faster than the frame rate of the video hardware.
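To collect those timing metrics, wrapping the readback in a steady-clock timer is enough for a first pass (C++ sketch; the callable passed in is a stand-in for the glReadPixels call):

```cpp
#include <chrono>

// Time a single call and return the elapsed wall-clock milliseconds.
// In the real app, `work` would be the glReadPixels readback.
template <typename Fn>
double elapsedMs(Fn work)
{
    auto start = std::chrono::steady_clock::now();
    work();
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(end - start).count();
}
```

Log the result for every frame: if the readback creeps past ~10 ms, or grows over time, it's the readback rather than the scheduling that is falling behind.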