Recording and merging videos for screen capture

Time: 2014-08-13 17:28:20

Tags: ios

I'm currently grabbing a video's frames with AVPlayerItemVideoOutput, using a CADisplayLink to pull frames from the output. I then pass the pixel buffers to an asset writer, like this:

- (void)displayLinkCallback:(CADisplayLink *)sender
{
    CMTime outputItemTime = kCMTimeInvalid;

    // Calculate the next vsync time, which is when the screen will be refreshed next.
    CFTimeInterval nextVSync = (sender.timestamp + sender.duration);

    outputItemTime = [self.videoOutput itemTimeForHostTime:nextVSync];
    if (self.playerOne.playerAsset.playable) {
        if ([[self videoOutput] hasNewPixelBufferForItemTime:outputItemTime] && self.newSampleReady) {
            dispatch_async(self.captureSessionQueue, ^{
                // Release the previous frame before replacing it;
                // copyPixelBufferForItemTime: returns a +1 retained buffer.
                CVPixelBufferRelease(self.lastPixelBuffer);
                self.lastPixelBuffer = [self.videoOutput copyPixelBufferForItemTime:outputItemTime itemTimeForDisplay:NULL];
                // Advance the presentation time by a fixed 1/24 s per appended frame.
                CMTime fpsTime = CMTimeMake(1, 24);
                self.currentVideoTime = CMTimeAdd(self.currentVideoTime, fpsTime);
                [_assetWriterInputPixelBufferAdaptor appendPixelBuffer:self.lastPixelBuffer withPresentationTime:self.currentVideoTime];
                self.newSampleReady = NO;
            });
        }
    }
}
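
For reference, here is a minimal setup sketch for the writer objects the callback assumes (the property names assetWriter and assetWriterInput and the output URL handling are placeholders, not the poster's actual code):

- (void)setUpAssetWriterWithURL:(NSURL *)outputURL
{
    NSError *error = nil;
    self.assetWriter = [AVAssetWriter assetWriterWithURL:outputURL
                                                fileType:AVFileTypeQuickTimeMovie
                                                   error:&error];

    // 640x480 H.264 matches the pixel buffers produced below.
    NSDictionary *videoSettings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                                     AVVideoWidthKey  : @640,
                                     AVVideoHeightKey : @480 };
    self.assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                               outputSettings:videoSettings];
    self.assetWriterInput.expectsMediaDataInRealTime = YES;

    _assetWriterInputPixelBufferAdaptor = [AVAssetWriterInputPixelBufferAdaptor
        assetWriterInputPixelBufferAdaptorWithAssetWriterInput:self.assetWriterInput
                                   sourcePixelBufferAttributes:@{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) }];

    [self.assetWriter addInput:self.assetWriterInput];
    [self.assetWriter startWriting];
    [self.assetWriter startSessionAtSourceTime:kCMTimeZero];
    self.currentVideoTime = kCMTimeZero;
}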

This lets me switch videos in real time while the screen recording continues. But I'd also like to switch to a split view of two players, grab each frame from both players, and merge them into a single video. AVComposition would work, except that you have to know in advance which tracks and time ranges you want to merge (see the sketch below). My screen-capture program lets the user switch freely between single view and split view. Is there a way to get the pixel buffers and use them to merge the recordings into one video?
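
For comparison, the AVComposition route looks roughly like this (assetOne and the time range are placeholders); the point is that every track and range must be decided before export, which does not fit free switching:

AVMutableComposition *composition = [AVMutableComposition composition];
AVMutableCompositionTrack *videoTrack =
    [composition addMutableTrackWithMediaType:AVMediaTypeVideo
                             preferredTrackID:kCMPersistentTrackID_Invalid];

NSError *error = nil;
AVAssetTrack *sourceTrack = [[assetOne tracksWithMediaType:AVMediaTypeVideo] firstObject];
// The time range has to be known here, ahead of time.
[videoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, assetOne.duration)
                    ofTrack:sourceTrack
                     atTime:kCMTimeZero
                      error:&error];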

I tried taking the first pixel buffer, creating two images from it, combining them, and then creating a new pixel buffer to pass back to the asset writer, but I just get a black-screen video. Here's my code:

-(CVPixelBufferRef)pixelBufferToCGImageRef:(CVPixelBufferRef)pixelBuffer withSecond:(CVPixelBufferRef)pixelBuffer2
{
    CIContext *temporaryContext = [CIContext contextWithOptions:nil];

    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];

    // For this test both halves come from the first buffer;
    // pixelBuffer2 is not used yet.
    UIImage *im1 = [UIImage imageWithCIImage:ciImage];
    UIImage *im2 = [UIImage imageWithCIImage:ciImage];

    // Draw the two images side by side into a 640x480 bitmap context.
    CGSize newSize = CGSizeMake(640, 480);
    UIGraphicsBeginImageContext(newSize);
    [im1 drawInRect:CGRectMake(0, 0, newSize.width / 2, newSize.height)];
    [im2 drawInRect:CGRectMake(newSize.width / 2, 0, newSize.width / 2, newSize.height) blendMode:kCGBlendModeNormal alpha:1.0];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    CIImage *newCIImage = [newImage CIImage];
    UIGraphicsEndImageContext();

    // Render the combined image into a fresh pixel buffer for the asset writer.
    CVPixelBufferRef pbuff = NULL;
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                          640,
                                          480,
                                          kCVPixelFormatType_32ARGB,
                                          (__bridge CFDictionaryRef)(options),
                                          &pbuff);
    if (status == kCVReturnSuccess) {
        [temporaryContext render:newCIImage
                 toCVPixelBuffer:pbuff
                          bounds:CGRectMake(0, 0, 640, 480)
                      colorSpace:nil];
    } else {
        NSLog(@"Failed to create pbuff");
    }

    // Created with CVPixelBufferCreate, so the caller owns the returned buffer.
    return pbuff;
}
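
A hypothetical call site for the method above (pixelBufferOne and pixelBufferTwo are assumed properties); the buffer comes back with a +1 retain count from CVPixelBufferCreate, so the caller has to release it:

CVPixelBufferRef combined = [self pixelBufferToCGImageRef:self.pixelBufferOne
                                               withSecond:self.pixelBufferTwo];
if (combined) {
    [_assetWriterInputPixelBufferAdaptor appendPixelBuffer:combined
                                      withPresentationTime:self.currentVideoTime];
    CVPixelBufferRelease(combined); // balances CVPixelBufferCreate
}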

Any suggestions?

1 Answer:

Answer 0 (score: 0):

The black screen I was getting was because ciImage was becoming nil right after I obtained it when running on the simulator. If I run the code on a device, it works.
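
If anyone else hits the same problem, one way to sidestep the UIImage round trip entirely is to stay in Core Image: scale each buffer into its half of the output and composite with CISourceOverCompositing. This is a minimal, untested sketch assuming a reusable self.ciContext property and a 640x480 output:

- (CVPixelBufferRef)combinedPixelBufferFrom:(CVPixelBufferRef)left and:(CVPixelBufferRef)right
{
    CIImage *leftImage  = [CIImage imageWithCVPixelBuffer:left];
    CIImage *rightImage = [CIImage imageWithCVPixelBuffer:right];

    // Scale each source into its half of the 640x480 output and shift the
    // right image into the right half.
    CGFloat lw = CVPixelBufferGetWidth(left),  lh = CVPixelBufferGetHeight(left);
    CGFloat rw = CVPixelBufferGetWidth(right), rh = CVPixelBufferGetHeight(right);
    leftImage  = [leftImage  imageByApplyingTransform:CGAffineTransformMakeScale(320.0 / lw, 480.0 / lh)];
    rightImage = [rightImage imageByApplyingTransform:
                     CGAffineTransformConcat(CGAffineTransformMakeScale(320.0 / rw, 480.0 / rh),
                                             CGAffineTransformMakeTranslation(320.0, 0.0))];

    CIFilter *over = [CIFilter filterWithName:@"CISourceOverCompositing"];
    [over setValue:leftImage forKey:kCIInputImageKey];
    [over setValue:rightImage forKey:kCIInputBackgroundImageKey];

    CVPixelBufferRef pbuff = NULL;
    NSDictionary *options = @{ (id)kCVPixelBufferCGImageCompatibilityKey : @YES,
                               (id)kCVPixelBufferCGBitmapContextCompatibilityKey : @YES };
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, 640, 480,
                                          kCVPixelFormatType_32BGRA,
                                          (__bridge CFDictionaryRef)options, &pbuff);
    if (status != kCVReturnSuccess) {
        return NULL;
    }
    [self.ciContext render:over.outputImage
           toCVPixelBuffer:pbuff
                    bounds:CGRectMake(0, 0, 640, 480)
                colorSpace:nil];
    return pbuff; // caller releases
}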