Can I use AVFoundation to stream downloaded video frames into an OpenGL ES texture?

Asked: 2012-09-19 18:06:32

Tags: objective-c ios opengl-es streaming avfoundation

I've been able to use AVFoundation's AVAssetReader class to upload video frames into an OpenGL ES texture. It has a caveat, however: it fails when used with an AVURLAsset that points to remote media. This failure isn't well documented, and I'm wondering whether there is any workaround for the shortcoming.

1 Answer:

Answer 0 (score: 31)

I've been able to simplify this considerably using some APIs released with iOS 6. It doesn't use AVAssetReader at all; instead it relies on a class named AVPlayerItemVideoOutput. An instance of this class can be attached to any AVPlayerItem via the new -addOutput: method.

Unlike AVAssetReader, this class works with AVPlayerItems backed by a remote AVURLAsset. It also has the advantage of supporting more sophisticated playback interfaces: you can do non-linear playback via -copyPixelBufferForItemTime:itemTimeForDisplay:, rather than being restricted to AVAssetReader's strictly sequential -copyNextSampleBuffer method.


Sample Code

// Initialize the AVFoundation state
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:someUrl options:nil];
[asset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"tracks"] completionHandler:^{

    NSError* error = nil;
    AVKeyValueStatus status = [asset statusOfValueForKey:@"tracks" error:&error];
    if (status == AVKeyValueStatusLoaded)
    {
        NSDictionary* settings = @{ (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithInt:kCVPixelFormatType_32BGRA] };
        AVPlayerItemVideoOutput* output = [[[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:settings] autorelease];
        AVPlayerItem* playerItem = [AVPlayerItem playerItemWithAsset:asset];
        [playerItem addOutput:output];
        AVPlayer* player = [AVPlayer playerWithPlayerItem:playerItem];

        // Assume some instance variables exist here. You'll need them to control
        // playback (via the AVPlayer) and to copy sample buffers (via the AVPlayerItemVideoOutput).
        [self setPlayer:player];
        [self setPlayerItem:playerItem];
        [self setOutput:output];
    }
    else
    {
        NSLog(@"%@ Failed to load the tracks.", self);
    }
}];

// Now at any later point in time, you can get a pixel buffer
// that corresponds to the current AVPlayer state like this:
CVPixelBufferRef buffer = [[self output] copyPixelBufferForItemTime:[[self playerItem] currentTime] itemTimeForDisplay:nil];

Once you have a buffer, you can upload it to OpenGL however you like. I recommend the terribly named CVOpenGLESTextureCacheCreateTextureFromImage() function, because you'll get hardware acceleration on all the newer devices, which is much faster than glTexSubImage2D(). See Apple's GLCameraRipple and RosyWriter demos for examples.
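As a rough sketch of that last step (not from the original answer: the `_textureCache` instance variable, the method name, and the assumption that an EAGLContext is current are all illustrative), the texture-cache upload might look like this:

```objc
#import <AVFoundation/AVFoundation.h>
#import <CoreVideo/CoreVideo.h>
#import <OpenGLES/ES2/gl.h>
#import <OpenGLES/EAGL.h>

// Sketch: upload a BGRA pixel buffer through a CVOpenGLESTextureCache.
// Assumes an EAGLContext is current on this thread and that
// _textureCache is a CVOpenGLESTextureCacheRef instance variable.
- (void)uploadPixelBuffer:(CVPixelBufferRef)buffer
{
    if (_textureCache == NULL) {
        // The cache is created once and reused across frames.
        CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL,
                                     [EAGLContext currentContext],
                                     NULL, &_textureCache);
    }

    CVOpenGLESTextureRef texture = NULL;
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
        _textureCache, buffer, NULL,
        GL_TEXTURE_2D, GL_RGBA,
        (GLsizei)CVPixelBufferGetWidth(buffer),
        (GLsizei)CVPixelBufferGetHeight(buffer),
        GL_BGRA, GL_UNSIGNED_BYTE, 0, &texture);

    if (texture) {
        glBindTexture(CVOpenGLESTextureGetTarget(texture),
                      CVOpenGLESTextureGetName(texture));
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        // ... draw with the texture here ...
        CFRelease(texture);
    }

    // -copyPixelBufferForItemTime:itemTimeForDisplay: returns a +1 reference.
    CVPixelBufferRelease(buffer);
    // Flush so textures recycled by the cache are actually released.
    CVOpenGLESTextureCacheFlush(_textureCache, 0);
}
```

Note that the BGRA format matches the kCVPixelFormatType_32BGRA attribute requested from the AVPlayerItemVideoOutput above.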