H.264 video streaming with AVFoundation on OS X?

Date: 2015-05-08 14:59:23

Tags: macos video-streaming avfoundation h.264 video-encoding

Based on this stack overflow question and Apple's Direct Access to Video Encoding and Decoding session from WWDC 2014, I have put together a small Xcode project that demonstrates how to decode and display an H.264 stream with AVFoundation. The project is available on github here.

I have dumped a sequence of 1000 NALUs to files and included them in the project (nalu_000.bin ... nalu_999.bin).

The interesting parts of the code, which parse the NALUs and feed them to the AVSampleBufferDisplayLayer, are included below:

@import AVKit;
@import AVFoundation;
@import CoreMedia;

typedef enum {
    NALUTypeSliceNoneIDR = 1,
    NALUTypeSliceIDR = 5,
    NALUTypeSPS = 7,
    NALUTypePPS = 8
} NALUType;

@interface ViewController ()

@property (nonatomic, strong, readonly) VideoView * videoView;
@property (nonatomic, strong) NSData * spsData;
@property (nonatomic, strong) NSData * ppsData;
@property (nonatomic) CMVideoFormatDescriptionRef videoFormatDescr;
@property (nonatomic) BOOL videoFormatDescriptionAvailable;

@end

@implementation ViewController

- (VideoView *)videoView {
    return (VideoView *) self.view;
}

- (instancetype)initWithCoder:(NSCoder *)coder {
    self = [super initWithCoder:coder];

    if (self) {
        _videoFormatDescriptionAvailable = NO;
    }

    return self;
}

- (int)getNALUType:(NSData *)NALU {
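    /* The Annex-B start codes have already been stripped from the dumped NALUs,
       so the first byte is the NAL unit header; its low 5 bits hold the
       nal_unit_type field. */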
    uint8_t * bytes = (uint8_t *) NALU.bytes;

    return bytes[0] & 0x1F;
}

- (void)handleSlice:(NSData *)NALU {
    if (self.videoFormatDescriptionAvailable) {
        /* The length of the NALU in big endian */
        const uint32_t NALUlengthInBigEndian = CFSwapInt32HostToBig((uint32_t) NALU.length);

        /* Create the slice */
        NSMutableData * slice = [[NSMutableData alloc] initWithBytes:&NALUlengthInBigEndian length:4];

        /* Append the contents of the NALU */
        [slice appendData:NALU];

        /* Create the video block */
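        /* Passing kCFAllocatorNull as the block allocator below makes the block
           buffer reference slice.bytes directly, without copying the memory or
           taking ownership of it. */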
        CMBlockBufferRef videoBlock = NULL;

        OSStatus status;

        status =
            CMBlockBufferCreateWithMemoryBlock
                (
                    NULL,
                    (void *) slice.bytes,
                    slice.length,
                    kCFAllocatorNull,
                    NULL,
                    0,
                    slice.length,
                    0,
                    & videoBlock
                );

        NSLog(@"BlockBufferCreation: %@", (status == kCMBlockBufferNoErr) ? @"successfully." : @"failed.");

        /* Create the CMSampleBuffer */
        CMSampleBufferRef sbRef = NULL;

        const size_t sampleSizeArray[] = { slice.length };

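        /* No sample timing information is attached here (0 timing entries, NULL
           timing array); the attachment set further below marks the sample for
           immediate display instead. */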
        status =
            CMSampleBufferCreate
                (
                    kCFAllocatorDefault,
                    videoBlock,
                    true,
                    NULL,
                    NULL,
                    _videoFormatDescr,
                    1,
                    0,
                    NULL,
                    1,
                    sampleSizeArray,
                    & sbRef
                );

        NSLog(@"SampleBufferCreate: %@", (status == noErr) ? @"successfully." : @"failed.");

        /* Enqueue the CMSampleBuffer in the AVSampleBufferDisplayLayer */
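        /* kCMSampleAttachmentKey_DisplayImmediately makes the layer display the
           sample as soon as it is decoded, rather than according to a
           presentation timestamp. */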
        CFArrayRef attachments = CMSampleBufferGetSampleAttachmentsArray(sbRef, YES);
        CFMutableDictionaryRef dict = (CFMutableDictionaryRef)CFArrayGetValueAtIndex(attachments, 0);
        CFDictionarySetValue(dict, kCMSampleAttachmentKey_DisplayImmediately, kCFBooleanTrue);

        NSLog(@"Error: %@, Status: %@",
              self.videoView.videoLayer.error,
                (self.videoView.videoLayer.status == AVQueuedSampleBufferRenderingStatusUnknown)
                    ? @"unknown"
                    : (
                        (self.videoView.videoLayer.status == AVQueuedSampleBufferRenderingStatusRendering)
                            ? @"rendering"
                            :@"failed"
                      )
             );

        dispatch_async(dispatch_get_main_queue(),^{
            [self.videoView.videoLayer enqueueSampleBuffer:sbRef];
            [self.videoView.videoLayer setNeedsDisplay];
        });

        NSLog(@" ");
    }
}

- (void)handleSPS:(NSData *)NALU {
    _spsData = [NALU copy];
}

- (void)handlePPS:(NSData *)NALU {
    _ppsData = [NALU copy];
}

- (void)updateFormatDescriptionIfPossible {
    if (_spsData != nil && _ppsData != nil) {
        const uint8_t * const parameterSetPointers[2] = {
            (const uint8_t *) _spsData.bytes,
            (const uint8_t *) _ppsData.bytes
        };

        const size_t parameterSetSizes[2] = {
            _spsData.length,
            _ppsData.length
        };

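        /* The last argument (4) is the NAL unit header length, i.e. the size in
           bytes of the length prefix that precedes each NALU in the
           AVCC-formatted samples. */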
        OSStatus status =
            CMVideoFormatDescriptionCreateFromH264ParameterSets
                (
                    kCFAllocatorDefault,
                    2,
                    parameterSetPointers,
                    parameterSetSizes,
                    4,
                    & _videoFormatDescr
                );

        _videoFormatDescriptionAvailable = YES;

        NSLog(@"Updated CMVideoFormatDescription. Creation: %@.", (status == noErr) ? @"successfully." : @"failed.");
    }
}

- (void)parseNALU:(NSData *)NALU {
    int type = [self getNALUType: NALU];

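    /* naluTypesStrings is assumed to be a lookup table defined elsewhere in the
       project that maps NAL unit type values (0-31) to readable names for
       logging. */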
    NSLog(@"NALU with Type \"%@\" received.", naluTypesStrings[type]);

    switch (type)
    {
        case NALUTypeSliceNoneIDR:
        case NALUTypeSliceIDR:
            [self handleSlice:NALU];
            break;
        case NALUTypeSPS:
            [self handleSPS:NALU];
            [self updateFormatDescriptionIfPossible];
            break;
        case NALUTypePPS:
            [self handlePPS:NALU];
            [self updateFormatDescriptionIfPossible];
            break;
        default:
            break;
    }
}

- (IBAction)streamVideo:(id)sender {
    NSBundle * mainBundle = [NSBundle mainBundle];

    for (int k = 0; k < 1000; k++) {
        NSString * resource = [NSString stringWithFormat:@"nalu_%03d", k];
        NSString * path = [mainBundle pathForResource:resource ofType:@"bin"];
        NSData * NALU = [NSData dataWithContentsOfFile:path];
        [self parseNALU:NALU];
    }
}

@end
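
The VideoView class referenced above (with its videoLayer property) is not part of this excerpt. Purely for reference, a minimal sketch of how such a layer-hosting NSView might expose an AVSampleBufferDisplayLayer could look like the following; only the class name, the videoLayer property, and the use of AVSampleBufferDisplayLayer come from the listing above, everything else is an assumption about the actual project:

@import Cocoa;
@import AVFoundation;

@interface VideoView : NSView

@property (nonatomic, strong, readonly) AVSampleBufferDisplayLayer * videoLayer;

@end

@implementation VideoView

- (instancetype)initWithCoder:(NSCoder *)coder {
    self = [super initWithCoder:coder];

    if (self) {
        /* Create the display layer and let it scale its content to fit. */
        _videoLayer = [AVSampleBufferDisplayLayer layer];
        _videoLayer.videoGravity = AVLayerVideoGravityResizeAspect;

        /* Make the view layer-hosting and attach the display layer. */
        self.layer = [CALayer layer];
        self.wantsLayer = YES;

        _videoLayer.frame = self.bounds;
        _videoLayer.autoresizingMask = kCALayerWidthSizable | kCALayerHeightSizable;
        [self.layer addSublayer:_videoLayer];
    }

    return self;
}

@end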

Basically, the code works as follows:

  1. It creates a CMVideoFormatDescriptionRef from the SPS and PPS NALUs using CMVideoFormatDescriptionCreateFromH264ParameterSets.
  2. It repackages the NALUs in the AVCC format. Since the NALU start codes have already been removed, this simply means prefixing each NALU with its length as a 4-byte header (in big-endian); a general sketch of this conversion is included at the end of this post.
  3. It packs all VCL NALUs into CMSampleBuffers and feeds them to the AVSampleBufferDisplayLayer.
  4. The code appears to read the SPS and PPS parameter sets correctly. Unfortunately, something goes wrong when the CMSampleBuffers are fed to the AVSampleBufferDisplayLayer. For every frame, Xcode dumps the following messages to the log window (the program does not crash):

    [16:05:22.533] <<<< VMC >>>> vmc2PostDecodeError: posting DecodeError (-8969) -- PTS was nan = 0/0
    [16:05:22.534] vtDecompressionDuctDecodeSingleFrame signalled err=-8969 (err) (VTVideoDecoderDecodeFrame returned error) at /SourceCache/CoreMedia_frameworks/CoreMedia-1562.235/Sources/VideoToolbox/VTDecompressionSession.c line 3241
    [16:05:22.535] <<<< VMC >>>> vmc2DequeueAndDecodeFrame: frame failed - err -8969
    

    Additionally, the frames look like strange Monet paintings:

    [image: Monet 2]

    I am no expert when it comes to the H.264 format (or video encoding/decoding in general), so I would be very grateful if someone with a better grasp of the topic could take a look at the demo project and perhaps point me in the right direction.

    I will leave the code on github as an example for anyone else interested in decoding H.264 on OS X / iOS.
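
As a footnote to step 2 above: the NALUs dumped for this project already have their Annex-B start codes removed. For the general case, the following is a small sketch of how a raw Annex-B NALU could be converted into an AVCC-style sample (a 4-byte big-endian length prefix followed by the payload); the helper name is made up for illustration and is not part of the project:

#import <Foundation/Foundation.h>

/* Illustrative helper: strip an optional 3- or 4-byte Annex-B start code and
   prepend the 4-byte big-endian NALU length expected by the AVCC format. */
static NSData * AVCCSampleFromAnnexBNALU(NSData * NALU) {
    const uint8_t * bytes = (const uint8_t *) NALU.bytes;
    NSUInteger offset = 0;

    if (NALU.length >= 4 && bytes[0] == 0 && bytes[1] == 0 && bytes[2] == 0 && bytes[3] == 1) {
        offset = 4;  /* 00 00 00 01 start code */
    } else if (NALU.length >= 3 && bytes[0] == 0 && bytes[1] == 0 && bytes[2] == 1) {
        offset = 3;  /* 00 00 01 start code */
    }

    const uint32_t payloadLength = (uint32_t) (NALU.length - offset);
    const uint32_t lengthInBigEndian = CFSwapInt32HostToBig(payloadLength);

    NSMutableData * sample = [[NSMutableData alloc] initWithBytes:&lengthInBigEndian length:4];
    [sample appendBytes:bytes + offset length:payloadLength];

    return sample;
}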

0 Answers:

There are no answers yet.