Detecting video interlacing (Objective-C)

Date: 2016-02-19 13:00:31

Tags: objective-c macos video avfoundation interlacing

I am trying to determine whether a user-selected video file is interlaced or progressive, and then do some processing based on that.

I tried to check whether the CMSampleBuffer I extract is marked as top-field-first or bottom-field-first, but this returns NULL for every input.

NSMutableDictionary *pixBuffAttributes = [[NSMutableDictionary alloc] init];
[pixBuffAttributes setObject:[NSNumber numberWithInt:kCVPixelFormatType_422YpCbCr8]
                      forKey:(NSString *)kCVPixelBufferPixelFormatTypeKey];

myAsset = [[AVURLAsset alloc] initWithURL:urlpath options:pixBuffAttributes];
myAssetReader = [[AVAssetReader alloc] initWithAsset:myAsset error:nil];
myAssetOutput = [[AVAssetReaderTrackOutput alloc]
    initWithTrack:[[myAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0]
   outputSettings:pixBuffAttributes];
[myAssetReader addOutput:myAssetOutput];
[myAssetReader startReading];

CMSampleBufferRef ref = [myAssetOutput copyNextSampleBuffer];
if (CVBufferGetAttachments(ref, kCVImageBufferFieldDetailKey, nil) == nil)
{
    // always the case
}
else
{
    // never happens
}

This always returns nil, regardless of whether the input file is interlaced. I may be going about testing this in entirely the wrong manner, so any help is much appreciated!

1 Answer:

Answer 0 (score: 0):

Thanks to jeschot on the Apple Developer forums for answering this: https://forums.developer.apple.com/thread/39029 . For anyone else trying to pull most of this metadata out of a video in a similar way:

    AVAssetTrack *videoTrack = [myAsset tracksWithMediaType:AVMediaTypeVideo][0];
    CMFormatDescriptionRef formatDesc =
        (CMFormatDescriptionRef)videoTrack.formatDescriptions[0];

    // naturalSize is the display size; see the clean-aperture caveat below.
    CGSize inputsize = videoTrack.naturalSize;

    properties->m_frame_rows = inputsize.height;
    properties->m_pixel_cols = inputsize.width;

    // One field per frame means progressive; two means interlaced.
    CFNumberRef fieldCount = CMFormatDescriptionGetExtension(
        formatDesc, kCMFormatDescriptionExtension_FieldCount);

    if ([(NSNumber *)fieldCount integerValue] == 1)
    {
        properties->m_interlaced = false;
        properties->m_fld2_upper = false;
    }
    else
    {
        properties->m_interlaced = true;

        // FieldDetail says which field comes first, spatially or temporally.
        // Compare with CFEqual rather than ==, since pointer equality is not
        // guaranteed for CFString values read out of the format description.
        CFPropertyListRef interlace = CMFormatDescriptionGetExtension(
            formatDesc, kCMFormatDescriptionExtension_FieldDetail);

        if (CFEqual(interlace, kCMFormatDescriptionFieldDetail_SpatialFirstLineEarly) ||
            CFEqual(interlace, kCMFormatDescriptionFieldDetail_TemporalTopFirst))
        {
            properties->m_fld2_upper = false;
        }
        else if (CFEqual(interlace, kCMFormatDescriptionFieldDetail_SpatialFirstLineLate) ||
                 CFEqual(interlace, kCMFormatDescriptionFieldDetail_TemporalBottomFirst))
        {
            properties->m_fld2_upper = true;
        }
    }
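Stripped of the Core Media plumbing, the four field-detail cases collapse to a single boolean. A plain-C sketch of the same mapping (the enum names are hypothetical stand-ins for the kCMFormatDescriptionFieldDetail_* CFString constants, not real API):

```c
#include <stdbool.h>

/* Hypothetical stand-ins for the kCMFormatDescriptionFieldDetail_* constants. */
typedef enum {
    FieldDetailSpatialFirstLineEarly, /* top field is spatially first    */
    FieldDetailSpatialFirstLineLate,  /* bottom field is spatially first */
    FieldDetailTemporalTopFirst,      /* top field is temporally first   */
    FieldDetailTemporalBottomFirst    /* bottom field is temporally first */
} FieldDetail;

/* true when field 2 is the upper (top) field, mirroring m_fld2_upper above. */
static bool fld2_upper(FieldDetail d)
{
    return d == FieldDetailSpatialFirstLineLate ||
           d == FieldDetailTemporalBottomFirst;
}
```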

    // minFrameDuration is a rational: value ticks out of timescale ticks/second.
    CMTime minDuration = [myAsset tracksWithMediaType:AVMediaTypeVideo][0].minFrameDuration;

    properties->m_ticks_duration = (unsigned int)minDuration.value;
    if (properties->m_interlaced)
    {
        // Two fields per frame: double the tick rate so fields are addressable.
        properties->m_ticks_per_second = minDuration.timescale * 2;
    }
    else
    {
        properties->m_ticks_per_second = minDuration.timescale;
    }
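The frame-rate bookkeeping at the end is just rational arithmetic on CMTime's value/timescale pair. A minimal C sketch of the same computation, with a hypothetical struct mirroring the properties fields used above:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical mirror of the timing fields filled in above. */
typedef struct {
    unsigned int m_ticks_duration;   /* duration of one frame, in ticks */
    unsigned int m_ticks_per_second; /* tick rate of the timebase       */
} TimingProps;

/* value/timescale is the frame duration in seconds, as in CMTime.
   Interlaced content carries two fields per frame, so the tick rate
   is doubled to keep per-field timing representable. */
static void fill_timing(TimingProps *p, int64_t value, int32_t timescale,
                        bool interlaced)
{
    p->m_ticks_duration = (unsigned int)value;
    p->m_ticks_per_second = interlaced ? (unsigned int)(timescale * 2)
                                       : (unsigned int)timescale;
}
```

For NTSC-rate interlaced material (frame duration 1001/30000 s) this yields a duration of 1001 ticks at 60000 ticks per second.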

And for anyone else confused by this: naturalSize is not always the full resolution of the image if the container carries metadata such as a clean aperture smaller than the full resolution. Currently trying to work around that, but that's a different question!

Update:

I found that naturalSize is the display resolution. To find the encoded resolution, decode the first frame and check the dimensions of the buffer object you get back. The two can differ in cases such as a pixel aspect ratio != 1 or (as above) a clean aperture.
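As an illustration of why the two can differ: with a non-square pixel aspect ratio, the display width is the encoded width scaled by that ratio. A small C sketch of the relationship (the function and figures are illustrative, not an AVFoundation API):

```c
#include <stdint.h>

/* Display width = encoded width * (par_num / par_den), rounded to nearest.
   e.g. 1440-pixel-wide anamorphic HD with a 4:3 pixel aspect ratio
   displays as 1920 pixels wide. */
static uint32_t display_width(uint32_t encoded_width,
                              uint32_t par_num, uint32_t par_den)
{
    return (uint32_t)(((uint64_t)encoded_width * par_num + par_den / 2) / par_den);
}
```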