Reading the luminance (Y') values of video pixels with AVFoundation

Date: 2012-02-07 18:56:03

Tags: iphone objective-c video

I found a code sample that accesses the pixel data of a video file. I could use some help figuring out how to get the luminance (Y') value of each pixel. Thanks!

Here is my code:

- (void) readMovie:(NSURL *)url
{

    [self performSelectorOnMainThread:@selector(updateInfo:) withObject:@"scanning" waitUntilDone:YES];

    startTime = [NSDate date];

    AVURLAsset * asset = [AVURLAsset URLAssetWithURL:url options:nil];

    [asset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"tracks"] completionHandler:
     ^{
         dispatch_async(dispatch_get_main_queue(),
                        ^{



                            AVAssetTrack * videoTrack = nil;
                            NSArray * tracks = [asset tracksWithMediaType:AVMediaTypeVideo];
                            if ([tracks count] == 1)
                            {
                                videoTrack = [tracks objectAtIndex:0];

                                videoDuration = CMTimeGetSeconds([videoTrack timeRange].duration);

                                NSError * error = nil;

                                // _movieReader is a member variable
                                _movieReader = [[AVAssetReader alloc] initWithAsset:asset error:&error];
                                if (error)
                                    NSLog(@"%@", error.localizedDescription);       

                                NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
                                NSNumber* value = [NSNumber numberWithUnsignedInt: kCVPixelFormatType_32BGRA];
                                NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];

                                AVAssetReaderTrackOutput* output = [AVAssetReaderTrackOutput 
                                                         assetReaderTrackOutputWithTrack:videoTrack 
                                                         outputSettings:videoSettings];
                                output.alwaysCopiesSampleData = NO;

                                [_movieReader addOutput:output];

                                if ([_movieReader startReading])
                                {
                                    NSLog(@"reading started");

                                    [self readNextMovieFrame];
                                }
                                else
                                {
                                    NSLog(@"reading can't be started");
                                }
                            }
                        });
     }];
}


- (void) readNextMovieFrame
{
    //NSLog(@"readNextMovieFrame called");
    if (_movieReader.status == AVAssetReaderStatusReading)
    {
        //NSLog(@"status is reading");

        AVAssetReaderTrackOutput * output = [_movieReader.outputs objectAtIndex:0];
        CMSampleBufferRef sampleBuffer = [output copyNextSampleBuffer];
        if (sampleBuffer)
        { // I'm guessing this is the expensive part that we can skip if we want to skip frames
            CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 

            // Lock the image buffer
            CVPixelBufferLockBaseAddress(imageBuffer,0); 

            // Get information of the image
            uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
            size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
            size_t width = CVPixelBufferGetWidth(imageBuffer);
            size_t height = CVPixelBufferGetHeight(imageBuffer); 

            //
            //  Here's where you can process the buffer!
            //  (your code goes here)
            //
            //  Finish processing the buffer!
            //

            // Unlock the image buffer
            CVPixelBufferUnlockBaseAddress(imageBuffer,0);
            CFRelease(sampleBuffer);


            // Note: recursing once per frame grows the call stack on long
            // videos; a while loop would be safer in production.
            [self readNextMovieFrame];
        }
        else
        {
            NSLog(@"could not copy next sample buffer. status is %d", _movieReader.status);

            NSTimeInterval scanDuration = -[startTime timeIntervalSinceNow];

            float scanMultiplier = videoDuration / scanDuration;

            NSString* info = [NSString stringWithFormat:@"Done\n\nvideo duration: %f seconds\nscan duration: %f seconds\nmultiplier: %f", videoDuration, scanDuration, scanMultiplier];

            [self performSelectorOnMainThread:@selector(updateInfo:) withObject:info waitUntilDone:YES];
        }


    }
    else
    {
        NSLog(@"status is now %d", _movieReader.status);


    }

}


- (void) updateInfo: (id)message
{
    NSString* info = [NSString stringWithFormat:@"%@", message];

    [infoTextView setText:info];
}

1 Answer:

Answer 0 (score: 1)

Since you are requesting kCVPixelFormatType_32BGRA, your samples will not carry luminance information directly. You may need to compute it from the BGR values, or try getting the samples in a YUV format instead, such as kCVPixelFormatType_420YpCbCr8Planar.
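For the BGRA route, the luma can be approximated with the BT.601 weighted sum of the R, G and B bytes. A minimal sketch (the helper name and the integer scaling are mine, not from the answer):

```c
#include <stdint.h>

/* Approximate BT.601 luma for one 32BGRA pixel (byte order B, G, R, A):
   Y' ~= 0.299 R + 0.587 G + 0.114 B, done in integer arithmetic. */
static uint8_t bgra_luma(const uint8_t *px)
{
    uint32_t b = px[0];
    uint32_t g = px[1];
    uint32_t r = px[2];
    return (uint8_t)((299 * r + 587 * g + 114 * b) / 1000);
}
```

In the "process the buffer" section of readNextMovieFrame, each pixel starts at baseAddress + y * bytesPerRow + x * 4, so you would call this helper on that address for every (x, y).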

Update

If you get a YUV planar format, your 'baseAddress' variable will point to the luminance value of the leftmost pixel of the top row, and 'width' and 'height' tell you the number of pixels per row and the number of rows. (Strictly speaking, for planar pixel buffers it is safer to fetch the Y plane with CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0) and its row stride with CVPixelBufferGetBytesPerRowOfPlane.)
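The per-frame scan of the Y plane could then be sketched as follows. This is a self-contained sketch with a synthetic buffer; in the real code baseAddress, bytesPerRow, width and height come from the CVPixelBuffer calls shown in the question, and bytesPerRow may exceed width because of row padding:

```c
#include <stddef.h>
#include <stdint.h>

/* Sum every luma value in a planar Y buffer, stepping from row to row
   by bytesPerRow (which may include padding bytes beyond `width`). */
static uint64_t sum_luma(const uint8_t *baseAddress, size_t bytesPerRow,
                         size_t width, size_t height)
{
    uint64_t total = 0;
    for (size_t y = 0; y < height; y++) {
        const uint8_t *row = baseAddress + y * bytesPerRow;
        for (size_t x = 0; x < width; x++)
            total += row[x];
    }
    return total;
}
```

Dividing the returned sum by width * height gives the average brightness of the frame, which is a common use for this kind of scan.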