Skipping frames when processing video on iOS

Date: 2012-02-11 18:13:22

Tags: iphone objective-c ios video avfoundation

I'm trying to process a local video file and simply run some analysis on the pixel data; I'm not outputting anything. My current code steps through every single frame of the video, but I'd really like to skip ~15 frames at a time to speed things up. Is there a way to skip frames without decoding them?

In FFmpeg, I would simply call av_read_frame without calling avcodec_decode_video2.
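
For comparison, here is a rough sketch of that FFmpeg pattern (2012-era C API). fmtCtx, codecCtx, frame, and videoStreamIndex are hypothetical names assumed to be set up elsewhere; note also that skipping the decode only yields clean pixels on keyframes, since inter-frames depend on the frames before them.

AVPacket packet;
int frameIndex = 0;
while (av_read_frame(fmtCtx, &packet) >= 0)   // demuxing only: cheap
{
    if (packet.stream_index == videoStreamIndex)
    {
        if (frameIndex % 15 == 0)             // decode only every 15th frame
        {
            int gotFrame = 0;
            avcodec_decode_video2(codecCtx, frame, &gotFrame, &packet);
            if (gotFrame)
            {
                // run the pixel analysis on frame->data / frame->linesize
            }
        }
        frameIndex++;
    }
    av_free_packet(&packet);
}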

Thanks in advance! Here's my current code:

- (void) readMovie:(NSURL *)url
{

    [self performSelectorOnMainThread:@selector(updateInfo:) withObject:@"scanning" waitUntilDone:YES];

    startTime = [NSDate date];

    AVURLAsset * asset = [AVURLAsset URLAssetWithURL:url options:nil];

    [asset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"tracks"] completionHandler:
     ^{
         dispatch_async(dispatch_get_main_queue(),
                        ^{
                            AVAssetTrack * videoTrack = nil;
                            NSArray * tracks = [asset tracksWithMediaType:AVMediaTypeVideo];
                            if ([tracks count] == 1)
                            {
                                videoTrack = [tracks objectAtIndex:0];

                                videoDuration = CMTimeGetSeconds([videoTrack timeRange].duration);

                                NSError * error = nil;

                                // _movieReader is a member variable
                                _movieReader = [[AVAssetReader alloc] initWithAsset:asset error:&error];
                                if (error)
                                    NSLog(@"%@", error.localizedDescription);       

                                NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
                                NSNumber* value = [NSNumber numberWithUnsignedInt: kCVPixelFormatType_420YpCbCr8Planar];

                                NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key]; 

                                AVAssetReaderTrackOutput* output = [AVAssetReaderTrackOutput 
                                                         assetReaderTrackOutputWithTrack:videoTrack 
                                                         outputSettings:videoSettings];
                                output.alwaysCopiesSampleData = NO;

                                [_movieReader addOutput:output];

                                if ([_movieReader startReading])
                                {
                                    NSLog(@"reading started");

                                    [self readNextMovieFrame];
                                }
                                else
                                {
                                    NSLog(@"reading can't be started");
                                }
                            }
                        });
     }];
}


- (void) readNextMovieFrame
{
    //NSLog(@"readNextMovieFrame called");
    if (_movieReader.status == AVAssetReaderStatusReading)
    {
        //NSLog(@"status is reading");

        AVAssetReaderTrackOutput * output = [_movieReader.outputs objectAtIndex:0];
        CMSampleBufferRef sampleBuffer = [output copyNextSampleBuffer];
        if (sampleBuffer)
        { // I'm guessing this is the expensive part that we can skip if we want to skip frames
            CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 

            // Lock the image buffer
            CVPixelBufferLockBaseAddress(imageBuffer,0); 

            // Get information of the image
            uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
            size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
            size_t width = CVPixelBufferGetWidth(imageBuffer);
            size_t height = CVPixelBufferGetHeight(imageBuffer); 

            // do my pixel analysis

            // Unlock the image buffer
            CVPixelBufferUnlockBaseAddress(imageBuffer,0);
            CFRelease(sampleBuffer);

            [self readNextMovieFrame];
        }
        else
        {
            NSLog(@"could not copy next sample buffer. status is %d", _movieReader.status);

            NSTimeInterval scanDuration = -[startTime timeIntervalSinceNow];

            float scanMultiplier = videoDuration / scanDuration;

            NSString* info = [NSString stringWithFormat:@"Done\n\nvideo duration: %f seconds\nscan duration: %f seconds\nmultiplier: %f", videoDuration, scanDuration, scanMultiplier];

            [self performSelectorOnMainThread:@selector(updateInfo:) withObject:info waitUntilDone:YES];
        }
    }
    else
    {
        NSLog(@"status is now %ld", (long)_movieReader.status);
    }
}


- (void) updateInfo:(id)message
{
    NSString* info = [NSString stringWithFormat:@"%@", message];

    [infoTextView setText:info];
}

1 Answer:

Answer 0 (score: 1)

If you don't need frame-accurate processing (i.e. not frame-by-frame), you should use AVAssetImageGenerator.

This class returns the frame for whatever time you ask for.

Specifically, build an array of times spanning the clip's duration, spaced 0.5 seconds apart (iPhone movies run at roughly 29.3 fps, so taking every 15th frame works out to about one frame every half second), and have the image generator return those frames.

For each frame you get back, you can compare the time you requested with the frame's actual time. By default it has a tolerance of about 0.5 s around the time you asked for, but you can change that by setting two properties:

requestedTimeToleranceBefore and requestedTimeToleranceAfter
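
A minimal sketch of that approach, reusing asset and videoDuration from the question's code (the 0.5 s spacing and the tolerance values are just examples):

AVAssetImageGenerator * generator = [AVAssetImageGenerator assetImageGeneratorWithAsset:asset];

// Optional: tighten the tolerances if you need frames closer to the
// requested times (tighter tolerances cost more decoding work).
generator.requestedTimeToleranceBefore = CMTimeMakeWithSeconds(0.1, 600);
generator.requestedTimeToleranceAfter  = CMTimeMakeWithSeconds(0.1, 600);

// One request every 0.5 s is roughly every 15th frame at ~30 fps.
NSMutableArray * times = [NSMutableArray array];
for (Float64 t = 0; t < videoDuration; t += 0.5)
    [times addObject:[NSValue valueWithCMTime:CMTimeMakeWithSeconds(t, 600)]];

[generator generateCGImagesAsynchronouslyForTimes:times
                                completionHandler:^(CMTime requestedTime,
                                                    CGImageRef image,
                                                    CMTime actualTime,
                                                    AVAssetImageGeneratorResult result,
                                                    NSError * error)
{
    if (result == AVAssetImageGeneratorSucceeded)
    {
        // compare requestedTime with actualTime, then run the
        // pixel analysis on `image`
    }
}];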

I hope that answers your question. Good luck!