I'm trying to step through every frame in an AVAsset and process each frame as if it were an image. I haven't been able to find anything from my searches.
What I'm trying to accomplish looks like this in pseudocode:

for each frame in asset
    take the frame as an image and convert to a cvMat
    process and store data of center points
    store center points in array

The only part of the pseudocode I don't know how to write is stepping through each frame and capturing it as an image.

Can anyone help?
Answer 0 (score: 4)
One answer is to use AVAssetImageGenerator.

1) Load the movie file into an AVAsset object.
2) Create an AVAssetImageGenerator object.
3) Pass in an estimated time for the frame you want to get back from the movie as an image.

Setting the two properties requestedTimeToleranceBefore and requestedTimeToleranceAfter on the AVAssetImageGenerator object to kCMTimeZero will improve the accuracy of retrieving individual frames, but increases the processing time.

This method is slow, however, and I haven't found a faster way.
// Load the movie from a URL
self.movieAsset = [AVAsset assetWithURL:self.movieURL];
NSArray *movieTracks = [self.movieAsset tracksWithMediaType:AVMediaTypeVideo];
AVAssetTrack *movieTrack = [movieTracks objectAtIndex:0];

// Make the image generator
AVAssetImageGenerator *imageGenerator = [[AVAssetImageGenerator alloc] initWithAsset:self.movieAsset];

// Create variables for the time estimation
Float64 durationSeconds = CMTimeGetSeconds(self.movieAsset.duration);
Float64 timePerFrame = 1.0 / (Float64)movieTrack.nominalFrameRate;
Float64 totalFrames = durationSeconds * movieTrack.nominalFrameRate;

// Step through the frames (frame indices run 0 .. totalFrames - 1)
for (int counter = 0; counter < totalFrames; counter++) {
    CMTime actualTime;
    Float64 secondsIn = ((Float64)counter / totalFrames) * durationSeconds;
    CMTime imageTimeEstimate = CMTimeMakeWithSeconds(secondsIn, 600);
    NSError *error;
    CGImageRef image = [imageGenerator copyCGImageAtTime:imageTimeEstimate actualTime:&actualTime error:&error];
    if (image != NULL) {
        // ... do some processing on the image
        CGImageRelease(image);
    }
}
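In Swift, the same approach with the tolerance properties from the answer actually applied looks roughly like this. This is a sketch, not the original answerer's code: movieURL is a placeholder, and frameOffsets and extractFrames are illustrative helper names.

```swift
import AVFoundation

// Seconds offset of each frame, given the duration and nominal frame rate.
// Kept pure so the stepping logic is easy to verify on its own.
func frameOffsets(durationSeconds: Double, frameRate: Double) -> [Double] {
    let totalFrames = Int(durationSeconds * frameRate)
    return (0..<totalFrames).map { Double($0) / frameRate }
}

func extractFrames(from movieURL: URL) -> [CGImage] {
    let asset = AVAsset(url: movieURL)
    guard let track = asset.tracks(withMediaType: .video).first else { return [] }

    let generator = AVAssetImageGenerator(asset: asset)
    // Zero tolerance: ask for the exact frame (more accurate but slower,
    // as the answer notes).
    generator.requestedTimeToleranceBefore = .zero
    generator.requestedTimeToleranceAfter = .zero

    var images: [CGImage] = []
    let duration = CMTimeGetSeconds(asset.duration)
    for seconds in frameOffsets(durationSeconds: duration,
                                frameRate: Double(track.nominalFrameRate)) {
        let time = CMTimeMakeWithSeconds(seconds, preferredTimescale: 600)
        // copyCGImage throws when a frame cannot be generated; skip those.
        if let image = try? generator.copyCGImage(at: time, actualTime: nil) {
            images.append(image)
        }
    }
    return images
}
```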
Answer 1 (score: 0)
You can simply use AVAssetReaderTrackOutput to produce each frame:
let asset = AVAsset(url: inputUrl)
let reader = try! AVAssetReader(asset: asset)
let videoTrack = asset.tracks(withMediaType: .video).first!

let outputSettings = [String(kCVPixelBufferPixelFormatTypeKey): NSNumber(value: kCVPixelFormatType_32BGRA)]
let trackReaderOutput = AVAssetReaderTrackOutput(track: videoTrack,
                                                 outputSettings: outputSettings)
reader.add(trackReaderOutput)
reader.startReading()

while let sampleBuffer = trackReaderOutput.copyNextSampleBuffer() {
    if let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
        // do what you want
    }
}
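To tie this back to the question's pseudocode, the "do what you want" step can collect per-frame data. The OpenCV/cvMat processing the question mentions is out of scope here, so this sketch simply records each frame's geometric center; collectCenters and frameCenter are illustrative helpers, not part of the original answer.

```swift
import AVFoundation
import CoreVideo

// Geometric center of a frame. Stands in for the question's real
// "process and store data of center points" step; kept pure so it
// is easy to test.
func frameCenter(width: Int, height: Int) -> (x: Double, y: Double) {
    return (Double(width) / 2.0, Double(height) / 2.0)
}

func collectCenters(from trackReaderOutput: AVAssetReaderTrackOutput) -> [(x: Double, y: Double)] {
    var centers: [(x: Double, y: Double)] = []
    while let sampleBuffer = trackReaderOutput.copyNextSampleBuffer() {
        guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { continue }
        // Lock before reading; required before touching the pixel data.
        CVPixelBufferLockBaseAddress(imageBuffer, .readOnly)
        let width = CVPixelBufferGetWidth(imageBuffer)
        let height = CVPixelBufferGetHeight(imageBuffer)
        centers.append(frameCenter(width: width, height: height))
        CVPixelBufferUnlockBaseAddress(imageBuffer, .readOnly)
    }
    return centers
}
```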