I'm trying to generate a UIImage from a video frame captured with GPUImage. I've done a lot of AVFoundation video work, but I'm not used to working with GPUImage. I subclassed GPUImageVideoCamera and added this method, but the UIImage is always nil. If anyone can tell me where I'm going horribly wrong, I'd be very grateful!
- (void)processVideoSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    [super processVideoSampleBuffer:sampleBuffer]; // let GPUImage do its own processing first

    if (!self.thumbnailGenerated)
    {
        CMTime timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
        NSLog(@"%f", (float)timestamp.value / timestamp.timescale);
        self.thumbnailGenerated = YES;

        dispatch_sync(dispatch_get_main_queue(), ^
        {
            // generate a preview frame from the last filter in the camera's filter chain
            UIImage *thumbnailImage = [UIImage imageWithCGImage:[[self.targets lastObject] newCGImageFromCurrentlyProcessedOutput]];
            NSString *pathToMovie = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/Thumbnail.png"];
            [UIImagePNGRepresentation(thumbnailImage) writeToFile:pathToMovie atomically:YES];
        });
    }
}
Answer 0 (score: 0)

I've used this code to generate a CGImageRef of the first frame, for use as a thumbnail:
NSError *error = nil;
AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:videoURL options:nil];
AVAssetImageGenerator *imageGenerator = [AVAssetImageGenerator assetImageGeneratorWithAsset:asset];
[imageGenerator setAppliesPreferredTrackTransform:YES];
CGImageRef image = [imageGenerator copyCGImageAtTime:kCMTimeZero actualTime:nil error:&error];
You can replace kCMTimeZero with an actual CMTime value to get the frame you want. After that, you'll need to convert the CGImageRef into a UIImage.
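That last conversion step might look like the following minimal sketch (note that copyCGImageAtTime:actualTime:error: follows the Core Foundation create/copy rule, so the returned image must be released):

    // Wrap the CGImageRef from AVAssetImageGenerator in a UIImage.
    UIImage *thumbnail = [UIImage imageWithCGImage:image];
    // copyCGImageAtTime:actualTime:error: returns a +1 retained image,
    // so release it once the UIImage has been created.
    CGImageRelease(image);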
Answer 1 (score: 0)

I'm not sure whether this helps, but I grab a thumbnail while the video is being processed. For that I use:
videoInput --> someMyOperations --> fileOutput
someMyOperations --> imageOutput // imageOutput is a PictureOutput()

videoInput.start() // this needs to be called!

imageOutput.saveNextFrameToUrl(coverUrl, format: .jpg) { file in
    // here goes the code for what to do with the thumbnail
    videoInput.cancel() // quite probably you want this here
}
That's a guess: I haven't seen your code, but this works for me.
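Spelled out, a pipeline along those lines might look like this sketch against GPUImage2's Swift API. The `Camera` source, `SaturationAdjustment` filter, and the session preset are assumptions here, and exact type and method names vary between library versions:

    import GPUImage

    // Hypothetical GPUImage2 pipeline: camera -> filter -> still-image output.
    let camera = try Camera(sessionPreset: .hd1280x720)
    let filter = SaturationAdjustment()
    let pictureOutput = PictureOutput()

    // Hand back each rendered frame as a UIImage.
    pictureOutput.imageAvailableCallback = { image in
        // use `image` as the thumbnail, then detach the output
        // if only a single frame is needed
    }

    camera --> filter --> pictureOutput
    camera.startCapture()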