I'm experimenting with some video editing: I sequence and mix video/audio tracks together, everything works, even some basic slow motion! :) Now I want to integrate video filters, not only into the player layer itself (otherwise I would use AVPlayerItemVideoOutput together with CIFilter), but also into the final exported video file. So I'm currently looking into "rendering" the aforementioned CIFilters into the final video, while still keeping very precise control over timing with CMTime.
Any suggestions?
Answer 0 (score: 3)
You can implement a custom compositor by adopting the AVVideoCompositing protocol; each frame is delivered to it as an AVAsynchronousVideoCompositionRequest.
// Inside startVideoCompositionRequest: of your AVVideoCompositing class.
// Note: sourceFrameByTrackID: is an instance method on the request, not a class method.
CVPixelBufferRef pixelBuffer = [request sourceFrameByTrackID:trackID];
CIImage *theImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CIImage *motionBlurredImage = [[CIFilter filterWithName:@"CIMotionBlur"
                                          keysAndValues:kCIInputImageKey, theImage, nil]
                                  valueForKey:kCIOutputImageKey];
CIContext *someCIContext = [CIContext contextWithEAGLContext:eaglContext];
[someCIContext render:motionBlurredImage toCVPixelBuffer:outputBuffer];
Then render the pixel buffer using OpenGL as described in Apple's documentation. This lets you implement as many transitions or filters as you need.
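For reference, here is a minimal sketch of such a compositor in Swift. It assumes a single video track; the FilterCompositor name and the fixed blur radius are illustrative, not part of any Apple API:

import AVFoundation
import CoreImage

// A minimal sketch, assuming a single video track.
final class FilterCompositor: NSObject, AVVideoCompositing {
    private let ciContext = CIContext() // create once and reuse

    var sourcePixelBufferAttributes: [String: Any]? {
        [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
    }
    var requiredPixelBufferAttributesForRenderContext: [String: Any] {
        [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
    }

    func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) {
        // React to render-size or transform changes here if needed.
    }

    func startRequest(_ request: AVAsynchronousVideoCompositionRequest) {
        guard let trackID = request.sourceTrackIDs.first?.int32Value,
              let sourceBuffer = request.sourceFrame(byTrackID: trackID),
              let outputBuffer = request.renderContext.newPixelBuffer() else {
            request.finish(with: NSError(domain: "FilterCompositor", code: -1))
            return
        }
        // Filter the source frame and render it into the output buffer.
        let filtered = CIImage(cvPixelBuffer: sourceBuffer)
            .applyingFilter("CIMotionBlur", parameters: [kCIInputRadiusKey: 20])
        ciContext.render(filtered, to: outputBuffer)
        request.finish(withComposedVideoFrame: outputBuffer)
    }
}

You then set customVideoCompositorClass on your video composition to this type and attach the composition to the AVPlayerItem or AVAssetExportSession; the compositor is called for every frame, and the composition instructions' CMTime ranges give you the precise timing control you asked about.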
Answer 1 (score: 0)
A WWDC 2015 session explains how to do this. Watch from 20:32: https://developer.apple.com/videos/play/wwdc2015/510/
Export:
Step 1:
let vidComp = AVVideoComposition(asset: avAsset, applyingCIFiltersWithHandler: { request in
    // Clamp so the blur can sample beyond the frame edge, then crop back.
    var filtered = request.sourceImage.clampedToExtent()
    filtered = filtered.applyingFilter("CIGaussianBlur",
                                       parameters: [kCIInputRadiusKey: 100])
    filtered = filtered.cropped(to: request.sourceImage.extent)
    // Passing nil lets AVFoundation use its own CIContext; pass a shared
    // context instead if you want to reuse its caches.
    request.finish(with: filtered, context: nil)
})
Step 2:
guard let export = AVAssetExportSession(asset: avAsset,
                                        presetName: AVAssetExportPreset1920x1080) else { return }
export.outputFileType = .mov
export.outputURL = outURL
export.videoComposition = vidComp
// Remove any file left over from a previous export.
try? FileManager.default.removeItem(at: outURL)
export.exportAsynchronously {
    // Check export.status and export.error here.
}
Playback:
Step 1:
let vidComp = AVVideoComposition(asset: avAsset,
applyingCIFiltersWithHandler: {
// same as earlier example
})
Step 2:
let playerItem = AVPlayerItem(asset: avAsset)
playerItem.videoComposition = vidComp
let player = AVPlayer(playerItem: playerItem)
player.play()
Jonathan's answer is also correct. However, Apple has since deprecated OpenGL. Below is the same code in Swift using Metal:
let theImage = CIImage(cvImageBuffer: foregroundPixelBuffer)
let blurFilter = CIFilter(name: "CIMotionBlur")
blurFilter?.setValue(theImage, forKey: kCIInputImageKey)
if let destinationImage = blurFilter?.outputImage {
    context?.render(destinationImage, to: outputBuffer)
}
The context should be declared as follows:
context = CIContext(mtlDevice: device)
and the device like this:
// Ask for the default Metal device; this represents our GPU.
guard let defaultMetalDevice = MTLCreateSystemDefaultDevice() else {
print("Metal is not supported on this device.")
return nil
}
device = defaultMetalDevice
The context and device instances should be created once and then reused, so you benefit from their internal caching.
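Putting those pieces together, here is a minimal sketch of a reusable renderer; the FilterRenderer name and the fixed blur radius are illustrative assumptions, not part of any Apple API:

import CoreImage
import CoreVideo
import Metal

// A minimal sketch: create the Metal device and CIContext once, then reuse them.
final class FilterRenderer {
    let device: MTLDevice
    let context: CIContext

    init?() {
        // Ask for the default Metal device; this represents our GPU.
        guard let defaultMetalDevice = MTLCreateSystemDefaultDevice() else {
            print("Metal is not supported on this device.")
            return nil
        }
        device = defaultMetalDevice
        context = CIContext(mtlDevice: device)
    }

    // Apply CIMotionBlur to a source buffer and render into a destination buffer.
    func renderMotionBlur(from source: CVPixelBuffer, to destination: CVPixelBuffer) {
        let filtered = CIImage(cvImageBuffer: source)
            .applyingFilter("CIMotionBlur", parameters: [kCIInputRadiusKey: 20])
        context.render(filtered, to: destination)
    }
}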