Adding a watermark to a video is very slow

Asked: 2018-03-08 17:51:11

Tags: ios swift avfoundation avcomposition avvideocomposition

I am rendering a watermark onto a video using an AVComposition. The process takes about 15 seconds, which seems unreasonable for a 20-second video. My export settings are:

    let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality)
    exporter?.outputURL = outputPath
    exporter?.outputFileType = AVFileType.mp4
    exporter?.shouldOptimizeForNetworkUse = true
    exporter?.videoComposition = mainCompositionInst
    DispatchQueue.main.async {
        exporter?.exportAsynchronously(completionHandler: {
            if exporter?.status == AVAssetExportSessionStatus.completed {
                completion(true, exporter)
            } else {
                completion(false, exporter)
            }
        })
    }
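Before changing the watermarking approach, it is worth measuring how much of the 15 seconds is spent on re-encoding rather than on compositing. A minimal sketch (the function name and completion signature are assumptions for illustration): timing the export and trying a cheaper preset such as `AVAssetExportPreset1280x720` in place of `AVAssetExportPresetHighestQuality`.

```swift
import AVFoundation
import QuartzCore

// Hypothetical helper: exports the composition with a fixed-size preset
// and logs how long the export takes, to separate encoding cost from
// compositing cost.
func timedExport(_ mixComposition: AVAsset,
                 videoComposition: AVVideoComposition,
                 to outputPath: URL,
                 completion: @escaping (Bool, AVAssetExportSession?) -> Void) {
    // A fixed-size preset often exports noticeably faster than
    // AVAssetExportPresetHighestQuality.
    guard let exporter = AVAssetExportSession(asset: mixComposition,
                                              presetName: AVAssetExportPreset1280x720) else {
        completion(false, nil)
        return
    }
    exporter.outputURL = outputPath
    exporter.outputFileType = .mp4
    exporter.shouldOptimizeForNetworkUse = true
    exporter.videoComposition = videoComposition

    let start = CACurrentMediaTime()
    // exportAsynchronously already runs off the calling thread, so no
    // DispatchQueue.main.async wrapper is needed around it.
    exporter.exportAsynchronously {
        let elapsed = CACurrentMediaTime() - start
        print("Export finished in \(elapsed) s, status: \(exporter.status.rawValue)")
        completion(exporter.status == .completed, exporter)
    }
}
```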

This is how I add the watermark:

    //Creating image layer
    let overlayLayer = CALayer()
    let overlayImage: UIImage = image
    overlayLayer.contents = overlayImage.cgImage
    overlayLayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
    overlayLayer.contentsGravity = kCAGravityResizeAspectFill
    overlayLayer.masksToBounds = true

    //Creating parent and video layer
    let parentLayer = CALayer()
    let videoLayer = CALayer()
    parentLayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
    videoLayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
    parentLayer.addSublayer(videoLayer)
    parentLayer.addSublayer(overlayLayer)

    //Adding those layers to video
    composition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, in: parentLayer)

And this is how I finally transform the video:

    let videoLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack!)
    let videoAssetTrack = videoAsset.tracks(withMediaType: AVMediaType.video)[0]
    var videoAssetOrientation = UIImageOrientation.up
    var isVideoAssetPortrait = false
    let videoTransform = videoAssetTrack.preferredTransform

    if videoTransform.a == 0 && videoTransform.b == 1.0 && videoTransform.c == -1.0 && videoTransform.d == 0 {
        videoAssetOrientation = .right
        isVideoAssetPortrait = true
    }
    if videoTransform.a == 0 && videoTransform.b == -1.0 && videoTransform.c == 1.0 && videoTransform.d == 0 {
        videoAssetOrientation = .left
        isVideoAssetPortrait = true
    }
    if videoTransform.a == 1.0 && videoTransform.b == 0 && videoTransform.c == 0 && videoTransform.d == 1.0 {
        videoAssetOrientation = .up
    }
    if videoTransform.a == -1.0 && videoTransform.b == 0 && videoTransform.c == 0 && videoTransform.d == -1.0 {
        videoAssetOrientation = .down
    }

    videoLayerInstruction.setTransform(videoAssetTrack.preferredTransform, at: kCMTimeZero)

    //Add instructions
    mainInstruction.layerInstructions = [videoLayerInstruction]
    let mainCompositionInst = AVMutableVideoComposition()
    let naturalSize: CGSize!
    if isVideoAssetPortrait {
        naturalSize = CGSize(width: videoAssetTrack.naturalSize.height, height: videoAssetTrack.naturalSize.width)
    } else {
        naturalSize = videoAssetTrack.naturalSize
    }
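The snippet above computes `naturalSize` but stops before it is applied to the composition. A minimal continuation under the same names (assuming `mainInstruction` is an `AVMutableVideoCompositionInstruction` covering the whole clip, and a 30 fps target, which is an assumption):

```swift
    // Hypothetical continuation of the snippet above: wire the computed
    // values into the video composition before handing it to the exporter.
    mainInstruction.timeRange = CMTimeRange(start: kCMTimeZero, duration: videoAsset.duration)
    mainCompositionInst.instructions = [mainInstruction]
    mainCompositionInst.renderSize = naturalSize
    mainCompositionInst.frameDuration = CMTime(value: 1, timescale: 30) // 30 fps assumed
```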

So my question is: how can I improve the performance of merging the watermark into my video? 15 seconds is completely unacceptable for any kind of end user. On top of that, I then need to upload the video over the internet, so the loading screen would be shown in all its glory for more than roughly 20 seconds in total.

1 answer:

Answer 0 (score: 2)

According to the Apple documentation, try using the class AVAsynchronousCIImageFilteringRequest:

Overview

Use this class when you create a composition for Core Image filtering with the init(asset:applyingCIFiltersWithHandler:) method. In that method call, you provide a block to be called by AVFoundation as it processes each frame of video; the block's sole parameter is an AVAsynchronousCIImageFilteringRequest object. That object provides the video frame image to be filtered and lets you return the filtered image to AVFoundation for display or export. Listing 1 shows an example of applying a filter to an asset.

    let filter = CIFilter(name: "CIGaussianBlur")!
    let composition = AVVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in

        // Clamp to avoid blurring transparent pixels at the image edges
        let source = request.sourceImage.clampedToExtent()
        filter.setValue(source, forKey: kCIInputImageKey)

        // Vary filter parameters based on video timing
        let seconds = CMTimeGetSeconds(request.compositionTime)
        filter.setValue(seconds * 10.0, forKey: kCIInputRadiusKey)

        // Crop the blurred output to the bounds of the original image
        let output = filter.outputImage!.cropped(to: request.sourceImage.extent)

        // Provide the filter output to the composition
        request.finish(with: output, context: nil)
    })
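Apple's listing demonstrates a blur, but the same mechanism can composite a watermark: replace the Gaussian blur with CISourceOverCompositing, drawing the watermark image over each frame. A sketch under that assumption (the function name and the idea of pre-building the watermark as a CIImage are illustrative, not from the original answer):

```swift
import AVFoundation
import CoreImage

// Hypothetical helper: builds a video composition that draws `watermark`
// over every frame using CISourceOverCompositing, as an alternative to
// AVVideoCompositionCoreAnimationTool.
func watermarkComposition(for asset: AVAsset, watermark: CIImage) -> AVVideoComposition {
    return AVVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
        let filter = CIFilter(name: "CISourceOverCompositing")!
        // The watermark goes on top; the video frame is the background.
        filter.setValue(watermark, forKey: kCIInputImageKey)
        filter.setValue(request.sourceImage, forKey: kCIInputBackgroundImageKey)

        // Fall back to the unfiltered frame if the filter produces no output.
        let output = filter.outputImage ?? request.sourceImage
        request.finish(with: output.cropped(to: request.sourceImage.extent), context: nil)
    })
}
```

The resulting composition is assigned to the exporter's `videoComposition` property just like `mainCompositionInst` in the question.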

There is also a tutorial in Objective C that may be a good resource.