Saving an edited video to the camera roll in Swift 3

Date: 2017-04-15 01:01:32

Tags: video swift3 xcode8 ios10

Which function do I use to save an edited video in Swift 3? We use the following function to capture the entire screen as an image, for example:

UIGraphicsBeginImageContext(self.view.frame.size)
if let ctx = UIGraphicsGetCurrentContext() {
    self.view.layer.render(in: ctx)
    let renderedImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
}

But what do we use to save a video with some drawing on it?

Thanks! A GIF example is attached.

1 answer:

Answer 0: (score: 0)

This is a fairly complex task.

I don't want this answer to get any longer than it already is, so I will assume a couple of big things:

  • You already have the video file without the drawing
  • You already have the drawing

For video editing, iOS uses the AVFoundation framework, so you need to import it in your class. The editing function should look like this:

//Inputs are the video (AVAsset) and the image that you already have
func addOverlayTo(asset: AVAsset, overlayImage: UIImage?) {
    //this object will be our new video; it describes what will be in it
    let mixComposition = AVMutableComposition()
    //we tell our composition that there will be a video track in it
    let videoTrack = mixComposition.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: kCMPersistentTrackID_Invalid)
    //we add our video file to that track (try! is used for brevity; handle errors properly in production)
    try! videoTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, asset.duration),
                                    of: asset.tracks(withMediaType: AVMediaTypeVideo)[0],
                                    at: kCMTimeZero)
    //this object describes how to display our video
    let mainCompositionInst = AVMutableVideoComposition()
    //on iOS, videos are always stored in landscape-right orientation,
    //so to orient and size everything properly we have to look at the asset's transform property
    let size = determineRenderSize(for: asset)
    //these steps are necessary only if our video has an overlay to composite
    if let overlayImage = overlayImage {
        //create all necessary layers
        let videoLayer = CALayer()
        videoLayer.frame = CGRect(origin: CGPoint(x: 0, y: 0), size: size)
        let parentLayer = CALayer()
        parentLayer.frame = CGRect(origin: CGPoint(x: 0, y: 0), size: size)
        parentLayer.addSublayer(videoLayer)
        let overlayLayer = CALayer()
        overlayLayer.contents = overlayImage.cgImage
        overlayLayer.frame = CGRect(origin: CGPoint(x: 0, y: 0), size: size)
        parentLayer.addSublayer(overlayLayer)
        //lay out the layers properly: the video at the bottom, the drawing on top
        mainCompositionInst.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, in: parentLayer)
    }
    let mainInstruction = AVMutableVideoCompositionInstruction()
    //this object will rotate our video to the proper orientation
    let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack)
    layerInstruction.setTransform(videoTrack.preferredTransform, at: kCMTimeZero)
    mainInstruction.layerInstructions = [layerInstruction]
    mainCompositionInst.instructions = [mainInstruction]
    //now we have to fill in all the properties of our composition instruction
    //their names are quite informative, so I won't comment much
    mainCompositionInst.renderSize = size
    mainCompositionInst.renderScale = 1.0
    //assumed standard 30 fps; it's written as 20/600 because videos
    //from the built-in phone camera have a default time scale of 600
    mainCompositionInst.frameDuration = CMTimeMake(20, 600)
    mainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, asset.duration)
    //now we need to save our new video to the phone's storage
    //this object will do it
    let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality)!
    //create a path where our video will be saved
    let documentDirectory = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0]
    let outputPath = documentDirectory + "/your_file_name.mp4"
    //if a file already exists at this path, the export will fail
    if FileManager.default.fileExists(atPath: outputPath) {
        try! FileManager.default.removeItem(atPath: outputPath)
    }
    exporter.outputURL = URL(fileURLWithPath: outputPath)
    //again a bunch of parameters that have to be filled in; these are pretty standard though
    //note: the file type must match the extension, so we use MPEG-4 for the .mp4 path above
    exporter.outputFileType = AVFileTypeMPEG4
    exporter.shouldOptimizeForNetworkUse = true
    exporter.videoComposition = mainCompositionInst
    exporter.timeRange = CMTimeRangeMake(kCMTimeZero, asset.duration)
    exporter.exportAsynchronously { () -> Void in
        if exporter.error == nil && exporter.status == .completed {
            print("SAVED!")
        }
        else {
            print(exporter.error!)
        }
    }
}
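As a quick sanity check on the frameDuration value used above: a CMTime of value/timescale is the duration of one frame in seconds, so 20/600 s is 1/30 s per frame, i.e. 30 fps. The arithmetic can be verified with plain Swift, no AVFoundation needed:

```swift
// CMTimeMake(20, 600) means 20 units on a 600-per-second clock:
// 20/600 s per frame, which is 1/30 s, i.e. 30 frames per second
let frameValue = 20.0
let frameTimescale = 600.0
let secondsPerFrame = frameValue / frameTimescale
let fps = frameTimescale / frameValue
print(secondsPerFrame, fps)
```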

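One note on the output path in the code above: when joining path components with string concatenation, the "/" separator must be written explicitly. A less error-prone pattern is to build the path with URL's appendingPathComponent, which inserts the separator for you. A minimal Foundation-only sketch (the directory here is a stand-in for the real documents directory):

```swift
import Foundation

// appendingPathComponent inserts the "/" separator automatically,
// avoiding the easy-to-miss missing-separator bug of plain concatenation
let directory = URL(fileURLWithPath: "/tmp/Documents") // stand-in for the app's documents directory
let outputURL = directory.appendingPathComponent("your_file_name.mp4")
print(outputURL.path)
```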
The function that determines the render size from the video's orientation:

func determineRenderSize(for asset: AVAsset) -> CGSize {
    let videoTrack = asset.tracks(withMediaType: AVMediaTypeVideo)[0]
    let size = videoTrack.naturalSize
    let txf = videoTrack.preferredTransform
    print("transform is ", txf)
    if (size.height == txf.tx && txf.ty == 0){
        return CGSize(width: size.height, height: size.width) //portrait
    }
    else if (txf.tx == size.width && txf.ty == size.height){
        return size //landscape left
    }
    else if (txf.tx == 0 && txf.ty == size.width){
        return CGSize(width: size.height, height: size.width) //upside down
    }
    else{
        return size //landscape right
    }
}
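The orientation check above looks only at the translation components tx and ty of the track's preferred transform. Its branching can be exercised with plain numbers; the sketch below mirrors the decision table using doubles instead of a CGAffineTransform (a simplification for illustration, not the real API):

```swift
// Simplified mirror of determineRenderSize's branching.
// (width, height) plays the role of the track's naturalSize;
// (tx, ty) plays the role of preferredTransform's translation.
func renderSize(width: Double, height: Double, tx: Double, ty: Double) -> (Double, Double) {
    if tx == height && ty == 0 {
        return (height, width)   // portrait: swap dimensions
    } else if tx == width && ty == height {
        return (width, height)   // landscape left
    } else if tx == 0 && ty == width {
        return (height, width)   // upside down: swap dimensions
    } else {
        return (width, height)   // landscape right (identity transform)
    }
}
```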

There are a lot of different parameters here, and explaining them all would take too much space, so to learn more about them I suggest reading some tutorials on video editing in iOS. Two good ones are https://www.raywenderlich.com/13418/how-to-play-record-edit-videos-in-ios and https://www.raywenderlich.com/30200/avfoundation-tutorial-adding-overlays-and-animations-to-videos