Video gravity in a CALayer with AVAssetExportSession

Date: 2017-06-10 08:05:26

Tags: ios swift ipad video swift3

My app first records a video and then, after some effects are added, exports the output using AVAssetExportSession.

The first video-gravity problem appeared during recording, and was solved by setting the videoGravity property of the AVCaptureVideoPreviewLayer to AVLayerVideoGravityResizeAspectFill.
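For reference, the preview-layer fix described above looks roughly like this (a sketch; `session` and `previewContainer` are assumed to exist elsewhere in the app):

```swift
import AVFoundation
import UIKit

// Sketch of the recording fix: make the capture preview fill its layer.
// `session` (AVCaptureSession) and `previewContainer` (UIView) are
// illustrative names, not from the original code.
let previewLayer = AVCaptureVideoPreviewLayer(session: session)
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
previewLayer.frame = previewContainer.bounds
previewContainer.layer.addSublayer(previewLayer)
```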

The second problem was displaying the recorded video, which was solved by setting the videoGravity property of the AVPlayerLayer to AVLayerVideoGravityResizeAspectFill.
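The playback fix above can be sketched in the same way (again illustrative; `videoUrl` and `playerContainer` are assumed to exist elsewhere):

```swift
import AVFoundation
import UIKit

// Sketch of the playback fix: make the player layer fill its container.
// `videoUrl` (URL) and `playerContainer` (UIView) are illustrative names.
let player = AVPlayer(url: videoUrl)
let playerLayer = AVPlayerLayer(player: player)
playerLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
playerLayer.frame = playerContainer.bounds
playerContainer.layer.addSublayer(playerLayer)
player.play()
```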

The problem now is that when I export the video with AVAssetExportSession after adding some effects, the video-gravity issues come back. Even changing the contentsGravity property of the CALayer has no effect on the output. I should mention that the problem is most noticeable on iPad.

This is an image of how I want the video displayed, before adding any effects:

As you can see, my fingertip is at the top of the screen (because I have already fixed the layer gravity inside the app).

But after exporting and saving to the gallery, this is what I see instead:

I know the problem is the gravity, but I don't know how to fix it. Here is my export code, and no matter what I log or change in it, I can't figure out what I should modify:

    let composition = AVMutableComposition()
    let asset = AVURLAsset(url: videoUrl, options: nil)

    let tracks = asset.tracks(withMediaType: AVMediaTypeVideo)
    let videoTrack: AVAssetTrack = tracks[0]
    let timerange = CMTimeRangeMake(kCMTimeZero, asset.duration)

    let viewSize = parentView.bounds.size
    let trackSize = videoTrack.naturalSize

    let compositionVideoTrack:AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: CMPersistentTrackID())

    do {
        try compositionVideoTrack.insertTimeRange(timerange, of: videoTrack, at: kCMTimeZero)
    } catch {
        print(error)
    }

    let compositionAudioTrack:AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaTypeAudio, preferredTrackID: CMPersistentTrackID())

    for audioTrack in asset.tracks(withMediaType: AVMediaTypeAudio) {
        do {
            try compositionAudioTrack.insertTimeRange(audioTrack.timeRange, of: audioTrack, at: kCMTimeZero)
        } catch {
            print(error)
        }
    }

    let videolayer = CALayer()
    videolayer.frame.size = viewSize
    videolayer.contentsGravity = kCAGravityResizeAspectFill

    let parentlayer = CALayer()
    parentlayer.frame.size = viewSize
    parentlayer.contentsGravity = kCAGravityResizeAspectFill

    parentlayer.addSublayer(videolayer)

    let layercomposition = AVMutableVideoComposition()
    layercomposition.frameDuration = CMTimeMake(1, 30)
    layercomposition.renderSize = viewSize
    layercomposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videolayer, in: parentlayer)

    let instruction = AVMutableVideoCompositionInstruction()

    instruction.timeRange = CMTimeRangeMake(kCMTimeZero, asset.duration)

    let videotrack = composition.tracks(withMediaType: AVMediaTypeVideo)[0] as AVAssetTrack
    let layerinstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videotrack)

    let trackTransform = videoTrack.preferredTransform
    let xScale = viewSize.height / trackSize.width
    let yScale = viewSize.width / trackSize.height

    var exportTransform : CGAffineTransform!
    if (getVideoOrientation(transform: videoTrack.preferredTransform).1 == .up) {
        exportTransform = videoTrack.preferredTransform.translatedBy(x: trackTransform.ty * -1 , y: 0).scaledBy(x: xScale , y: yScale)
    } else {
        exportTransform = CGAffineTransform.init(translationX: viewSize.width, y: 0).rotated(by: .pi/2).scaledBy(x: xScale, y: yScale)
    }

    layerinstruction.setTransform(exportTransform, at: kCMTimeZero)

    instruction.layerInstructions = [layerinstruction]
    layercomposition.instructions = [instruction]

    let filePath = FileHelper.getVideoTimeStampName()
    let exportedUrl = URL(fileURLWithPath: filePath)

    guard let assetExport = AVAssetExportSession(asset: composition, presetName:AVAssetExportPresetHighestQuality) else {delegate?.exportFinished(status: .failed, outputUrl: exportedUrl); return}

    assetExport.videoComposition = layercomposition
    assetExport.outputFileType = AVFileTypeMPEG4
    assetExport.outputURL = exportedUrl
    assetExport.exportAsynchronously(completionHandler: {
        switch assetExport.status {
        case .completed:
            print("video exported successfully")
            self.delegate?.exportFinished(status: .completed, outputUrl: exportedUrl)
            break
        case .failed:
            self.delegate?.exportFinished(status: .failed, outputUrl: exportedUrl)
            print("exporting video failed: \(String(describing: assetExport.error))")
            break
        default:
            print("the video export status is \(assetExport.status)")
            self.delegate?.exportFinished(status: assetExport.status, outputUrl: exportedUrl)
            break
        }
    })

I would really appreciate any help.

1 answer:

Answer 0 (score: 1)

When you use AVLayerVideoGravityResizeAspectFill, the capture preview resizes itself to fill the CALayer, so what is really happening is that the camera is actually capturing the second image you posted. You can work around this with the following steps:

  1. Grab the frame as a UIImage
  2. Crop the image to the same size as the CALayer you are using
  3. Upload the cropped image to your server, display it to the user, and so on
  4. To crop the image, you can use:

    extension UIImage {
        func crop(to: CGSize) -> UIImage {
            guard let cgimage = self.cgImage else { return self }

            let contextImage: UIImage = UIImage(cgImage: cgimage)
            let contextSize: CGSize = contextImage.size

            // Compute an aspect-fill crop rect centered in the source image
            var posX: CGFloat = 0.0
            var posY: CGFloat = 0.0
            let cropAspect: CGFloat = to.width / to.height

            var cropWidth: CGFloat = to.width
            var cropHeight: CGFloat = to.height

            if to.width > to.height { // Landscape target
                cropWidth = contextSize.width
                cropHeight = contextSize.width / cropAspect
                posY = (contextSize.height - cropHeight) / 2
            } else if to.width < to.height { // Portrait target
                cropHeight = contextSize.height
                cropWidth = contextSize.height * cropAspect
                posX = (contextSize.width - cropWidth) / 2
            } else { // Square target
                if contextSize.width >= contextSize.height { // Square crop from a landscape (or square) source
                    cropHeight = contextSize.height
                    cropWidth = contextSize.height * cropAspect
                    posX = (contextSize.width - cropWidth) / 2
                } else { // Square crop from a portrait source
                    cropWidth = contextSize.width
                    cropHeight = contextSize.width / cropAspect
                    posY = (contextSize.height - cropHeight) / 2
                }
            }

            let rect = CGRect(x: posX, y: posY, width: cropWidth, height: cropHeight)

            // Create a bitmap image from the source using the crop rect
            guard let imageRef = contextImage.cgImage?.cropping(to: rect) else { return self }

            // Build a new image from imageRef, preserving the original orientation
            let cropped = UIImage(cgImage: imageRef, scale: self.scale, orientation: self.imageOrientation)

            // Redraw at the requested size
            UIGraphicsBeginImageContextWithOptions(to, true, self.scale)
            cropped.draw(in: CGRect(x: 0, y: 0, width: to.width, height: to.height))
            let resized = UIGraphicsGetImageFromCurrentImageContext()
            UIGraphicsEndImageContext()

            return resized ?? cropped
        }
    }
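A hypothetical call site for the extension above (the image name and layer size are illustrative, not from the question):

```swift
import UIKit

// Hypothetical usage of UIImage.crop(to:); names here are illustrative.
let layerSize = CGSize(width: 768, height: 1024) // size of the CALayer being matched
if let frame = UIImage(named: "capturedFrame") {
    let cropped = frame.crop(to: layerSize)
    // display `cropped`, upload it, etc.
}
```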