Scale and crop video to fit a CALayer at the correct size

Time: 2020-05-20 19:39:22

Tags: swift cgaffinetransform avmutablecomposition avvideocomposition

I am trying to create a video composition from a video, an image overlay larger than the video, and two other images (as titles). To keep things simple, I have colored all the important layers: the two title images are green and blue, the background of the parentLayer is orange, the background of the videoLayer is red, and the test video has a yellow square in the middle (a square, so the aspect ratio can be monitored). I have searched high and low and cannot find a working solution for filling the black part of the videoLayer with the video at the correct resolution. The best solution I have found is the answer posted in AVFoundation: Fit Video to CALayer correctly when exporting.

Below is the code I adapted from the solution linked above. I have also included screenshots showing the different results produced by parameter changes. All images and layers other than the videoLayer are scaled and positioned correctly in the exported video file.

Code:


let widthRatio : CGFloat = 0.84583 
let horizontalOffset : CGFloat = 0.07667
let titleHeightOffset : CGFloat = 0.00937
let textHeightRatio : CGFloat = 0.13111
let thumbTitleHeightOffset : CGFloat = 0.15817
let thumbTitleHeightRatio : CGFloat = 0.68574
let captionTitleHeightOffset : CGFloat = 0.85952

func overlayVideo(titleImage: UIImage, captionImage: UIImage, videoURL: URL) {

        let composition = AVMutableComposition()
        let videoAsset = AVURLAsset(url: videoURL)
        let videoTrack = videoAsset.tracks(withMediaType: .video)[0]
        let videoTimeRange = CMTimeRangeMake(start: .zero, duration: videoAsset.duration)
        let compositionVideoTrack = composition.addMutableTrack(withMediaType: .video,
                                                                preferredTrackID: kCMPersistentTrackID_Invalid)!

        do {
            try compositionVideoTrack.insertTimeRange(videoTimeRange, of: videoTrack, at: .zero)
        } catch {
            print("Failed to insert video track: \(error)")
            return
        }

        compositionVideoTrack.preferredTransform = videoTrack.preferredTransform

        let filmImage : UIImage = UIImage(named: "FilmTitle")!
        let size = filmImage.size
        let titleImageLayer = CALayer()
        titleImageLayer.backgroundColor = UIColor.blue.cgColor
        titleImageLayer.frame = CGRect(x: size.width * horizontalOffset, y: size.height * titleHeightOffset, width: size.width * widthRatio, height: size.height * textHeightRatio)
        titleImageLayer.opacity = 1.0

        let captionImageLayer = CALayer()
        captionImageLayer.backgroundColor = UIColor.green.cgColor
        captionImageLayer.frame = CGRect(x: size.width * horizontalOffset, y: size.height * captionTitleHeightOffset, width: size.width * widthRatio, height: size.height * textHeightRatio)
        captionImageLayer.opacity = 1.0

        let videoLayer = CALayer()
        videoLayer.frame = CGRect(x: size.width * horizontalOffset, y: size.height * thumbTitleHeightOffset, width: size.width * widthRatio, height: size.height * thumbTitleHeightRatio)
        videoLayer.backgroundColor = UIColor.red.cgColor
        videoLayer.opacity = 1.0

        let parentLayer = CALayer()
        parentLayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
        parentLayer.addSublayer(videoLayer)
        parentLayer.addSublayer(titleImageLayer)
        parentLayer.addSublayer(captionImageLayer)
        parentLayer.backgroundColor = UIColor.orange.cgColor

        let layerComposition = AVMutableVideoComposition()
        layerComposition.frameDuration = CMTimeMake(value: 1, timescale: 30)
        layerComposition.renderSize = size

        // instruction for video
        let instruction = AVMutableVideoCompositionInstruction()
        instruction.timeRange = CMTimeRangeMake(start: .zero, duration: composition.duration)
        let videotrack = composition.tracks(withMediaType: .video)[0]

        //MAIN SOLUTION CODE FROM STACKOVERFLOW
        let bugFixTransform = CGAffineTransform(scaleX: videoLayer.frame.width / videotrack.naturalSize.width,
                                                y: videoLayer.frame.height / videotrack.naturalSize.height)

        let layerinstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videotrack)
        layerinstruction.setTransform(bugFixTransform, at: .zero)
        instruction.layerInstructions = [layerinstruction]
        layerComposition.instructions = [instruction]

        layerComposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, in: parentLayer)

        print("Layer composition resolution = \(layerComposition.renderSize) \(layerComposition.renderSize.height) \(layerComposition.renderSize.width)")

     //MORE CODE FOR EXPORT-NO ISSUES/NOT RELEVANT
    }

As you can see in the screenshot, this code results in the video being stretched too wide (the yellow box is stretched) and not completely filling the videoLayer. Ideally, I want the video resized to aspect-fill the layer, centered, with the edges of the video cropped off. (Any video aspect ratio should work, but I would be happy with a solution for just this particular video. I am clearly misunderstanding something.)

Resulting Video layout from original code

If I change scaleY in the bugFixTransform so that scaleY/scaleX = 1.181818, the aspect ratio of the video inside the videoLayer is correct (the yellow box is a square), but I arrived at that value by trial and error rather than computing it in code. See below:

scaleY/scaleX = 1.181818
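One possible derivation of where such a ratio might come from (an assumption I have not verified): if the animation tool stretches the rendered renderSize frame into videoLayer.frame, then the bugFixTransform scale factors get distorted a second time, by layerWidth/renderWidth horizontally and layerHeight/renderHeight vertically. A square then stays square only when scaleX * layerW / renderW == scaleY * layerH / renderH, which pins down the required ratio. A minimal sketch (the helper name is mine):

```swift
import Foundation

// Hypothetical derivation, assuming the animation tool stretches the
// renderSize frame into videoLayer.frame: a square survives only when
//   scaleX * layerW / renderW == scaleY * layerH / renderH
// so the required scaleY/scaleX ratio is:
func requiredScaleRatio(layerSize: CGSize, renderSize: CGSize) -> CGFloat {
    return (layerSize.width * renderSize.height) / (layerSize.height * renderSize.width)
}
```

With the constants above this reduces to widthRatio / thumbTitleHeightRatio ≈ 1.233 regardless of the image size, which is close to, but not exactly, the trial-and-error value, so this assumption may not capture everything (the track's preferredTransform, for example).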

Any help deriving the scaling, positioning (centering), and cropping from existing parameters and objects (rather than hard-coded constants) would be greatly appreciated. Thanks!
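For reference, here is the kind of derivation I am after, assuming (unverified) that the animation tool stretches the rendered renderSize frame into videoLayer.frame, so the transform must pre-compensate for that per-axis stretch, scale uniformly to aspect-fill, and center. This is only a sketch; the helper name is my own:

```swift
import Foundation

/// Sketch (unverified assumption): the animation tool stretches the rendered
/// frame (renderSize) into videoLayer.frame, so this transform pre-compensates
/// for that per-axis stretch, scales uniformly to aspect-fill, and centers.
/// Anything that overflows is cropped at export time.
func aspectFillTransform(videoSize: CGSize, layerSize: CGSize, renderSize: CGSize) -> CGAffineTransform {
    // Uniform scale that aspect-fills the layer with the video.
    let fillScale = max(layerSize.width / videoSize.width,
                        layerSize.height / videoSize.height)
    // Pre-compensate for the render-to-layer stretch on each axis.
    let sx = fillScale * renderSize.width / layerSize.width
    let sy = fillScale * renderSize.height / layerSize.height
    // Center the scaled video in render space; edges outside are cropped.
    let tx = (renderSize.width - videoSize.width * sx) / 2
    let ty = (renderSize.height - videoSize.height * sy) / 2
    return CGAffineTransform(translationX: tx, y: ty).scaledBy(x: sx, y: sy)
}
```

In overlayVideo this would replace bugFixTransform, e.g. `aspectFillTransform(videoSize: videotrack.naturalSize, layerSize: videoLayer.frame.size, renderSize: layerComposition.renderSize)` passed to `layerinstruction.setTransform(_:at:)` — but again, only if the stretching assumption holds.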

0 Answers:

No answers yet