I've just started learning the AVFoundation framework. When exporting a video, I scale the video composition's render size down to the screen size (UIScreen.main.bounds) before exporting, because the natural size can be very large and, I assumed, unnecessary. (At least, this scale-to-fit factor is also what the Ray Wenderlich tutorial does: https://www.raywenderlich.com/5135-how-to-play-record-and-merge-videos-in-ios-and-swift; see the code below.) However, I found that this scaling actually degrades the quality of the exported video compared to using the natural video size as the render size. Why is that? I understand that scaling a video up to a larger size loses quality, but if we're only scaling it down to a smaller size (the screen size), why does the quality get worse?
Is there a way to keep the quality sharp without having to keep the render size at its natural size?
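For context, here is roughly where the downscaling happens in my setup; this is a minimal sketch of the AVMutableVideoComposition configuration, not my exact export code:

```swift
import AVFoundation
import UIKit

// Sketch: the render size is shrunk from the track's natural size
// (e.g. 1920x1080 pixels) to the screen bounds. Note that
// UIScreen.main.bounds is in points, not pixels (e.g. 375x667).
let videoComposition = AVMutableVideoComposition()
videoComposition.renderSize = UIScreen.main.bounds.size   // <- the downscale in question
videoComposition.frameDuration = CMTime(value: 1, timescale: 30)
```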
static func videoCompositionInstruction(_ track: AVCompositionTrack, asset: AVAsset)
    -> AVMutableVideoCompositionLayerInstruction {
    let instruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track)
    let assetTrack = asset.tracks(withMediaType: .video)[0]

    let transform = assetTrack.preferredTransform
    let assetInfo = orientationFromTransform(transform)

    // Scale so the video fills the screen width.
    var scaleToFitRatio = UIScreen.main.bounds.width / assetTrack.naturalSize.width
    if assetInfo.isPortrait {
        // Portrait footage stores its natural size pre-rotation, so fit by height.
        scaleToFitRatio = UIScreen.main.bounds.width / assetTrack.naturalSize.height
        let scaleFactor = CGAffineTransform(scaleX: scaleToFitRatio, y: scaleToFitRatio)
        instruction.setTransform(assetTrack.preferredTransform.concatenating(scaleFactor), at: .zero)
    } else {
        let scaleFactor = CGAffineTransform(scaleX: scaleToFitRatio, y: scaleToFitRatio)
        var concat = assetTrack.preferredTransform
            .concatenating(scaleFactor)
            .concatenating(CGAffineTransform(translationX: 0, y: UIScreen.main.bounds.width / 2))
        if assetInfo.orientation == .down {
            // Rotate upside-down footage 180 degrees and translate it back into view.
            let fixUpsideDown = CGAffineTransform(rotationAngle: CGFloat(Double.pi))
            let windowBounds = UIScreen.main.bounds
            let yFix = assetTrack.naturalSize.height + windowBounds.height
            let centerFix = CGAffineTransform(translationX: assetTrack.naturalSize.width, y: yFix)
            concat = fixUpsideDown.concatenating(centerFix).concatenating(scaleFactor)
        }
        instruction.setTransform(concat, at: .zero)
    }

    return instruction
}
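For completeness, the layer instruction above gets wired into the export roughly like this (a sketch following the tutorial's structure; `composition`, `track`, and `asset` come from the earlier merging code, and the exporter setup is abbreviated):

```swift
import AVFoundation
import UIKit

// Sketch: attach the layer instruction to a video composition and export.
let mainInstruction = AVMutableVideoCompositionInstruction()
mainInstruction.timeRange = CMTimeRange(start: .zero, duration: asset.duration)
mainInstruction.layerInstructions = [videoCompositionInstruction(track, asset: asset)]

let videoComposition = AVMutableVideoComposition()
videoComposition.instructions = [mainInstruction]
videoComposition.frameDuration = CMTime(value: 1, timescale: 30)
videoComposition.renderSize = UIScreen.main.bounds.size  // the downscale in question

let exporter = AVAssetExportSession(asset: composition,
                                    presetName: AVAssetExportPresetHighestQuality)
exporter?.videoComposition = videoComposition
```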