The goal is to overlay an image on top of a video, but the image gets pixelated when using AVVideoCompositionCoreAnimationTool.
The image is 640x1136 and the video exports at 320x568 (simulating a 5S device), so the image should scale down cleanly. The image itself is sharp, yet it becomes pixelated during the export.
Using renderScale on the AVMutableVideoComposition does not help, because AVAssetExportSession throws an exception if the value is anything other than 1.0.
Setting contentsGravity on the layer that holds the image appears to have no effect.
The goal is for the user to record a video and then draw on it (the image represents the user's drawing). Ultimately, the exported video should match what the user saw in the video preview and what the user drew, at the same quality and size. This question is specifically about the pixelation of the overlaid image.
Any help?
// Create main composition & its tracks
let mainComposition = AVMutableComposition()
let compositionVideoTrack = mainComposition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))
let compositionAudioTrack = mainComposition.addMutableTrackWithMediaType(AVMediaTypeAudio, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))
// Get source video & audio tracks
let videoURL = NSURL(fileURLWithPath: videoURL)
let videoAsset = AVURLAsset(URL: videoURL, options: nil)
let sourceVideoTrack = videoAsset.tracksWithMediaType(AVMediaTypeVideo)[0]
let sourceAudioTrack = videoAsset.tracksWithMediaType(AVMediaTypeAudio)[0]
// Add source tracks to composition
do {
try compositionVideoTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoAsset.duration), ofTrack: sourceVideoTrack, atTime: kCMTimeZero)
try compositionAudioTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoAsset.duration), ofTrack: sourceAudioTrack, atTime: kCMTimeZero)
} catch {
print("Error with insertTimeRange while exporting video: \(error)")
}
// Create video composition
let videoComposition = AVMutableVideoComposition()
print("Video composition duration: \(CMTimeGetSeconds(mainComposition.duration))")
// -- Set parent layer & set size equal to device bounds
let parentLayer = CALayer()
parentLayer.frame = CGRectMake(0, 0, view.bounds.width, view.bounds.height)
parentLayer.backgroundColor = UIColor.redColor().CGColor
parentLayer.contentsGravity = kCAGravityResizeAspectFill
// -- Set composition equal to capture settings
videoComposition.renderSize = CGSize(width: view.bounds.width, height: view.bounds.height)
videoComposition.frameDuration = CMTimeMake(1, Int32(frameRate))
// -- Add instruction to video composition object
let instruction = AVMutableVideoCompositionInstruction()
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, compositionVideoTrack.asset!.duration)
let videoLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: compositionVideoTrack)
instruction.layerInstructions = [videoLayerInstruction]
videoComposition.instructions = [instruction]
// -- Create video layer
let videoLayer = CALayer()
videoLayer.frame = parentLayer.frame
videoLayer.contentsGravity = kCAGravityResizeAspectFill
// -- Create overlay layer
let overlayLayer = CALayer()
overlayLayer.frame = parentLayer.frame
overlayLayer.contentsGravity = kCAGravityResizeAspectFill
overlayLayer.contents = overlayImage!.CGImage
overlayLayer.contentsScale = overlayImage!.scale
// -- Add sublayers to parent layer
parentLayer.addSublayer(videoLayer)
parentLayer.addSublayer(overlayLayer)
//overlayLayer.shouldRasterize = true
// -- Set animation tool
videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, inLayer: parentLayer)
// Create exporter
let outputURL = getFilePath(getUniqueFilename(gMP4File))
let exporter = AVAssetExportSession(asset: mainComposition, presetName: AVAssetExportPresetHighestQuality)!
exporter.outputURL = NSURL(fileURLWithPath: outputURL)
exporter.outputFileType = AVFileTypeMPEG4
exporter.videoComposition = videoComposition
exporter.shouldOptimizeForNetworkUse = true
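(The snippet above stops before the export is actually started. For completeness, a minimal sketch of that missing step in the same Swift 2 style; the completion handler and the dispatch back to the main queue are assumptions, not part of the original code.)
// Start the export and report the result when it finishes
exporter.exportAsynchronouslyWithCompletionHandler {
    dispatch_async(dispatch_get_main_queue()) {
        switch exporter.status {
        case .Completed:
            print("Export finished: \(exporter.outputURL)")
        case .Failed, .Cancelled:
            print("Export failed: \(exporter.error)")
        default:
            break
        }
    }
}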
Answer (score: 2)
After many tests with rasterizationScale and contentsScale, setting the two in combination helped the most, but the lines are still not as sharp as in the original.
Hopefully someone will find an answer for how to keep the original image sharpness when merging it with the video.
Note that if you use rasterizationScale, you may also need to set shouldRasterize.
The tests were run at the device scale (e.g., 2.0 for a 5S) and at 2x the device scale (e.g., 4.0 for a 5S). The 2x value was seen used elsewhere, so it was worth trying even though its effect was unclear. A sketch of the resulting configuration follows the list below.
contentsScale 2.0: straight lines are sharp, but circles contain artifacts.
contentsScale 4.0: straight lines are OK, though not as sharp as with 2.0, and circles contain fewer artifacts. Overall a better image.
rasterizationScale 2.0: straight lines are fine, but rounded areas (e.g., in the letter "R") are terrible.
rasterizationScale 4.0: straight lines are not as sharp, but rounded areas are better.
rasterizationScale + contentsScale 2.0: the best compromise, though the lines are still not as sharp as in the original image.
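For reference, a minimal sketch of that "best compromise" applied to the overlay layer from the question; the 2.0 value corresponds to the device scale of a 5S (assumed here, not a universal constant):
// Combine contentsScale and rasterizationScale on the overlay layer
overlayLayer.contentsScale = 2.0        // device scale, i.e. UIScreen.mainScreen().scale on a 5S
overlayLayer.rasterizationScale = 2.0   // only takes effect when shouldRasterize is true
overlayLayer.shouldRasterize = true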