TLDR - SEE EDIT
I'm building a test app in Swift in which I want to use AVMutableComposition to stitch together multiple videos from my app's documents directory.
I've been somewhat successful with this: all of my videos get stitched together, and everything displays at the correct size, portrait and landscape alike. My problem, however, is that every video is shown in the orientation of the last video in the composition.
I know that to fix this I need to add a layer instruction for each track I add, but I can't seem to get it right. The answers I've found scale the whole composition to portrait, so landscape videos are simply scaled down to fit a portrait view; when I turn my phone sideways to watch the landscape videos, they stay small, because they've been scaled to portrait dimensions.
That isn't the result I'm after. I want the expected behavior: if a video is landscape, it shows scaled down when viewed in portrait, but fills the screen when the phone is rotated (just as it does when watching a landscape video in Photos); likewise for portrait videos, which should be full screen when viewed in portrait and scaled down to landscape size when the phone is turned sideways (as happens when watching a portrait video in Photos).
In short, when viewing a composition containing both landscape and portrait videos, I want to be able to watch the whole composition in landscape with the landscape videos full screen and the portrait videos scaled down, or watch the same composition in portrait with the portrait videos full screen and the landscape videos scaled down.
With all the answers I've found, that isn't what happens. Videos imported from Photos all seem to behave very unexpectedly when added to the composition, and videos shot with the front camera show the same random behavior (to be clear, in my current implementation videos imported from the library and "selfie" videos display at the correct sizes without these problems).
I'm looking for a way to rotate/scale these videos so that they always display in the correct orientation and at the correct scale, depending on which way the user is holding the phone.
EDIT: I now know that I can't have both landscape and portrait orientations in a single video, so the result I'm after is a final video in landscape orientation. I've figured out how to switch all the orientations and scales so that everything comes out the same way, but my output is a portrait video. If someone could help me change this so my output is landscape, it would be much appreciated.
Here is the function I use to get the instruction for each video:
func videoTransformForTrack(asset: AVAsset) -> CGAffineTransform
{
    var return_value: CGAffineTransform?

    let assetTrack = asset.tracksWithMediaType(AVMediaTypeVideo)[0]
    let transform = assetTrack.preferredTransform
    let assetInfo = orientationFromTransform(transform)

    // Scale so the (possibly rotated) clip fits the screen width.
    var scaleToFitRatio = UIScreen.mainScreen().bounds.width / assetTrack.naturalSize.width
    if assetInfo.isPortrait
    {
        // Portrait clips are rotated 90°, so the visible width is naturalSize.height.
        scaleToFitRatio = UIScreen.mainScreen().bounds.width / assetTrack.naturalSize.height
        let scaleFactor = CGAffineTransformMakeScale(scaleToFitRatio, scaleToFitRatio)
        return_value = CGAffineTransformConcat(assetTrack.preferredTransform, scaleFactor)
    }
    else
    {
        let scaleFactor = CGAffineTransformMakeScale(scaleToFitRatio, scaleToFitRatio)
        // Scale, then push the clip down so it sits centered in the portrait frame.
        var concat = CGAffineTransformConcat(
            CGAffineTransformConcat(assetTrack.preferredTransform, scaleFactor),
            CGAffineTransformMakeTranslation(0, UIScreen.mainScreen().bounds.width / 2))
        if assetInfo.orientation == .Down
        {
            // Upside-down clips need an extra 180° rotation plus a recentering translation.
            let fixUpsideDown = CGAffineTransformMakeRotation(CGFloat(M_PI))
            let windowBounds = UIScreen.mainScreen().bounds
            let yFix = assetTrack.naturalSize.height + windowBounds.height
            let centerFix = CGAffineTransformMakeTranslation(assetTrack.naturalSize.width, yFix)
            concat = CGAffineTransformConcat(CGAffineTransformConcat(fixUpsideDown, centerFix), scaleFactor)
        }
        return_value = concat
    }
    return return_value!
}
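(The question doesn't show orientationFromTransform. For reference, a minimal sketch of what it presumably looks like, based on the widely circulated video-merging tutorial code this function appears to derive from; the exact body in my project may differ:)

func orientationFromTransform(transform: CGAffineTransform) -> (orientation: UIImageOrientation, isPortrait: Bool)
{
    var assetOrientation = UIImageOrientation.Up
    var isPortrait = false
    if transform.a == 0 && transform.b == 1.0 && transform.c == -1.0 && transform.d == 0
    {
        // 90° rotation: shot in portrait.
        assetOrientation = .Right
        isPortrait = true
    }
    else if transform.a == 0 && transform.b == -1.0 && transform.c == 1.0 && transform.d == 0
    {
        // -90° rotation: portrait, flipped relative to .Right.
        assetOrientation = .Left
        isPortrait = true
    }
    else if transform.a == 1.0 && transform.b == 0 && transform.c == 0 && transform.d == 1.0
    {
        // Identity: landscape, the sensor's native orientation.
        assetOrientation = .Up
    }
    else if transform.a == -1.0 && transform.b == 0 && transform.c == 0 && transform.d == -1.0
    {
        // 180° rotation: landscape, upside down.
        assetOrientation = .Down
    }
    return (assetOrientation, isPortrait)
}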
The exporter:
// Create AVMutableComposition to contain all AVMutableComposition tracks
let mix_composition = AVMutableComposition()
var total_time = kCMTimeZero

// Loop over videos and create tracks, keep incrementing total duration
let video_track = mix_composition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: CMPersistentTrackID())
var instruction = AVMutableVideoCompositionLayerInstruction(assetTrack: video_track)
for video in videos
{
    // Use a slightly shortened duration (0.1 s less than the full clip).
    let shortened_duration = CMTimeSubtract(video.duration, CMTimeMake(1, 10))
    let videoAssetTrack = video.tracksWithMediaType(AVMediaTypeVideo)[0]
    do
    {
        try video_track.insertTimeRange(CMTimeRangeMake(kCMTimeZero, shortened_duration),
                                        ofTrack: videoAssetTrack,
                                        atTime: total_time)
        video_track.preferredTransform = videoAssetTrack.preferredTransform
    }
    catch _
    {
    }
    instruction.setTransform(videoTransformForTrack(video), atTime: total_time)
    // Add video duration to total time
    total_time = CMTimeAdd(total_time, shortened_duration)
}

// Create main instruction for the video composition
let main_composition = AVMutableVideoComposition()
let main_instruction = AVMutableVideoCompositionInstruction()
main_instruction.timeRange = CMTimeRangeMake(kCMTimeZero, total_time)
main_instruction.layerInstructions = [instruction]
main_composition.instructions = [main_instruction]
main_composition.frameDuration = CMTimeMake(1, 30)
// The render size is the portrait screen bounds, which is why the output comes out portrait.
main_composition.renderSize = CGSize(width: UIScreen.mainScreen().bounds.width, height: UIScreen.mainScreen().bounds.height)

let exporter = AVAssetExportSession(asset: mix_composition, presetName: AVAssetExportPreset640x480)
exporter!.outputURL = final_url
exporter!.outputFileType = AVFileTypeMPEG4
exporter!.shouldOptimizeForNetworkUse = true
exporter!.videoComposition = main_composition

// 6 - Perform the Export
exporter!.exportAsynchronouslyWithCompletionHandler()
{
    // Assign return values based on success of export
    dispatch_async(dispatch_get_main_queue(), { () -> Void in
        self.exportDidFinish(exporter!)
    })
}
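Regarding the EDIT: the portrait output comes from the renderSize above, which is set to the portrait screen bounds. A minimal sketch of one way to get landscape output instead: render into a fixed landscape canvas and fit every clip inside it. The 1280x720 size and the landscapeTransformForTrack name are illustrative, not from the question, and this assumes each track's preferredTransform maps the clip into a rect anchored at the origin, as camera footage normally is:

// Render into a landscape canvas instead of the portrait screen bounds.
let renderSize = CGSize(width: 1280, height: 720)
main_composition.renderSize = renderSize

func landscapeTransformForTrack(asset: AVAsset, renderSize: CGSize) -> CGAffineTransform
{
    let assetTrack = asset.tracksWithMediaType(AVMediaTypeVideo)[0]
    let preferred = assetTrack.preferredTransform

    // Dimensions of the clip once its preferred transform is applied
    // (size transforms ignore translation, so abs() handles the rotated cases).
    let transformedSize = CGSizeApplyAffineTransform(assetTrack.naturalSize, preferred)
    let width = abs(transformedSize.width)
    let height = abs(transformedSize.height)

    // Fit inside the landscape canvas: portrait clips get pillarboxed,
    // landscape clips fill the frame.
    let scale = min(renderSize.width / width, renderSize.height / height)
    let scaled = CGAffineTransformConcat(preferred, CGAffineTransformMakeScale(scale, scale))

    // Center the scaled clip on the canvas.
    let tx = (renderSize.width - width * scale) / 2
    let ty = (renderSize.height - height * scale) / 2
    return CGAffineTransformConcat(scaled, CGAffineTransformMakeTranslation(tx, ty))
}

Passing that into instruction.setTransform in place of videoTransformForTrack should keep landscape clips full frame and center portrait clips, with the exported file itself coming out landscape.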
Sorry for the length; I just want to make sure I'm very clear about what I'm asking, because the other answers haven't worked for me.
Answer 0 (score: 1)
I'm not sure your orientationFromTransform() is giving you the correct orientation.
I think you should try modifying it, or try something like this:
extension AVAsset {
    func videoOrientation() -> (orientation: UIInterfaceOrientation, device: AVCaptureDevicePosition) {
        var orientation: UIInterfaceOrientation = .Unknown
        var device: AVCaptureDevicePosition = .Unspecified

        let tracks: [AVAssetTrack] = self.tracksWithMediaType(AVMediaTypeVideo)
        if let videoTrack = tracks.first {
            let t = videoTrack.preferredTransform
            if (t.a == 0 && t.b == 1.0 && t.d == 0) {
                orientation = .Portrait
                if t.c == 1.0 {
                    device = .Front
                } else if t.c == -1.0 {
                    device = .Back
                }
            }
            else if (t.a == 0 && t.b == -1.0 && t.d == 0) {
                orientation = .PortraitUpsideDown
                if t.c == -1.0 {
                    device = .Front
                } else if t.c == 1.0 {
                    device = .Back
                }
            }
            else if (t.a == 1.0 && t.b == 0 && t.c == 0) {
                orientation = .LandscapeRight
                if t.d == -1.0 {
                    device = .Front
                } else if t.d == 1.0 {
                    device = .Back
                }
            }
            else if (t.a == -1.0 && t.b == 0 && t.c == 0) {
                orientation = .LandscapeLeft
                if t.d == 1.0 {
                    device = .Front
                } else if t.d == -1.0 {
                    device = .Back
                }
            }
        }
        return (orientation, device)
    }
}
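A quick usage sketch of the extension (videoURL is a placeholder for one of your asset URLs):

let asset = AVAsset(URL: videoURL)
let result = asset.videoOrientation()
if result.orientation == .Portrait || result.orientation == .PortraitUpsideDown {
    // This clip needs rotating/scaling to fit the landscape canvas.
}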