Simulating AVLayerVideoGravityResizeAspectFill: crop and center video to mimic the preview without losing sharpness

Date: 2016-02-08 02:29:58

Tags: ios video avfoundation avassetwriter avassetexportsession

Per this SO post, the code below rotates, centers, and crops a video captured live by the user.

The capture session uses AVCaptureSessionPresetHigh as the preset, and the preview layer uses AVLayerVideoGravityResizeAspectFill as its video gravity. This preview is extremely sharp.

The exported video, however, is not as sharp, ostensibly because scaling down from the 1920x1080 resolution of the 5S's back camera to 320x568 (the target size of the exported video) introduces blurriness by throwing away pixels.

Assuming there is no way to scale from 1920x1080 to 320x568 without some blurriness, the question becomes: how do you mimic the sharpness of the preview layer?

Somehow Apple is using an algorithm to convert the 1920x1080 video into a crisp 320x568 preview frame.

Is there a way to mimic this with AVAssetWriter or AVAssetExportSession?
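For reference, here is a minimal sketch of the crop math that AVLayerVideoGravityResizeAspectFill effectively performs (the helper name and the Swift 2-style code are my own, not from the post):

    // Hypothetical helper: the source rect that aspect-fill actually displays.
    // sourceSize is the captured frame (e.g. 1080x1920 after rotation),
    // targetSize is the render/export size (e.g. 320x568).
    func aspectFillCropRect(sourceSize: CGSize, targetSize: CGSize) -> CGRect {
        // Scale so the target is completely covered, then center the overflow
        let scale = max(targetSize.width / sourceSize.width,
                        targetSize.height / sourceSize.height)
        let croppedWidth = targetSize.width / scale
        let croppedHeight = targetSize.height / scale
        return CGRectMake((sourceSize.width - croppedWidth) / 2,
                          (sourceSize.height - croppedHeight) / 2,
                          croppedWidth, croppedHeight)
    }

    // A portrait 1080x1920 frame filling a 320x568 layer crops to roughly
    // 1080x1917 centered, so almost no pixels are discarded before scaling.
    let crop = aspectFillCropRect(CGSizeMake(1080, 1920), targetSize: CGSizeMake(320, 568))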

320x568 video captured with Tim's code:

[image]

640x1136 video captured with P&P's code:

[image]

1 Answer:

Answer 0 (score: 3)

Try this. Start a new Single View project in Swift, replace the ViewController with this code, and you should be good to go!

I've set up a previewLayer whose size differs from the output size; change it at the top of the file.

I've added some basic orientation support. The output dimensions differ slightly for landscape vs. portrait. You can specify whatever video dimensions you like here and it should work fine.

Check out the videoSettings dictionary (in getAssetWriter()) for the codec and size of the output file. You can also add other settings there, such as key-frame intervals, to tweak the output.

I've added a recording image to show when it's recording (tap to start, tap to stop); you'll need to add an asset named "recording" to Assets.xcassets (or comment out the line in setupViewControls() where it tries to load it).

That's pretty much it. Good luck!

Oh, and it dumps the video into the app's Documents directory; you'll need to go to Window/Devices and download the container to view the video easily. There's a TODO where you can hook in and copy the file to the photo library (which makes testing way easier); a sketch of that follows the code.

import UIKit
import AVFoundation

class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureAudioDataOutputSampleBufferDelegate {

let CAPTURE_SIZE_LANDSCAPE: CGSize = CGSizeMake(1280, 720)
let CAPTURE_SIZE_PORTRAIT: CGSize = CGSizeMake(720, 1280)

var recordingImage : UIImageView = UIImageView()

var previewLayer : AVCaptureVideoPreviewLayer?

var audioQueue : dispatch_queue_t?
var videoQueue : dispatch_queue_t?

let captureSession = AVCaptureSession()
var assetWriter : AVAssetWriter?
var assetWriterInputCamera : AVAssetWriterInput?
var assetWriterInputAudio : AVAssetWriterInput?
var outputConnection: AVCaptureConnection?

var captureDeviceBack : AVCaptureDevice?
var captureDeviceFront : AVCaptureDevice?
var captureDeviceMic : AVCaptureDevice?
var sessionSetupDone: Bool = false

var isRecordingStarted = false
//var recordingStartedTime = kCMTimeZero
var videoOutputURL : NSURL?

var captureSize: CGSize = CGSizeMake(1280, 720)
var previewFrame: CGRect = CGRectMake(0, 0, 180, 360)

var captureDeviceTrigger = true
var captureDevice: AVCaptureDevice? {
    get {
        return captureDeviceTrigger ? captureDeviceFront : captureDeviceBack
    }
}

override func supportedInterfaceOrientations() -> UIInterfaceOrientationMask {
    return UIInterfaceOrientationMask.AllButUpsideDown
}

override func shouldAutorotate() -> Bool {
    if isRecordingStarted {
        return false
    }

    if UIDevice.currentDevice().orientation == UIDeviceOrientation.PortraitUpsideDown {
        return false
    }

    if let cameraPreview = self.previewLayer {
        if let connection = cameraPreview.connection {
            if connection.supportsVideoOrientation {
                switch UIDevice.currentDevice().orientation {
                case .LandscapeLeft:
                    connection.videoOrientation = .LandscapeRight
                case .LandscapeRight:
                    connection.videoOrientation = .LandscapeLeft
                case .Portrait:
                    connection.videoOrientation = .Portrait
                case .FaceUp:
                    return false
                case .FaceDown:
                    return false
                default:
                    break
                }
            }
        }
    }

    return true
}

override func viewDidLoad() {
    super.viewDidLoad()

    setupViewControls()

    //self.recordingStartedTime = kCMTimeZero

    // Setup capture session related logic
    videoQueue = dispatch_queue_create("video_write_queue", DISPATCH_QUEUE_SERIAL)
    audioQueue = dispatch_queue_create("audio_write_queue", DISPATCH_QUEUE_SERIAL)

    setupCaptureDevices()
    pre_start()
}

//MARK: UI methods
func setupViewControls() {

    // TODO: I have an image (red circle) in an Assets.xcassets. Replace the following with your own image
    recordingImage.frame = CGRect(x: 0, y: 0, width: 50, height: 50)
    recordingImage.image = UIImage(named: "recording")
    recordingImage.hidden = true
    self.view.addSubview(recordingImage)


    // Setup tap to record and stop
    let tapGesture = UITapGestureRecognizer(target: self, action: "didGetTapped:")
    tapGesture.numberOfTapsRequired = 1
    self.view.addGestureRecognizer(tapGesture)

}



func didGetTapped(selector: UITapGestureRecognizer) {
    if self.isRecordingStarted {
        self.view.gestureRecognizers![0].enabled = false
        recordingImage.hidden = true

        self.stopRecording()
    } else {
        recordingImage.hidden = false
        self.startRecording()
    }

    self.isRecordingStarted = !self.isRecordingStarted
}

func switchCamera(selector: UIButton) {
    self.captureDeviceTrigger = !self.captureDeviceTrigger

    pre_start()
}

//MARK: Video logic
func setupCaptureDevices() {
    let devices = AVCaptureDevice.devices()

    for device in devices {
        if  device.hasMediaType(AVMediaTypeVideo) {
            if device.position == AVCaptureDevicePosition.Front {
                captureDeviceFront = device as? AVCaptureDevice
                NSLog("Video Controller: Setup. Front camera is found")
            }
            if device.position == AVCaptureDevicePosition.Back {
                captureDeviceBack = device as? AVCaptureDevice
                NSLog("Video Controller: Setup. Back camera is found")
            }
        }

        if device.hasMediaType(AVMediaTypeAudio) {
            captureDeviceMic = device as? AVCaptureDevice
            NSLog("Video Controller: Setup. Audio device is found")
        }
    }
}

func alertPermission() {
    let permissionAlert = UIAlertController(title: "No Permission", message: "Please allow access to Camera and Microphone", preferredStyle: UIAlertControllerStyle.Alert)
    permissionAlert.addAction(UIAlertAction(title: "Go to settings", style: .Default, handler: { (action: UIAlertAction!) in
        print("Video Controller: Permission for camera/mic denied. Going to settings")
        UIApplication.sharedApplication().openURL(NSURL(string: UIApplicationOpenSettingsURLString)!)
        print(UIApplicationOpenSettingsURLString)
    }))
    presentViewController(permissionAlert, animated: true, completion: nil)
}

func pre_start() {
    NSLog("Video Controller: pre_start")
    let videoPermission = AVCaptureDevice.authorizationStatusForMediaType(AVMediaTypeVideo)
    let audioPermission = AVCaptureDevice.authorizationStatusForMediaType(AVMediaTypeAudio)
    if (videoPermission == AVAuthorizationStatus.Denied) || (audioPermission == AVAuthorizationStatus.Denied) {
        // Alert and bail out; calling pre_start() again here would recurse forever while access is denied
        self.alertPermission()
        return
    }

    if (videoPermission == AVAuthorizationStatus.Authorized) {
        self.start()
        return
    }

    AVCaptureDevice.requestAccessForMediaType(AVMediaTypeVideo, completionHandler: { (granted :Bool) -> Void in
        self.pre_start()
    })
}

func start() {
    NSLog("Video Controller: start")
    if captureSession.running {
        captureSession.beginConfiguration()

        if let currentInput = captureSession.inputs[0] as? AVCaptureInput {
            captureSession.removeInput(currentInput)
        }

        do {
            try captureSession.addInput(AVCaptureDeviceInput(device: captureDevice))
        } catch {
            print("Video Controller: begin session. Error adding video input device")
        }

        captureSession.commitConfiguration()
        return
    }

    do {
        try captureSession.addInput(AVCaptureDeviceInput(device: captureDevice))
        try captureSession.addInput(AVCaptureDeviceInput(device: captureDeviceMic))
    } catch {
        print("Video Controller: start. error adding device: \(error)")
    }

    if let layer = AVCaptureVideoPreviewLayer(session: captureSession) {
        self.previewLayer = layer
        layer.videoGravity = AVLayerVideoGravityResizeAspect

        if let layerConnection = layer.connection {
            if UIDevice.currentDevice().orientation == .LandscapeRight {
                layerConnection.videoOrientation = AVCaptureVideoOrientation.LandscapeLeft
            } else if UIDevice.currentDevice().orientation == .LandscapeLeft {
                layerConnection.videoOrientation = AVCaptureVideoOrientation.LandscapeRight
            } else if UIDevice.currentDevice().orientation == .Portrait {
                layerConnection.videoOrientation = AVCaptureVideoOrientation.Portrait
            }
        }

        // TODO: Set the output size of the Preview Layer here
        layer.frame = previewFrame
        self.view.layer.insertSublayer(layer, atIndex: 0)

    }

    let bufferVideoQueue = dispatch_queue_create("sample buffer delegate", DISPATCH_QUEUE_SERIAL)
    let videoOutput = AVCaptureVideoDataOutput()
    videoOutput.setSampleBufferDelegate(self, queue: bufferVideoQueue)
    captureSession.addOutput(videoOutput)
    if let connection = videoOutput.connectionWithMediaType(AVMediaTypeVideo) {
        self.outputConnection = connection
    }

    let bufferAudioQueue = dispatch_queue_create("audio buffer delegate", DISPATCH_QUEUE_SERIAL)
    let audioOutput = AVCaptureAudioDataOutput()
    audioOutput.setSampleBufferDelegate(self, queue: bufferAudioQueue)
    captureSession.addOutput(audioOutput)

    captureSession.startRunning()
}

func getAssetWriter() -> AVAssetWriter? {
    NSLog("Video Controller: getAssetWriter")
    let fileManager = NSFileManager.defaultManager()
    let urls = fileManager.URLsForDirectory(.DocumentDirectory, inDomains: .UserDomainMask)
    guard let documentDirectory: NSURL = urls.first else {
        print("Video Controller: getAssetWriter: documentDir Error")
        return nil
    }

    let local_video_name = NSUUID().UUIDString + ".mp4"
    self.videoOutputURL = documentDirectory.URLByAppendingPathComponent(local_video_name)

    guard let url = self.videoOutputURL else {
        return nil
    }


    self.assetWriter = try? AVAssetWriter(URL: url, fileType: AVFileTypeMPEG4)

    guard let writer = self.assetWriter else {
        return nil
    }

    let videoSettings: [String : AnyObject] = [
        AVVideoCodecKey  : AVVideoCodecH264,
        AVVideoWidthKey  : captureSize.width,
        AVVideoHeightKey : captureSize.height,
    ]

    assetWriterInputCamera = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings)
    assetWriterInputCamera?.expectsMediaDataInRealTime = true
    writer.addInput(assetWriterInputCamera!)

    let audioSettings : [String : AnyObject] = [
        AVFormatIDKey : NSInteger(kAudioFormatMPEG4AAC),
        AVNumberOfChannelsKey : 2,
        AVSampleRateKey : NSNumber(double: 44100.0)
    ]

    assetWriterInputAudio = AVAssetWriterInput(mediaType: AVMediaTypeAudio, outputSettings: audioSettings)
    assetWriterInputAudio?.expectsMediaDataInRealTime = true
    writer.addInput(assetWriterInputAudio!)

    return writer
}

// Note: this is never called above; invoke it before captureSession.startRunning()
// if you want the 720p preset applied
func configurePreset() {
    NSLog("Video Controller: configurePreset")
    if captureSession.canSetSessionPreset(AVCaptureSessionPreset1280x720) {
        captureSession.sessionPreset = AVCaptureSessionPreset1280x720
    } else {
        captureSession.sessionPreset = AVCaptureSessionPreset1920x1080
    }
}

func startRecording() {
    NSLog("Video Controller: Start recording")

    captureSize = UIDeviceOrientationIsLandscape(UIDevice.currentDevice().orientation) ? CAPTURE_SIZE_LANDSCAPE : CAPTURE_SIZE_PORTRAIT

    if let connection = self.outputConnection {

        if connection.supportsVideoOrientation {

            if UIDevice.currentDevice().orientation == .LandscapeRight {
                connection.videoOrientation = AVCaptureVideoOrientation.LandscapeLeft
                NSLog("orientation: right")
            } else if UIDevice.currentDevice().orientation == .LandscapeLeft {
                connection.videoOrientation = AVCaptureVideoOrientation.LandscapeRight
                NSLog("orientation: left")
            } else {
                connection.videoOrientation = AVCaptureVideoOrientation.Portrait
                NSLog("orientation: portrait")
            }
        }
    }

    if let writer = getAssetWriter() {
        self.assetWriter = writer

        // Start the writer's session at the capture session's clock time so
        // audio and video sample timestamps line up with the file's timeline
        let recordingClock = self.captureSession.masterClock
        writer.startWriting()
        writer.startSessionAtSourceTime(CMClockGetTime(recordingClock))
    }

}

func stopRecording() {
    NSLog("Video Controller: Stop recording")

    if let writer = self.assetWriter {
        writer.finishWritingWithCompletionHandler {
            print("Recording finished")
            // TODO: Handle the video file, copy it from the temp directory etc.
        }
    }
}

//MARK: Implementation for AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureAudioDataOutputSampleBufferDelegate
func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {

    if !self.isRecordingStarted {
        return
    }

    if let audio = self.assetWriterInputAudio where connection.audioChannels.count > 0 && audio.readyForMoreMediaData {

        dispatch_async(audioQueue!) {
            audio.appendSampleBuffer(sampleBuffer)
        }
        return
    }

    if let camera = self.assetWriterInputCamera where camera.readyForMoreMediaData {
        dispatch_async(videoQueue!) {
            camera.appendSampleBuffer(sampleBuffer)
        }
    }
}
}
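
For the TODO in stopRecording, here's a minimal sketch of copying the finished file into the photo library with the Photos framework (my addition, not part of the original answer; it assumes photo-library permission has already been granted):

    import Photos

    // Sketch only: copy the finished recording into the user's photo library.
    // Call it from the finishWritingWithCompletionHandler block with videoOutputURL.
    func copyToPhotoLibrary(url: NSURL) {
        PHPhotoLibrary.sharedPhotoLibrary().performChanges({
            PHAssetChangeRequest.creationRequestForAssetFromVideoAtFileURL(url)
        }, completionHandler: { success, error in
            print("Copy to photo library finished. success: \(success), error: \(error)")
        })
    }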

Additional edit information

From our extra conversation in the comments, it seems what you want is to reduce the physical (file) size of the output video while keeping the dimensions as high as possible (to retain quality). Remember, the size you lay a layer out at on screen is in POINTS, not PIXELS. You're writing the output file in pixels, so it's not a 1:1 comparison to the iPhone's screen reference units.
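A quick illustration of the point/pixel distinction (this snippet is mine, not part of the answer):

    // A 320x568 point layer on a 2x Retina screen is backed by 640x1136 pixels
    let screenScale = UIScreen.mainScreen().scale            // 2.0 on an iPhone 5S
    let pointSize = CGSizeMake(320, 568)                     // layer size in points
    let pixelSize = CGSizeMake(pointSize.width * screenScale,
                               pointSize.height * screenScale) // 640x1136 pixels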

To reduce the size of the output file, you have two easy options:

  1. Reduce the resolution, but if you go too small you'll lose quality on playback, especially if it gets scaled up again when played. Try 640x360 or 720x480 for the output pixels.
  2. Adjust the compression settings. The iPhone has default settings that often produce a higher-quality (larger output file size) video.

Replace the video settings with these options and see how you go:

        let videoSettings: [String : AnyObject] = [
            AVVideoCodecKey  : AVVideoCodecH264,
            AVVideoWidthKey  : captureSize.width,
            AVVideoHeightKey : captureSize.height,
            AVVideoCompressionPropertiesKey : [
                AVVideoAverageBitRateKey : 2000000,
                AVVideoProfileLevelKey : AVVideoProfileLevelH264Main41,
                AVVideoMaxKeyFrameIntervalKey : 90,
            ]
        ]
    

    AVVideoCompressionPropertiesKey tells AVFoundation how to actually compress the video. The lower the bitrate, the higher the compression (so it streams better and uses less disk space, but it will have lower quality). The max key-frame interval is how often an uncompressed frame is written out; setting it higher (in our roughly 30-frames-per-second video, 90 means one key frame every 3 seconds) also reduces quality but decreases size too. You'll find the constants referenced here: https://developer.apple.com/library/prerelease/ios/documentation/AVFoundation/Reference/AVFoundation_Constants/index.html#//apple_ref/doc/constant_group/Video_Settings
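
    As a rough guide to choosing a bitrate, a common rule of thumb (my own addition, not from the answer) is to budget a fixed number of bits per pixel per frame:

        // Rule-of-thumb sketch: average bitrate from resolution, frame rate,
        // and a bits-per-pixel budget (~0.1 is a common H.264 starting point)
        func estimatedAverageBitrate(width: Int, height: Int, fps: Double, bitsPerPixel: Double) -> Int {
            return Int(Double(width * height) * fps * bitsPerPixel)
        }

        // 1280x720 at 30 fps with 0.1 bits/pixel gives roughly 2.7 Mbps,
        // the same ballpark as the 2,000,000 used above
        let bitrate = estimatedAverageBitrate(1280, height: 720, fps: 30, bitsPerPixel: 0.1)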