Recording a square video with AVFoundation and adding a watermark

Date: 2016-04-20 12:29:16

Tags: ios swift video avfoundation

Illustration of what I'm trying to do

I'm trying to do the following:

  • Play music
  • Record a square video (I have a container in a view that shows what you're recording)
  • Add a label on top, and the app's icon & name in the bottom-left corner of the square video.

So far I've managed to play the music, show the AVCaptureVideoPreviewLayer in a square container in a different view, and save the video to the camera roll.

The problem is that I could barely find a few vague tutorials about using AVFoundation, and this being my first app, that makes things really hard.

I managed to do those things, but I still don't understand how AVFoundation actually works. The documentation is vague for a beginner, I haven't found a tutorial for what I specifically want, and stitching together multiple tutorials (mostly written in Obj-C) has made this impossible. My problems are the following:

  1. The video doesn't get saved as a square. (Worth mentioning: the app doesn't support landscape orientation.)
  2. The video has no audio. (I think I need to add some kind of audio input in addition to the video one.)
  3. How do I add a watermark to the video?
  4. I have a bug: I created a view (messageView; see it in the code) with text & an image to let the user know the video was saved to the camera roll. But if I start recording a second time, the view shows up while the video is recording, not after it. I suspect it's related to every video being given the same name.
  5. So here's my setup:

    override func viewDidLoad() {
            super.viewDidLoad()
    
            // Preset For High Quality
            captureSession.sessionPreset = AVCaptureSessionPresetHigh
    
            // Get available devices capable of recording video
            let devices = AVCaptureDevice.devicesWithMediaType(AVMediaTypeVideo) as! [AVCaptureDevice]
    
            // Get back camera
            for device in devices
            {
                if device.position == AVCaptureDevicePosition.Back
                {
                    currentDevice = device
                }
            }
    
            // Set Input
            let captureDeviceInput: AVCaptureDeviceInput
            do
            {
                captureDeviceInput = try AVCaptureDeviceInput(device: currentDevice)
            }
            catch
            {
                print(error)
                return
            }
    
            // Set Output
            videoFileOutput = AVCaptureMovieFileOutput()
    
            // Configure Session w/ Input & Output Devices
            captureSession.addInput(captureDeviceInput)
            captureSession.addOutput(videoFileOutput)
    
            // Show Camera Preview
            cameraPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
            view.layer.addSublayer(cameraPreviewLayer!)
            cameraPreviewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
            let width = view.bounds.width*0.85
            cameraPreviewLayer?.frame = CGRectMake(0, 0, width, width)
    
            // Bring Record Button To Front
            view.bringSubviewToFront(recordButton)
            captureSession.startRunning()
    
    //        // Bring Message To Front
    //        view.bringSubviewToFront(messageView)
    //        view.bringSubviewToFront(messageText)
    //        view.bringSubviewToFront(messageImage)
        }
    

    Then when I press the record button:

    @IBAction func capture(sender: AnyObject) {
        if !isRecording
        {
            isRecording = true
    
            UIView.animateWithDuration(0.5, delay: 0.0, options: [.Repeat, .Autoreverse, .AllowUserInteraction], animations: { () -> Void in
                self.recordButton.transform = CGAffineTransformMakeScale(0.5, 0.5)
                }, completion: nil)
    
            let outputPath = NSTemporaryDirectory() + "output.mov"
            let outputFileURL = NSURL(fileURLWithPath: outputPath)
            videoFileOutput?.startRecordingToOutputFileURL(outputFileURL, recordingDelegate: self)
        }
        else
        {
            isRecording = false
    
            UIView.animateWithDuration(0.5, delay: 0, options: [], animations: { () -> Void in
                self.recordButton.transform = CGAffineTransformMakeScale(1.0, 1.0)
                }, completion: nil)
            recordButton.layer.removeAllAnimations()
            videoFileOutput?.stopRecording()
        }
    }
    

    And after the video has been recorded:

    func captureOutput(captureOutput: AVCaptureFileOutput!, didFinishRecordingToOutputFileAtURL outputFileURL: NSURL!, fromConnections connections: [AnyObject]!, error: NSError!) {
        let outputPath = NSTemporaryDirectory() + "output.mov"
        if UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(outputPath)
        {
            UISaveVideoAtPathToSavedPhotosAlbum(outputPath, self, nil, nil)
            // Show Success Message
            UIView.animateWithDuration(0.4, delay: 0, options: [], animations: {
                self.messageView.alpha = 0.8
                }, completion: nil)
            UIView.animateWithDuration(0.4, delay: 0, options: [], animations: {
                self.messageText.alpha = 1.0
                }, completion: nil)
            UIView.animateWithDuration(0.4, delay: 0, options: [], animations: {
                self.messageImage.alpha = 1.0
                }, completion: nil)
            // Hide Message
            UIView.animateWithDuration(0.4, delay: 1, options: [], animations: {
                self.messageView.alpha = 0
                }, completion: nil)
            UIView.animateWithDuration(0.4, delay: 1, options: [], animations: {
                self.messageText.alpha = 0
                }, completion: nil)
            UIView.animateWithDuration(0.4, delay: 1, options: [], animations: {
                self.messageImage.alpha = 0
                }, completion: nil)
        }
    }
    

    So what do I need to do to fix these? I've been searching and looking at tutorials but I can't figure it out... I read about adding watermarks, and I found it has something to do with adding CALayers on top of the video. But obviously I can't do that yet, since I don't even know how to make the video square or add audio.

1 Answer:

Answer 0 (score: 3):

A few things:

As far as audio goes, you're adding the video (camera) input but no audio input. So do this to get sound:

    let audioInputDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeAudio)

    do {
        let input = try AVCaptureDeviceInput(device: audioInputDevice)

        if sourceAVFoundation.captureSession.canAddInput(input) {
            sourceAVFoundation.captureSession.addInput(input)
        } else {
            NSLog("ERROR: Can't add audio input")
        }
    } catch let error {
        NSLog("ERROR: Getting input device: \(error)")
    }

To make the video square, you're going to have to look at using AVAssetWriter instead of AVCaptureFileOutput. This is more complex, but you get more "power". You've already created an AVCaptureSession, which is great; to hook up the AssetWriter, you'll need to do something like this:

    let fileManager = NSFileManager.defaultManager()
    let urls = fileManager.URLsForDirectory(.DocumentDirectory, inDomains: .UserDomainMask)
    guard let documentDirectory: NSURL = urls.first else {
        print("Video Controller: getAssetWriter: documentDir Error")
        return nil
    }

    let local_video_name = NSUUID().UUIDString + ".mp4"
    self.videoOutputURL = documentDirectory.URLByAppendingPathComponent(local_video_name)

    guard let url = self.videoOutputURL else {
        return nil
    }


    self.assetWriter = try? AVAssetWriter(URL: url, fileType: AVFileTypeMPEG4)

    guard let writer = self.assetWriter else {
        return nil
    }

    //TODO: Set your desired video size here! 
    let videoSettings: [String : AnyObject] = [
        AVVideoCodecKey  : AVVideoCodecH264,
        AVVideoWidthKey  : captureSize.width,
        AVVideoHeightKey : captureSize.height,
        AVVideoCompressionPropertiesKey : [
            AVVideoAverageBitRateKey : 200000,
            AVVideoProfileLevelKey : AVVideoProfileLevelH264Baseline41,
            AVVideoMaxKeyFrameIntervalKey : 90,
        ],
    ]

    assetWriterInputCamera = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings)
    assetWriterInputCamera?.expectsMediaDataInRealTime = true
    writer.addInput(assetWriterInputCamera!)

    let audioSettings : [String : AnyObject] = [
        AVFormatIDKey : NSInteger(kAudioFormatMPEG4AAC),
        AVNumberOfChannelsKey : 2,
        AVSampleRateKey : NSNumber(double: 44100.0)
    ]

    assetWriterInputAudio = AVAssetWriterInput(mediaType: AVMediaTypeAudio, outputSettings: audioSettings)
    assetWriterInputAudio?.expectsMediaDataInRealTime = true
    writer.addInput(assetWriterInputAudio!)
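
As a side note on the square part (a hedged sketch, not from the original answer): since an AVAssetWriterInput scales incoming frames to the dimensions in its output settings, one way to get a square file without touching pixel buffers yourself is to give the writer input equal width and height plus an aspect-fill scaling mode. `sideLength` here is a made-up value; pick the edge size you want.

```swift
// Hypothetical square edge size — not from the original answer.
let sideLength: NSNumber = 640

let squareVideoSettings: [String : AnyObject] = [
    AVVideoCodecKey   : AVVideoCodecH264,
    AVVideoWidthKey   : sideLength,
    AVVideoHeightKey  : sideLength,
    // Scale-and-crop the incoming (non-square) camera frames to fill the square.
    AVVideoScalingModeKey : AVVideoScalingModeResizeAspectFill,
]

assetWriterInputCamera = AVAssetWriterInput(mediaType: AVMediaTypeVideo,
                                            outputSettings: squareVideoSettings)
```

With `AVVideoScalingModeResizeAspectFill` the writer crops the excess rather than letterboxing, which matches the square-preview behavior described in the question.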

Once you have the AssetWriter set up... then hook up some outputs for the video and audio:

    let bufferAudioQueue = dispatch_queue_create("audio buffer delegate", DISPATCH_QUEUE_SERIAL)
    let audioOutput = AVCaptureAudioDataOutput()
    audioOutput.setSampleBufferDelegate(self, queue: bufferAudioQueue)
    captureSession.addOutput(audioOutput)

    // Always add video last...
    let bufferVideoQueue = dispatch_queue_create("video buffer delegate", DISPATCH_QUEUE_SERIAL)
    let videoOutput = AVCaptureVideoDataOutput()
    videoOutput.setSampleBufferDelegate(self, queue: bufferVideoQueue)
    captureSession.addOutput(videoOutput)
    if let connection = videoOutput.connectionWithMediaType(AVMediaTypeVideo) {
        if connection.supportsVideoOrientation {
            // Force recording to portrait
            connection.videoOrientation = AVCaptureVideoOrientation.Portrait
        }

        self.outputConnection = connection
    }


    captureSession.startRunning()

Finally, you need to capture the buffers and process that stuff... make sure your class becomes a delegate of both AVCaptureVideoDataOutputSampleBufferDelegate and AVCaptureAudioDataOutputSampleBufferDelegate:
    //MARK: Implementation for AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureAudioDataOutputSampleBufferDelegate
    func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {

        if !self.isRecordingStarted {
            return
        }

        // Note: this assumes the buffer queues created above are stored in properties.
        if let audio = self.assetWriterInputAudio where connection.audioChannels.count > 0 && audio.readyForMoreMediaData {

            dispatch_async(bufferAudioQueue) {
                audio.appendSampleBuffer(sampleBuffer)
            }
            return
        }

        if let camera = self.assetWriterInputCamera where camera.readyForMoreMediaData {
            dispatch_async(bufferVideoQueue) {
                camera.appendSampleBuffer(sampleBuffer)
            }
        }
    }

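One piece the snippets above leave out is starting and finishing the writer session. A rough sketch, assuming the `assetWriter`, `assetWriterInputCamera`, `assetWriterInputAudio`, `isRecordingStarted`, and `videoOutputURL` properties from the code above:

```swift
// Hedged sketch: start the writer session on the first buffer, finish when the user stops.
// Assumes the assetWriter / writer-input / URL properties from the snippets above.

// In the sample-buffer delegate, before appending the very first buffer:
if assetWriter?.status == .Unknown {
    assetWriter?.startWriting()
    // Anchor the writer's timeline to the first sample's timestamp so A/V stay in sync.
    assetWriter?.startSessionAtSourceTime(CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
}

// When the user taps stop:
func stopRecording() {
    isRecordingStarted = false
    assetWriterInputCamera?.markAsFinished()
    assetWriterInputAudio?.markAsFinished()
    assetWriter?.finishWritingWithCompletionHandler {
        if let path = self.videoOutputURL?.path {
            UISaveVideoAtPathToSavedPhotosAlbum(path, nil, nil, nil)
        }
    }
}
```

Using a fresh UUID-based file name per recording (as in the `getAssetWriter` snippet above) also sidesteps the question's bug #4, where reusing "output.mov" made the old file show up on the second recording.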
There are some missing pieces here, but hopefully this is enough for you to figure it out in combination with the documentation.

Finally, if you want to add the watermark, there are many ways this can be done in real time, but one possible way is to modify the sampleBuffer and write the watermark into the image right then. You'll find other questions on StackOverflow dealing with that.
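
For the non-real-time route the question hints at (CALayers on top of the video), one possible approach is a post-recording export pass with AVVideoCompositionCoreAnimationTool. A hedged sketch; `videoURL` and `watermarkImage` are placeholders you would supply, and the overlay frame/size values are arbitrary:

```swift
// Hedged sketch of a post-recording watermark pass, not from the original answer.
// `videoURL` and `watermarkImage` are placeholders.
func addWatermark(videoURL: NSURL, watermarkImage: UIImage, completion: (NSURL?) -> Void) {
    let asset = AVURLAsset(URL: videoURL)
    guard let track = asset.tracksWithMediaType(AVMediaTypeVideo).first else {
        completion(nil)
        return
    }
    let size = track.naturalSize

    // Layer tree: the video layer plus the watermark overlay inside one parent layer.
    let videoLayer = CALayer()
    videoLayer.frame = CGRect(origin: CGPointZero, size: size)
    let overlayLayer = CALayer()
    overlayLayer.contents = watermarkImage.CGImage
    overlayLayer.frame = CGRectMake(16, 16, 80, 80) // bottom-left in video coordinates
    let parentLayer = CALayer()
    parentLayer.frame = videoLayer.frame
    parentLayer.addSublayer(videoLayer)
    parentLayer.addSublayer(overlayLayer)

    // Build a composition from the asset and attach the layer tree to it.
    let composition = AVMutableVideoComposition(propertiesOfAsset: asset)
    composition.animationTool = AVVideoCompositionCoreAnimationTool(
        postProcessingAsVideoLayer: videoLayer, inLayer: parentLayer)

    guard let export = AVAssetExportSession(asset: asset,
        presetName: AVAssetExportPresetHighestQuality) else {
        completion(nil)
        return
    }
    export.outputURL = NSURL(fileURLWithPath: NSTemporaryDirectory() + "watermarked.mov")
    export.outputFileType = AVFileTypeQuickTimeMovie
    export.videoComposition = composition
    export.exportAsynchronouslyWithCompletionHandler {
        completion(export.status == .Completed ? export.outputURL : nil)
    }
}
```

This trades the complexity of per-frame buffer editing for an extra export step after recording finishes, which is usually fine for short clips.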