AVAssetWriter queue guidance, Swift 3

Date: 2017-06-18 15:19:38

Tags: swift queue avfoundation avassetwriter

Can someone give me some guidance on using queues in AVFoundation?

Later in my app I want to do some processing on individual frames, so I need to use AVCaptureVideoDataOutput.

To start with, I figured I would capture the images and then write them (unprocessed) with AVAssetWriter.

I successfully stream frames from the camera to an image preview by setting up an AVCaptureSession:

func initializeCameraAndMicrophone() {
    // set up the capture session
    captureSession = AVCaptureSession()
    captureSession.sessionPreset = AVCaptureSessionPreset1280x720 // set resolution to 1280x720

    // set up the camera
    let camera = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)

    do {
        let cameraInput = try AVCaptureDeviceInput(device: camera)
        if captureSession.canAddInput(cameraInput) {
            captureSession.addInput(cameraInput)
        }
    } catch {
        print("Error setting device camera input: \(error)")
        return
    }

    videoOutputStream.setSampleBufferDelegate(self, queue: DispatchQueue(label: "sampleBuffer", attributes: []))

    if captureSession.canAddOutput(videoOutputStream) {
        captureSession.addOutput(videoOutputStream)
    }

    captureSession.startRunning()
}

Then each new frame triggers the captureOutput delegate:

func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!)
{
    let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    let cameraImage = CIImage(cvPixelBuffer: pixelBuffer!)
    let bufferImage = UIImage(ciImage: cameraImage)

    DispatchQueue.main.async
      {
        // send captured frame to the videoPreview
        self.videoPreview.image = bufferImage


        // if recording is active append bufferImage to video frame
        while (recordingNow == true){

            print("OK we're recording!")

            /// Append images to video 
            while (writerInput.isReadyForMoreMediaData) {

                let lastFrameTime = CMTimeMake(Int64(frameCount), videoFPS)
                let presentationTime = frameCount == 0 ? lastFrameTime : CMTimeAdd(lastFrameTime, frameDuration)

                pixelBufferAdaptor.append(pixelBuffer!, withPresentationTime: presentationTime)


                frameCount += 1              
            }
        }
    }
}

So this streams frames to the image preview until I press the button that calls the startVideoRecording function (which sets up the AVAssetWriter). From that point on, the delegate is never called again!

The AVAssetWriter is set up like this:

func startVideoRecording() {


    guard let assetWriter = createAssetWriter(path: filePath!, size: videoSize) else {
        print("Error converting images to video: AVAssetWriter not created")
        return
    }

    // AVAssetWriter exists so create AVAssetWriterInputPixelBufferAdaptor
    let writerInput = assetWriter.inputs.filter{ $0.mediaType == AVMediaTypeVideo }.first!


    let sourceBufferAttributes : [String : AnyObject] = [
        kCVPixelBufferPixelFormatTypeKey as String : Int(kCVPixelFormatType_32ARGB) as AnyObject,
        kCVPixelBufferWidthKey as String : videoSize.width as AnyObject,
        kCVPixelBufferHeightKey as String : videoSize.height as AnyObject,
        ]

    let pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: writerInput, sourcePixelBufferAttributes: sourceBufferAttributes)

    // Start writing session
    assetWriter.startWriting()
    assetWriter.startSession(atSourceTime: kCMTimeZero)
    if (pixelBufferAdaptor.pixelBufferPool == nil) {
        print("Error converting images to video: pixelBufferPool nil after starting session")

        assetWriter.finishWriting{
            print("assetWriter stopped!")
        }
        recordingNow = false

        return
    }

    frameCount = 0

    print("Recording started!")
}
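The createAssetWriter(path:size:) helper called above isn't shown in the question. A minimal sketch of what it might look like in Swift 3 is below; the H.264 codec, MPEG-4 file type, and real-time flag are assumptions on my part, not taken from the question:

```swift
import AVFoundation

// Hypothetical sketch of the createAssetWriter(path:size:) helper.
func createAssetWriter(path: String, size: CGSize) -> AVAssetWriter? {
    let pathURL = URL(fileURLWithPath: path)
    do {
        // Remove any stale file at the target path, then create the writer.
        try? FileManager.default.removeItem(at: pathURL)
        let writer = try AVAssetWriter(outputURL: pathURL, fileType: AVFileTypeMPEG4)

        // H.264 settings matching the capture resolution (assumed).
        let videoSettings: [String : AnyObject] = [
            AVVideoCodecKey  : AVVideoCodecH264 as AnyObject,
            AVVideoWidthKey  : size.width as AnyObject,
            AVVideoHeightKey : size.height as AnyObject,
        ]

        let input = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings)
        // Needed when feeding frames from a live capture session.
        input.expectsMediaDataInRealTime = true
        if writer.canAdd(input) {
            writer.add(input)
        }
        return writer
    } catch {
        print("Error creating asset writer: \(error)")
        return nil
    }
}
```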

I'm new to AVFoundation, but I suspect I'm messing up my queues somewhere.

1 Answer:

Answer 0 (score: 1)

You have to use a separate serial queue for video/audio capture.

  1. Add this queue property to your class:

    let captureSessionQueue: DispatchQueue = DispatchQueue(label: "sampleBuffer", attributes: [])
    
  2. Start the session on captureSessionQueue. Per the Apple documentation:
     the startRunning() method is a blocking call which can take some time,
     therefore you should perform session setup on a serial queue so that the
     main queue isn't blocked (which keeps the UI responsive).

    captureSessionQueue.async {
        captureSession.startRunning()
    }
    
  3. Set this queue as the capture output's sample buffer delegate queue:

    videoOutputStream.setSampleBufferDelegate(self, queue: captureSessionQueue)
    
  4. Call startVideoRecording on captureSessionQueue:

    captureSessionQueue.async {
        startVideoRecording()
    }
    
  5. In the captureOutput delegate method, put all AVFoundation method calls inside captureSessionQueue.async:

    DispatchQueue.main.async
      {
    
        // send captured frame to the videoPreview
        self.videoPreview.image = bufferImage
    
        captureSessionQueue.async {
            // if recording is active append bufferImage to video frame
            while (recordingNow == true){
    
                print("OK we're recording!")
    
                /// Append images to video 
                while (writerInput.isReadyForMoreMediaData) {
    
                    let lastFrameTime = CMTimeMake(Int64(frameCount), videoFPS)
                    let presentationTime = frameCount == 0 ? lastFrameTime : CMTimeAdd(lastFrameTime, frameDuration)
    
                    pixelBufferAdaptor.append(pixelBuffer!, withPresentationTime: presentationTime)
    
    
                    frameCount += 1              
                }
            }
        }
    }
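Neither the question nor the answer shows how recording is stopped. A hedged sketch of what a stop routine could look like, following the same serial-queue pattern as the steps above; stopVideoRecording is a hypothetical name, and assetWriter, writerInput, recordingNow, and captureSessionQueue are assumed to be properties of the class:

```swift
// Hypothetical stop routine: finish the writer input and finalize the file
// on the same serial queue used for capture, so it never races the delegate.
func stopVideoRecording() {
    captureSessionQueue.async {
        self.recordingNow = false
        self.writerInput.markAsFinished()
        self.assetWriter.finishWriting {
            print("Finished writing video to \(self.assetWriter.outputURL)")
        }
    }
}
```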