How to improve AVCapturePhotoOutput performance

Date: 2016-12-19 21:22:14

Tags: swift avcapture

What I want to do is capture iPhone camera frames with AVCaptureSession in an Xcode / Swift 3 project.

What I did is implement AVCaptureSession, AVCapturePhotoOutput, and the related objects. It works well: the didOutputSampleBuffer delegate is called for every frame. What I now want to do is run a simple task on each frame: I just want to apply a threshold, which should be trivial, since I only need to iterate once over all the pixels of the frame.
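
For context, here is a minimal Swift 3 sketch of the kind of session setup being described, assuming an AVCaptureVideoDataOutput feeding the delegate shown further below (the BGRA pixel format and queue label are illustrative, not taken from the question):

import AVFoundation

// Minimal setup sketch (illustrative): a session whose video data output
// calls the sample-buffer delegate once per frame.
func makeSession(delegate: AVCaptureVideoDataOutputSampleBufferDelegate) -> AVCaptureSession {
    let session = AVCaptureSession()

    // Default back camera as input.
    if let camera = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo),
       let input = try? AVCaptureDeviceInput(device: camera),
       session.canAddInput(input) {
        session.addInput(input)
    }

    // Frames are delivered as BGRA pixel buffers on a dedicated serial queue.
    let output = AVCaptureVideoDataOutput()
    output.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
    output.setSampleBufferDelegate(delegate, queue: DispatchQueue(label: "camera.frames"))
    if session.canAddOutput(output) {
        session.addOutput(output)
    }

    session.startRunning()
    return session
}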

I have read some tutorials explaining how to convert the raw pointer into a UIImage and display the result in a UIImageView.
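
For comparison, the display-only conversion such tutorials usually show boils down to something like the following Core Image sketch; note that it gives no per-pixel access, so it cannot replace the threshold loop itself:

import CoreImage
import UIKit

let ciContext = CIContext()   // create once, not per frame

// Wrap the CVPixelBuffer in a CIImage and let Core Image produce the CGImage.
func uiImage(from pixelBuffer: CVPixelBuffer) -> UIImage? {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}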

But this is slow, and I don't understand why, because my task does almost nothing: just a threshold and a bit of code to convert the image.

Do you know whether I made a mistake, or whether there is a better way to do this?

Thanks.

import AVFoundation
import UIKit

// Assumed definition (not shown in the question): one 32-bit ARGB pixel.
struct PixelData
{
    var a: UInt8
    var r: UInt8
    var g: UInt8
    var b: UInt8
}

class MyClass: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate
{
    var myimageview: UIImageView!   // assumed to be connected elsewhere

    func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!)
    {
        // These could be set once at session setup instead of on every frame.
        connection.videoOrientation = .portrait
        connection.isVideoMirrored = true

        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        // The camera buffer is only read, so a read-only lock is sufficient.
        CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)

        let width = CVPixelBufferGetWidth(pixelBuffer)
        let height = CVPixelBufferGetHeight(pixelBuffer)
        let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)

        // Output bitmap, pre-filled with opaque black.
        let black = PixelData(a: 255, r: 0, g: 0, b: 0)
        var pixelData = [PixelData](repeating: black, count: width * height)

        if let baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer)
        {
            let buf = baseAddress.assumingMemoryBound(to: UInt8.self)

            var cpt = 0   // counts thresholded pixels (not used further here)

            for y in 0..<height
            {
                for x in 0..<width
                {
                    let idx = x + y * width

                    // BGRA layout: +0 = blue, +1 = green, +2 = red, +3 = alpha.
                    // Threshold: paint "mostly red" camera pixels green.
                    if buf[bytesPerRow * y + x * 4 + 2] > 150
                        && buf[bytesPerRow * y + x * 4 + 1] < 150
                        && buf[bytesPerRow * y + x * 4 + 0] < 150
                    {
                        pixelData[idx].g = 255
                        cpt += 1
                    }
                    // else: leave the pixel black (the default fill).
                }
            }
        }

        // pixelData is a copy, so the camera buffer can be released now.
        CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)

        // Wrap the thresholded pixels in a CGImage (8-bit ARGB).
        var data = pixelData
        guard let providerRef = CGDataProvider(
            data: NSData(bytes: &data, length: data.count * MemoryLayout<PixelData>.size) as CFData
        ) else { return }

        guard let cgim = CGImage(
            width: width,
            height: height,
            bitsPerComponent: 8,
            bitsPerPixel: 32,
            bytesPerRow: width * MemoryLayout<PixelData>.size,
            space: CGColorSpaceCreateDeviceRGB(),
            bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedFirst.rawValue),
            provider: providerRef,
            decode: nil,
            shouldInterpolate: true,
            intent: .defaultIntent
        ) else { return }

        let image = UIImage(cgImage: cgim)

        // UI work must go back to the main queue.
        DispatchQueue.main.async { [weak self] in
            self?.myimageview.image = image
        }
    }
}
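
As a point of comparison only (not from the question): one commonly cited cost in a loop like the one above is the width × height bounds-checked array subscripting, especially in unoptimized debug builds. A sketch of the same threshold written through an unsafe buffer pointer, which skips that per-write bookkeeping, could look like this:

// Illustrative sketch only: the same red-threshold loop, writing through an
// unsafe buffer pointer. Assumes the same BGRA input layout as above.
func threshold(buf: UnsafePointer<UInt8>, width: Int, height: Int,
               bytesPerRow: Int, into pixelData: inout [PixelData])
{
    pixelData.withUnsafeMutableBufferPointer { out in
        for y in 0..<height
        {
            let row = bytesPerRow * y
            for x in 0..<width
            {
                let p = row + x * 4
                if buf[p + 2] > 150 && buf[p + 1] < 150 && buf[p] < 150
                {
                    out[x + y * width] = PixelData(a: 255, r: 0, g: 255, b: 0)
                }
            }
        }
    }
}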

0 Answers:

No answers yet.