How can I make a CVPixelBuffer directly from a CIImage instead of a UIImage in Swift?

Date: 2019-01-24 19:36:45

Tags: swift uiimage avassetwriter ciimage cvpixelbuffer

I am recording filtered video through the iPhone camera, and converting each CIImage to a UIImage in real time while recording drives CPU usage way up. My buffer function that creates a CVPixelBuffer takes a UIImage, which so far forces me to do this conversion. I would like a buffer function that takes a CIImage instead, so I can skip the UIImage step entirely. I expect this to greatly improve performance while recording video, since there would be no round trip between the CPU and the GPU.

This is what I have right now. In my captureOutput function, I create a UIImage from the CIImage, which is the filtered image. I create a CVPixelBuffer from the buffer function using the UIImage, and append it to the assetWriter's pixelBufferInput:

let imageUI = UIImage(ciImage: ciImage)

let filteredBuffer: CVPixelBuffer? = buffer(from: imageUI)

let success = self.assetWriterPixelBufferInput?.append(filteredBuffer!, withPresentationTime: self.currentSampleTime!)

My buffer function that takes a UIImage:

func buffer(from image: UIImage) -> CVPixelBuffer? {
    let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue, kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
    var pixelBuffer : CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(image.size.width), Int(image.size.height), kCVPixelFormatType_32ARGB, attrs, &pixelBuffer)

    guard (status == kCVReturnSuccess) else {
        return nil
    }

    CVPixelBufferLockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
    let pixelData = CVPixelBufferGetBaseAddress(pixelBuffer!)

    let videoRecContext = CGContext(data: pixelData,
                            width: Int(image.size.width),
                            height: Int(image.size.height),
                            bitsPerComponent: 8,
                            bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer!), // use the buffer's actual row stride
                            space: (MTLCaptureView?.colorSpace)!, // the current color space, taken from an MTKView
                            bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)

    videoRecContext?.translateBy(x: 0, y: image.size.height)
    videoRecContext?.scaleBy(x: 1.0, y: -1.0)

    UIGraphicsPushContext(videoRecContext!)
    image.draw(in: CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))
    UIGraphicsPopContext()
    CVPixelBufferUnlockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))

    return pixelBuffer
}

3 Answers:

Answer 0 (score: 3):

rob mayoff's answer sums it up, but there is a VERY-VERY-VERY important thing to keep in mind:

Core Image defers rendering until the client requests access to the frame buffer, i.e. CVPixelBufferLockBaseAddress.

I learned this from talking to an Apple technical support engineer, and could not find it anywhere in the documentation. I have only used this on macOS, but I can't imagine it being any different on iOS.

Keep in mind that if you lock the buffer before rendering, it will still work, but it will run one frame behind and the first render will be empty.

Finally, it has been mentioned more than once on SO, and even in this thread: avoid creating a new CVPixelBuffer for every render, because each buffer takes up a lot of system resources. This is why we have CVPixelBufferPool – Apple uses it in its own frameworks, so you can get even better performance! ✌️
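
A minimal sketch of such a pool, assuming a fixed 32BGRA output (the pixel format and the dimensions below are placeholder assumptions, not values from the answer):

import CoreVideo

var pixelBufferPool: CVPixelBufferPool?
let poolAttrs = [kCVPixelBufferPoolMinimumBufferCountKey: 3] as CFDictionary
let bufferAttrs = [kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA,
                   kCVPixelBufferWidthKey: 1920,
                   kCVPixelBufferHeightKey: 1080] as CFDictionary

// Create the pool once, up front.
CVPixelBufferPoolCreate(kCFAllocatorDefault, poolAttrs, bufferAttrs, &pixelBufferPool)

// Per frame: recycle a buffer from the pool instead of calling CVPixelBufferCreate.
var pixelBuffer: CVPixelBuffer?
CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferPool!, &pixelBuffer)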

Answer 1 (score: 2):

Create a CIContext and use it to render the CIImage directly into your CVPixelBuffer with CIContext.render(_: CIImage, to buffer: CVPixelBuffer).
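
For example, a minimal sketch (the names here are illustrative; note that the context should be created once and reused, since creating a CIContext per frame is expensive):

import CoreImage

let ciContext = CIContext() // create once, reuse for every frame

func render(_ image: CIImage, into pixelBuffer: CVPixelBuffer) {
    // Renders the filtered image straight into the pixel buffer,
    // skipping the UIImage/CGContext round trip entirely.
    ciContext.render(image, to: pixelBuffer)
}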

Answer 2 (score: 1):

To expand on the answer I got from rob mayoff, I'll show what I changed below:

In the captureOutput function, I changed my code to:

let filteredBuffer: CVPixelBuffer? = buffer(from: ciImage)

filterContext?.render(ciImage, to: filteredBuffer!)

let success = self.assetWriterPixelBufferInput?.append(filteredBuffer!, withPresentationTime: self.currentSampleTime!)

Notice that the buffer function is now passed the ciImage. I reworked the buffer function to take a CIImage, and was able to get rid of a lot of what was inside it:

func buffer(from image: CIImage) -> CVPixelBuffer? {
    let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue, kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
    var pixelBuffer : CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(image.extent.width), Int(image.extent.height), kCVPixelFormatType_32ARGB, attrs, &pixelBuffer)

    guard (status == kCVReturnSuccess) else {
        return nil
    }

    return pixelBuffer
}
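
Building on the pool advice from the first answer, a further hedged sketch: since the frames are appended through an AVAssetWriterInputPixelBufferAdaptor, its pixelBufferPool property can supply the buffers instead of calling CVPixelBufferCreate for every frame. This assumes assetWriterPixelBufferInput is the adaptor from the question and that writing has already started (the pool is nil until then):

func pooledBuffer() -> CVPixelBuffer? {
    // The adaptor's pool is pre-configured to match the writer input's settings.
    guard let pool = assetWriterPixelBufferInput?.pixelBufferPool else { return nil }
    var pixelBuffer: CVPixelBuffer?
    CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &pixelBuffer)
    return pixelBuffer
}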