Cropping a CMSampleBuffer and processing it without converting to CGImage

Asked: 2019-02-20 18:11:23

Tags: ios core-graphics metal core-image core-video

I have been following Apple's live stream camera editor sample code to do real-time video editing.

So far so good, but I need a way to crop the sample buffer into 4 pieces and then process each piece with a different CIFilter. For example, if the image is 1000x1000, I want to crop the CMSampleBuffer into four 500x500 images, apply a distinct filter to each one, convert them back into a CMSampleBuffer, and display the result in a Metal view. Here is the code I have so far; it crops the CMSampleBuffer via a CGContext, but I cannot convert the result back into a CMSampleBuffer:

    // Inside the capture callback, with sampleBuffer as the incoming frame
    let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!

    CVPixelBufferLockBaseAddress(imageBuffer, .readOnly)

    let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)
    let cropWidth = 640
    let cropHeight = 640
    let colorSpace = CGColorSpaceCreateDeviceRGB()

    // Because the context reuses the buffer's memory with the full bytesPerRow,
    // this effectively crops the top-left cropWidth x cropHeight region
    let context = CGContext(data: baseAddress,
                            width: cropWidth,
                            height: cropHeight,
                            bitsPerComponent: 8,
                            bytesPerRow: bytesPerRow,
                            space: colorSpace,
                            bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue | CGBitmapInfo.byteOrder32Little.rawValue)

    // Create the image before unlocking, while the base address is still valid
    let cgImage: CGImage = context!.makeImage()!
    let image = UIImage(cgImage: cgImage)

    CVPixelBufferUnlockBaseAddress(imageBuffer, .readOnly)
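
For clarity, this is the four-way split I have in mind. A minimal sketch of how the quadrant rectangles could be computed, assuming the crops are done in Core Image (whose coordinate system has a bottom-left origin, unlike the memory order the CGContext above works in):

    let width = CVPixelBufferGetWidth(imageBuffer)
    let height = CVPixelBufferGetHeight(imageBuffer)
    let halfW = CGFloat(width) / 2
    let halfH = CGFloat(height) / 2

    // The four quadrants, expressed in Core Image's bottom-left coordinate system
    let quadrants = [
        CGRect(x: 0,     y: halfH, width: halfW, height: halfH), // top-left
        CGRect(x: halfW, y: halfH, width: halfW, height: halfH), // top-right
        CGRect(x: 0,     y: 0,     width: halfW, height: halfH), // bottom-left
        CGRect(x: halfW, y: 0,     width: halfW, height: halfH), // bottom-right
    ]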

I don't need a CGImage. I need a CMSampleBuffer or a CVImageBuffer, so that I can pass it to the func render(pixelBuffer: CVPixelBuffer) -> CVPixelBuffer? method of the FilterRenderer class used in Apple's sample code at this link.
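
For illustration, here is a minimal sketch of the kind of round trip I am after, done entirely through Core Image so that no CGImage is ever created. It assumes a BGRA source buffer and a shared CIContext; filteredQuadrant is a placeholder name of mine, not part of Apple's sample:

    import CoreImage
    import CoreVideo

    let ciContext = CIContext() // expensive to create; reuse it across frames

    // Crops one quadrant out of a pixel buffer, runs a CIFilter over it,
    // and renders the result into a freshly allocated CVPixelBuffer.
    func filteredQuadrant(from source: CVPixelBuffer,
                          rect: CGRect,
                          filterName: String) -> CVPixelBuffer? {
        // Wrap the buffer without copying, crop, and shift the crop back to the origin
        let input = CIImage(cvPixelBuffer: source)
            .cropped(to: rect)
            .transformed(by: CGAffineTransform(translationX: -rect.origin.x,
                                               y: -rect.origin.y))

        guard let filter = CIFilter(name: filterName) else { return nil }
        filter.setValue(input, forKey: kCIInputImageKey)
        guard let output = filter.outputImage else { return nil }

        // Allocate a destination buffer the size of the crop (IOSurface-backed
        // so it can be fed to Metal later)
        var destination: CVPixelBuffer?
        let attrs = [kCVPixelBufferIOSurfacePropertiesKey as String: [String: Any]()] as CFDictionary
        guard CVPixelBufferCreate(kCFAllocatorDefault,
                                  Int(rect.width), Int(rect.height),
                                  kCVPixelFormatType_32BGRA,
                                  attrs, &destination) == kCVReturnSuccess,
              let dst = destination else { return nil }

        // Render straight into the pixel buffer; no CGImage involved
        ciContext.render(output, to: dst)
        return dst
    }

Each returned buffer could then be handed to render(pixelBuffer:) in FilterRenderer, or wrapped back into a CMSampleBuffer with CMSampleBufferCreateReadyWithImageBuffer if the Metal view needs sample buffers rather than pixel buffers.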

0 Answers