Swift: converting a grayscale image into a CVPixelBuffer containing disparity

Time: 2018-07-07 14:51:29

Tags: swift cvpixelbuffer

I have a grayscale image of depth data that has been upsampled from its original resolution. I am confused about how to convert the (r, g, b) pixel values of the upsampled depth image into floats.

Is there a way to convert the whiteness of a pixel into a float value?

Is there any way I can convert the CVPixelBufferFormatType of the CVPixelBuffer associated with the image?

In other words, is there a way to convert the pixel buffer of a grayscale image into a CVPixelBuffer containing disparity floats?
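
To make the question concrete, here is a minimal sketch of the kind of conversion I am after (not working code from my project). It assumes the grayscale data is available as an 8-bit, one-component CVPixelBuffer, and the minDisparity/maxDisparity parameters are hypothetical placeholders — the real range would have to come from the original AVDepthData, since it is lost once the depth map has been rendered to 8 bits:

import CoreVideo

// Sketch only: build a DisparityFloat32 buffer from an 8-bit, one-component
// grayscale buffer. minDisparity/maxDisparity are hypothetical — the real
// range has to come from the original AVDepthData.
func disparityBuffer(fromGray8 gray: CVPixelBuffer,
                     minDisparity: Float,
                     maxDisparity: Float) -> CVPixelBuffer? {
    let width  = CVPixelBufferGetWidth(gray)
    let height = CVPixelBufferGetHeight(gray)

    var out: CVPixelBuffer? = nil
    let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                     kCVPixelFormatType_DisparityFloat32, nil, &out)
    guard status == kCVReturnSuccess, let disparity = out else { return nil }

    CVPixelBufferLockBaseAddress(gray, .readOnly)
    CVPixelBufferLockBaseAddress(disparity, [])
    defer {
        CVPixelBufferUnlockBaseAddress(disparity, [])
        CVPixelBufferUnlockBaseAddress(gray, .readOnly)
    }

    let srcRowBytes = CVPixelBufferGetBytesPerRow(gray)
    let dstRowBytes = CVPixelBufferGetBytesPerRow(disparity)
    guard let srcBase = CVPixelBufferGetBaseAddress(gray),
          let dstBase = CVPixelBufferGetBaseAddress(disparity) else { return nil }

    for y in 0..<height {
        let srcRow = srcBase.advanced(by: y * srcRowBytes).assumingMemoryBound(to: UInt8.self)
        let dstRow = dstBase.advanced(by: y * dstRowBytes).assumingMemoryBound(to: Float32.self)
        for x in 0..<width {
            // Whiteness 0...255 -> 0...1 -> assumed disparity range.
            let normalized = Float(srcRow[x]) / 255.0
            dstRow[x] = minDisparity + normalized * (maxDisparity - minDisparity)
        }
    }
    return disparity
}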

I use the following code to extract a CVPixelBuffer from the CGImage representation of the upsampled depth data:

import CoreGraphics
import CoreVideo

extension CGImage {

    /// Renders the image into a newly created 32BGRA CVPixelBuffer.
    func pixelBuffer() -> CVPixelBuffer? {
        let frameSize = CGSize(width: self.width, height: self.height)

        // COLOR IS BGRA
        var pixelBuffer: CVPixelBuffer? = nil
        let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                         Int(frameSize.width),
                                         Int(frameSize.height),
                                         kCVPixelFormatType_32BGRA,
                                         nil,
                                         &pixelBuffer)
        guard status == kCVReturnSuccess, let buffer = pixelBuffer else {
            return nil
        }

        CVPixelBufferLockBaseAddress(buffer, [])
        defer { CVPixelBufferUnlockBaseAddress(buffer, []) }

        // Wrap the buffer's memory in a CGContext and draw the image into it.
        // byteOrder32Little + premultipliedFirst is the CGBitmapInfo combination
        // whose in-memory byte order matches kCVPixelFormatType_32BGRA.
        let data = CVPixelBufferGetBaseAddress(buffer)
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        let bitmapInfo = CGBitmapInfo(rawValue: CGBitmapInfo.byteOrder32Little.rawValue
                                              | CGImageAlphaInfo.premultipliedFirst.rawValue)
        guard let context = CGContext(data: data,
                                      width: Int(frameSize.width),
                                      height: Int(frameSize.height),
                                      bitsPerComponent: 8,
                                      bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                                      space: rgbColorSpace,
                                      bitmapInfo: bitmapInfo.rawValue) else {
            return nil
        }

        context.draw(self, in: CGRect(x: 0, y: 0, width: self.width, height: self.height))

        return buffer
    }
}
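
For reference, a follow-on sketch under the assumption that the 32BGRA buffer really does come from a grayscale image, so B, G and R carry the same value: reading the blue byte of each 4-byte pixel gives the whiteness, normalized here to a Float in 0...1, which could then be mapped onto a disparity range as in the sketch above. The helper name normalizedWhiteness is just an illustration:

import CoreVideo

// Assumes the 32BGRA buffer produced by pixelBuffer() above. Because the image
// is grayscale, B, G and R hold the same value, so the blue byte of each
// 4-byte pixel is the "whiteness"; it is returned normalized to 0...1.
func normalizedWhiteness(of bgraBuffer: CVPixelBuffer) -> [Float] {
    CVPixelBufferLockBaseAddress(bgraBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(bgraBuffer, .readOnly) }

    let width    = CVPixelBufferGetWidth(bgraBuffer)
    let height   = CVPixelBufferGetHeight(bgraBuffer)
    let rowBytes = CVPixelBufferGetBytesPerRow(bgraBuffer)
    guard let base = CVPixelBufferGetBaseAddress(bgraBuffer) else { return [] }

    var values = [Float]()
    values.reserveCapacity(width * height)
    for y in 0..<height {
        let row = base.advanced(by: y * rowBytes).assumingMemoryBound(to: UInt8.self)
        for x in 0..<width {
            values.append(Float(row[x * 4]) / 255.0)   // blue channel of the BGRA pixel
        }
    }
    return values
}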

0 Answers