I'm trying to upsample a CVPixelBuffer containing depth data to match the resolution of the associated color image.
To do this, I first convert the existing CVPixelBuffer of depth data into a CIImage and upsample that CIImage. The upsampled CIImage is then converted to a CGImage, from which I plan to extract the pixel values into a CVPixelBuffer of disparity floats.
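For context, the upsampling step looks roughly like this (a simplified sketch rather than my exact code; depthPixelBuffer and colorSize are placeholder names):

import CoreImage
import CoreVideo

// Scale the depth image up to the color image's resolution and render it to a CGImage.
func upsampledDepthCGImage(from depthPixelBuffer: CVPixelBuffer, to colorSize: CGSize) -> CGImage? {
    let depthImage = CIImage(cvPixelBuffer: depthPixelBuffer)

    // Scale factors from the depth resolution to the color resolution.
    let scaleX = colorSize.width / depthImage.extent.width
    let scaleY = colorSize.height / depthImage.extent.height
    let scaled = depthImage.transformed(by: CGAffineTransform(scaleX: scaleX, y: scaleY))

    // Render the scaled CIImage to a CGImage.
    let context = CIContext()
    return context.createCGImage(scaled, from: scaled.extent)
}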
Below is the CGImage extension I wrote to accomplish this last part:

extension CGImage {
    // Intended to copy this CGImage's pixel data into a 32-bit disparity pixel buffer.
    func depthBuffer() -> CVPixelBuffer? {
        let frameSize = CGSize(width: self.width, height: self.height)
        // COLOR IS BGRA

        // Create a Float32 disparity pixel buffer at the CGImage's size.
        var pixelBuffer: CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(frameSize.width), Int(frameSize.height), kCVPixelFormatType_DisparityFloat32, nil, &pixelBuffer)
        if status != kCVReturnSuccess {
            return nil
        }

        CVPixelBufferLockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
        let data = CVPixelBufferGetBaseAddress(pixelBuffer!)

        // Draw the CGImage into the buffer through an 8-bit grayscale context.
        let colorSpace = CGColorSpaceCreateDeviceGray()
        let bitmapInfo = CGBitmapInfo(rawValue: CGBitmapInfo.byteOrder32Big.rawValue | CGImageAlphaInfo.premultipliedFirst.rawValue)
        let context = CGContext(data: data, width: Int(frameSize.width), height: Int(frameSize.height), bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer!), space: colorSpace, bitmapInfo: bitmapInfo.rawValue)
        context?.draw(self, in: CGRect(x: 0, y: 0, width: self.width, height: self.height))

        CVPixelBufferUnlockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
        return pixelBuffer
    }
}
But the pixel buffer I get back does not contain depth information. Is my code above wrong? I'm relatively new to Swift and image processing and would appreciate any help. Thank you!