How to create depth data and add it to an image?

Date: 2019-05-28 11:31:26

Tags: ios swift avdepthdata

Sorry for duplicating the question How to build AVDepthData manually, but it doesn't have the answer I'm looking for, and I don't have enough reputation to comment there. If you don't mind, I can delete my question later and ask that future answers be posted under that topic instead.

So, my goal is to create depth data and attach it to an arbitrary image. There is an article on how to do this, https://developer.apple.com/documentation/avfoundation/avdepthdata/creating_auxiliary_depth_data_manually, but I don't understand how to implement any of its steps. Rather than posting all of my questions at once, I'll start with the first one.

As a first step, the depth image must be converted per pixel from grayscale to depth or disparity values. I borrowed this snippet from the thread mentioned above:

func buildDepth(image: UIImage) -> AVDepthData? {
    let width = Int(image.size.width)
    let height = Int(image.size.height)
    var maybeDepthMapPixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_DisparityFloat32, nil, &maybeDepthMapPixelBuffer)

    guard status == kCVReturnSuccess, let depthMapPixelBuffer = maybeDepthMapPixelBuffer else {
        return nil
    }

    CVPixelBufferLockBaseAddress(depthMapPixelBuffer, .init(rawValue: 0))

    guard let baseAddress = CVPixelBufferGetBaseAddress(depthMapPixelBuffer) else {
        return nil
    }

    let buffer = unsafeBitCast(baseAddress, to: UnsafeMutablePointer<Float32>.self)

    for i in 0..<width * height {
        buffer[i] = 0 // disparity must be calculated somehow, but set to 0 for testing purposes
    }

    CVPixelBufferUnlockBaseAddress(depthMapPixelBuffer, .init(rawValue: 0))

    let info: [AnyHashable: Any] = [kCGImagePropertyPixelFormat: kCVPixelFormatType_DisparityFloat32,
                                    kCGImagePropertyWidth: image.size.width,
                                    kCGImagePropertyHeight: image.size.height,
                                    kCGImagePropertyBytesPerRow: CVPixelBufferGetBytesPerRow(depthMapPixelBuffer)]

    let metadata = generateMetadata(image: image)
    let dic: [AnyHashable: Any] = [kCGImageAuxiliaryDataInfoDataDescription: info,
                                   // I get an error when converting baseAddress to CFData
                                   kCGImageAuxiliaryDataInfoData: baseAddress as! CFData,
                                   kCGImageAuxiliaryDataInfoMetadata: metadata]

    guard let depthData = try? AVDepthData(fromDictionaryRepresentation: dic) else {
        return nil
    }

    return depthData
}

The article then says to load the base address of the pixel buffer (which holds the disparity map) as CFData and pass it into a CFDictionary as the kCGImageAuxiliaryDataInfoData value. But I get an error when casting baseAddress to CFData. I also tried converting the pixel buffer itself, with no luck. What do I have to pass as kCGImageAuxiliaryDataInfoData? And did I create the disparity buffer correctly in the first place?
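For clarity, here is a sketch of what I currently think that step should look like. This is a guess, not a verified fix: instead of force-casting the raw pointer (a pointer is not a CFData, so `as! CFData` traps), the buffer's bytes would be copied into a `Data`, which bridges to `CFData`. The names `depthMapPixelBuffer`, `info`, and `metadata` refer to the values from the function above.

```swift
// Lock the buffer for read-only access while copying its bytes out.
CVPixelBufferLockBaseAddress(depthMapPixelBuffer, .readOnly)
defer { CVPixelBufferUnlockBaseAddress(depthMapPixelBuffer, .readOnly) }

guard let baseAddress = CVPixelBufferGetBaseAddress(depthMapPixelBuffer) else {
    return nil
}

// Copy the disparity map into an owned Data value.
// CVPixelBufferGetDataSize reports the buffer's total length in bytes.
let depthBytes = Data(bytes: baseAddress,
                      count: CVPixelBufferGetDataSize(depthMapPixelBuffer))

let dic: [AnyHashable: Any] = [
    kCGImageAuxiliaryDataInfoDataDescription: info,
    // Data bridges to CFData, so no forced cast is needed here.
    kCGImageAuxiliaryDataInfoData: depthBytes as CFData,
    kCGImageAuxiliaryDataInfoMetadata: metadata
]
```

Whether this is actually what the article intends, I don't know.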

Beyond this specific question, it would be great if someone could point me to sample code showing how to do the whole thing.

0 Answers:

There are no answers yet.