Apple's new CoreML framework has a prediction function that takes a CVPixelBuffer. To classify a UIImage, you must convert between the two. Conversion code I got from an Apple engineer:
// image (a UIImage) has been defined earlier
var pixelBuffer: CVPixelBuffer? = nil

CVPixelBufferCreate(kCFAllocatorDefault, Int(image.size.width), Int(image.size.height), kCVPixelFormatType_OneComponent8, nil, &pixelBuffer)
CVPixelBufferLockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))

let colorspace = CGColorSpaceCreateDeviceGray()
let bitmapContext = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer!), width: Int(image.size.width), height: Int(image.size.height), bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer!), space: colorspace, bitmapInfo: 0)!

bitmapContext.draw(image.cgImage!, in: CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))

// Unlock once drawing is done, balancing the lock above
CVPixelBufferUnlockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
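Once the buffer is filled, it can be handed to the prediction function mentioned at the top. A minimal sketch, assuming a hypothetical Xcode-generated model class named MyGrayscaleModel whose input parameter is called "image" (both names are assumptions; Xcode derives them from your .mlmodel file):

import CoreML

// Hypothetical wrapper: MyGrayscaleModel and its prediction(image:) signature
// are generated by Xcode from the .mlmodel; the names here are placeholders.
func classify(_ buffer: CVPixelBuffer) throws -> String {
    let model = try MyGrayscaleModel(configuration: MLModelConfiguration())
    let output = try model.prediction(image: buffer)
    return output.classLabel
}

The generated prediction method throws, so call it with try and surface errors to the caller rather than force-unwrapping.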
This solution is fast and works for grayscale images. The changes that have to be made depending on the image type are:

- kCVPixelFormatType_OneComponent8 to another OSType (kCVPixelFormatType_32ARGB for RGB)
- colorspace to another CGColorSpace (CGColorSpaceCreateDeviceRGB for RGB)
- bitsPerComponent stays 8 per channel for 32ARGB; it is the bits per pixel that become 32 (four 8-bit channels)
- bitmapInfo to a nonzero CGBitmapInfo value (kCGBitmapByteOrderDefault is the default)

Answer 0 (score: 23):
You can check out this tutorial https://www.hackingwithswift.com/whats-new-in-ios-11 — the code is in Swift 4.
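Applying the four substitutions listed above for an RGB image might look like the sketch below. This is an illustration under stated assumptions, not code from the answer: the function name is mine, error handling is minimal, and CGImageAlphaInfo.noneSkipFirst is one reasonable choice of nonzero bitmapInfo for a 32ARGB buffer whose alpha byte is unused.

import UIKit
import CoreVideo

// Sketch: the grayscale conversion above, adapted for RGB with the listed
// substitutions (32ARGB pixel format, device RGB color space, nonzero bitmapInfo).
func pixelBufferRGB(from image: UIImage) -> CVPixelBuffer? {
    var pixelBuffer: CVPixelBuffer? = nil
    let width = Int(image.size.width)
    let height = Int(image.size.height)

    CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                        kCVPixelFormatType_32ARGB, nil, &pixelBuffer)
    guard let buffer = pixelBuffer else { return nil }

    CVPixelBufferLockBaseAddress(buffer, CVPixelBufferLockFlags(rawValue: 0))
    defer { CVPixelBufferUnlockBaseAddress(buffer, CVPixelBufferLockFlags(rawValue: 0)) }

    // bitsPerComponent stays 8: the 32 bits per pixel come from four 8-bit channels.
    // noneSkipFirst treats the leading byte as padding, matching 32ARGB without alpha.
    guard let context = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                                  width: width, height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue),
          let cgImage = image.cgImage else { return nil }

    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))
    return buffer
}

The defer block guarantees the buffer is unlocked on every exit path, including the early returns.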