I'm testing several methods for rescaling a UIImage. I've tested every method posted here and measured the time each one takes to resize a picture.
1) UIGraphicsBeginImageContextWithOptions & UIImage -drawInRect:
let image = UIImage(contentsOfFile: self.URL.path!)! // force-unwrap the failable initializer
let size = CGSizeApplyAffineTransform(image.size, CGAffineTransformMakeScale(0.5, 0.5))
let hasAlpha = false
let scale: CGFloat = 0.0 // Automatically use scale factor of main screen
UIGraphicsBeginImageContextWithOptions(size, !hasAlpha, scale)
image.drawInRect(CGRect(origin: CGPointZero, size: size))
let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
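For reference, here is a minimal sketch of method 1 wrapped into a reusable helper (the function name and signature are my own, not from any framework):

// Hypothetical convenience wrapper around method 1 (Swift 2 syntax).
// Returns nil if the graphics context fails to produce an image.
func imageScaled(image: UIImage, by factor: CGFloat) -> UIImage? {
    let size = CGSizeApplyAffineTransform(image.size, CGAffineTransformMakeScale(factor, factor))
    UIGraphicsBeginImageContextWithOptions(size, true, 0.0) // opaque; 0.0 picks up the main screen's scale
    defer { UIGraphicsEndImageContext() }
    image.drawInRect(CGRect(origin: CGPointZero, size: size))
    return UIGraphicsGetImageFromCurrentImageContext()
}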
2) CGBitmapContextCreate & CGContextDrawImage
let cgImage = UIImage(contentsOfFile: self.URL.path!)!.CGImage
let width = CGImageGetWidth(cgImage) / 2
let height = CGImageGetHeight(cgImage) / 2
let bitsPerComponent = CGImageGetBitsPerComponent(cgImage)
let bytesPerRow = CGImageGetBytesPerRow(cgImage)
let colorSpace = CGImageGetColorSpace(cgImage)
let bitmapInfo = CGImageGetBitmapInfo(cgImage)
let context = CGBitmapContextCreate(nil, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo.rawValue)
CGContextSetInterpolationQuality(context, CGInterpolationQuality.High)
CGContextDrawImage(context, CGRect(origin: CGPointZero, size: CGSize(width: CGFloat(width), height: CGFloat(height))), cgImage)
let scaledImage = CGBitmapContextCreateImage(context).flatMap { UIImage(CGImage: $0) }
3) CGImageSourceCreateThumbnailAtIndex
import ImageIO
if let imageSource = CGImageSourceCreateWithURL(self.URL, nil) {
    let options: [NSString: NSObject] = [
        kCGImageSourceThumbnailMaxPixelSize: max(size.width, size.height) / 2.0, // `size` is the original image's size
        kCGImageSourceCreateThumbnailFromImageAlways: true
    ]
    let scaledImage = CGImageSourceCreateThumbnailAtIndex(imageSource, 0, options).flatMap { UIImage(CGImage: $0) }
}
4) Lanczos resampling with Core Image
let image = CIImage(contentsOfURL: self.URL)
let filter = CIFilter(name: "CILanczosScaleTransform")!
filter.setValue(image, forKey: "inputImage")
filter.setValue(0.5, forKey: "inputScale")
filter.setValue(1.0, forKey: "inputAspectRatio")
let outputImage = filter.valueForKey("outputImage") as! CIImage
let context = CIContext(options: [kCIContextUseSoftwareRenderer: false])
let scaledImage = UIImage(CGImage: context.createCGImage(outputImage, fromRect: outputImage.extent))
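A caveat about method 4: creating a CIContext is expensive, and Apple's Core Image documentation recommends creating one and reusing it across renders. If the context creation sits inside the timed region, it can easily dominate the measurement. A minimal sketch, assuming the context and helper live in the benchmarking class (both names are mine):

// Create the CIContext once and reuse it; context creation is costly
// and should not be re-done (or timed) for every image.
lazy var ciContext: CIContext = CIContext(options: [kCIContextUseSoftwareRenderer: false])

func lanczosScaled(image: CIImage, scale: CGFloat) -> UIImage {
    let filter = CIFilter(name: "CILanczosScaleTransform")!
    filter.setValue(image, forKey: kCIInputImageKey)
    filter.setValue(scale, forKey: kCIInputScaleKey)
    filter.setValue(1.0, forKey: kCIInputAspectRatioKey)
    let output = filter.valueForKey(kCIOutputImageKey) as! CIImage
    return UIImage(CGImage: self.ciContext.createCGImage(output, fromRect: output.extent))
}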
5) vImage in Accelerate
let image = UIImage(contentsOfFile: self.URL.path!)! // also used below for size and orientation
let cgImage = image.CGImage
// create a source buffer
var format = vImage_CGImageFormat(bitsPerComponent: 8, bitsPerPixel: 32, colorSpace: nil,
bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.First.rawValue),
version: 0, decode: nil, renderingIntent: CGColorRenderingIntent.RenderingIntentDefault)
var sourceBuffer = vImage_Buffer()
defer {
    free(sourceBuffer.data) // vImageBuffer_InitWithCGImage allocates this buffer
}
var error = vImageBuffer_InitWithCGImage(&sourceBuffer, &format, nil, cgImage, numericCast(kvImageNoFlags))
guard error == kvImageNoError else { return nil }
// create a destination buffer
let scale = UIScreen.mainScreen().scale
let destWidth = Int(image.size.width * 0.5 * scale)
let destHeight = Int(image.size.height * 0.5 * scale)
let bytesPerPixel = CGImageGetBitsPerPixel(image.CGImage) / 8
let destBytesPerRow = destWidth * bytesPerPixel
let destData = UnsafeMutablePointer<UInt8>.alloc(destHeight * destBytesPerRow)
defer {
destData.dealloc(destHeight * destBytesPerRow)
}
var destBuffer = vImage_Buffer(data: destData, height: vImagePixelCount(destHeight), width: vImagePixelCount(destWidth), rowBytes: destBytesPerRow)
// scale the image
error = vImageScale_ARGB8888(&sourceBuffer, &destBuffer, nil, numericCast(kvImageHighQualityResampling))
guard error == kvImageNoError else { return nil }
// create a CGImage from vImage_Buffer
let destCGImage = vImageCreateCGImageFromBuffer(&destBuffer, &format, nil, nil, numericCast(kvImageNoFlags), &error)?.takeRetainedValue()
guard error == kvImageNoError else { return nil }
// create a UIImage
let scaledImage = destCGImage.flatMap { UIImage(CGImage: $0, scale: 0.0, orientation: image.imageOrientation) }
After hours of testing and measuring the time each method took to rescale an image to 100x100, my conclusions are completely different from NSHipster's. First of all, vImage in Accelerate is 200 times slower than the first method, which in my opinion is the poor cousin of the others. The Core Image approach is slow as well. But what intrigues me is that method #1 crushes methods 3, 4 and 5, some of which in theory do their work on the GPU.
Method #3 took 2 seconds to resize a 1024x1024 image down to 100x100. Method #1, by contrast, took 0.01 seconds!
Am I missing something?
Something must be wrong here, or Apple would not have taken the time to write the Accelerate and CIImage machinery.
Note: I'm measuring from the moment the image is already loaded into a variable until the moment a scaled version is saved into another variable. I'm not taking into account the time it takes to read from the file.
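Concretely, the measurement looks roughly like this (a sketch of the setup described above; resize() is a hypothetical stand-in for any one of the five methods):

// Hypothetical benchmark skeleton: file I/O happens before the clock starts.
let image = UIImage(contentsOfFile: self.URL.path!)!   // not timed
let start = CFAbsoluteTimeGetCurrent()
let scaledImage = resize(image)                        // any one of methods 1-5
let elapsed = CFAbsoluteTimeGetCurrent() - start
print("Resizing took \(elapsed) seconds")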
Answer 0 (score 5):
Accelerate can be the slowest approach here for a variety of reasons.
If you really think Accelerate is out of line here, file a bug. Before doing so, however, I would certainly check with an Instruments Time Profile that the majority of your benchmark loop's time is actually being spent in vImageScale.
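Short of Instruments, a crude way to do that check is to time the vImageScale_ARGB8888 call separately from the buffer setup in method 5 (a sketch, reusing the sourceBuffer, destBuffer, format, and cgImage variables defined there):

// Rough timing split for method 5: conversion into a vImage buffer
// versus the scale operation itself.
let t0 = CFAbsoluteTimeGetCurrent()
var error = vImageBuffer_InitWithCGImage(&sourceBuffer, &format, nil, cgImage, numericCast(kvImageNoFlags))
let t1 = CFAbsoluteTimeGetCurrent()
error = vImageScale_ARGB8888(&sourceBuffer, &destBuffer, nil, numericCast(kvImageHighQualityResampling))
let t2 = CFAbsoluteTimeGetCurrent()
print("buffer setup: \(t1 - t0)s, vImageScale: \(t2 - t1)s")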