Swift: Is resizing the image a solution?

Asked: 2015-08-26 05:06:18

Tags: ios swift

I'm analyzing an image taken with the camera. I walk through it pixel by pixel and store each pixel's RGB value in a dictionary, which obviously takes a lot of time. Can I scale the image down without losing the relative proportions of the colors in the image?

Example: the original image has RGB colors with counts like

                     Color           count
                   [0xF0F0F0]         100
                   [0xB0B0B0]          50
                   [0x909090]          20

Then, after scaling the image down by half, the color counts in the scaled image would be:

                     Color           count
                   [0xF0F0F0]          50
                   [0xB0B0B0]          25
                   [0x909090]          10

So, two questions: 1. Is this doable in Swift? 2. If so, how do I scale an image in Swift?
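
For reference, the pixel-by-pixel counting I'm describing looks roughly like this (a minimal sketch in current Swift; the helper name colorCounts and the RGBA8888 buffer layout are just illustrative choices):

    import UIKit

    func colorCounts(of image: UIImage) -> [UInt32: Int] {
        guard let cgImage = image.cgImage else { return [:] }
        let width = cgImage.width
        let height = cgImage.height
        var pixels = [UInt8](repeating: 0, count: width * height * 4)

        // Render the image into a buffer with a known RGBA8888 layout
        pixels.withUnsafeMutableBytes { buffer in
            guard let context = CGContext(data: buffer.baseAddress,
                                          width: width,
                                          height: height,
                                          bitsPerComponent: 8,
                                          bytesPerRow: width * 4,
                                          space: CGColorSpaceCreateDeviceRGB(),
                                          bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) else { return }
            context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        }

        // Tally each pixel as a packed 0xRRGGBB key, e.g. 0xF0F0F0
        var counts = [UInt32: Int]()
        for i in stride(from: 0, to: pixels.count, by: 4) {
            let key = (UInt32(pixels[i]) << 16) | (UInt32(pixels[i + 1]) << 8) | UInt32(pixels[i + 2])
            counts[key] = (counts[key] ?? 0) + 1
        }
        return counts
    }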

5 Answers:

Answer 0 (score: 0)

This is the code I use to resize images taken with the camera; after this step I send them to our server. I'd suggest resizing rather than throwing away colors.

Alternatively, if you don't want to resize, you can convert the image to JPEG and specify a compression quality instead.

Use UIImageJPEGRepresentation instead of UIImagePNGRepresentation.
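
For example, the JPEG step on its own is just this (a minimal sketch; img is assumed to be your UIImage and 0.7 is an arbitrary quality):

    // Produces NSData with the JPEG bytes; the second argument is the compression quality (0.0-1.0)
    let jpegData = UIImageJPEGRepresentation(img, 0.7)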

if let img = value as? UIImage {

                var newWidth = 0
                var newHeight = 0
                let sizeLimit = 700 // max dimension in px

                let originalWidth = Int(img.size.width)
                let originalHeight = Int(img.size.height)

                //Get new size with max w/h = 700
                if originalWidth > originalHeight {
                    //Max width
                    newWidth = sizeLimit
                    newHeight = (originalHeight*sizeLimit)/originalWidth
                }else{
                    newWidth = (originalWidth*sizeLimit)/originalHeight
                    newHeight = sizeLimit
                }

                let newSize = CGSizeMake(CGFloat(newWidth), CGFloat(newHeight))
                UIGraphicsBeginImageContext(newSize)
                img.drawInRect(CGRectMake(0, 0, CGFloat(newWidth), CGFloat(newHeight)))
                let newImg = UIGraphicsGetImageFromCurrentImageContext()
                UIGraphicsEndImageContext()

                //This is NSData (not a UIImage); JPEG lets you set a quality between 0.0 and 1.0
                let imageData = UIImageJPEGRepresentation(newImg, 0.8)
            }

Answer 1 (score: 0)

Scaling the image down will definitely lose some pixels.

To scale an image in Swift:

// origImage is your original UIImage
let itemSize = CGSizeMake(origImage.size.width / 2, origImage.size.height / 2)
UIGraphicsBeginImageContextWithOptions(itemSize, false, UIScreen.mainScreen().scale)
let imageRect = CGRectMake(0.0, 0.0, itemSize.width, itemSize.height)
origImage.drawInRect(imageRect)
let scaledImage = UIGraphicsGetImageFromCurrentImageContext() // your scaled-down image
UIGraphicsEndImageContext()

Answer 2 (score: 0)

I'd say this is probably not possible, because resizing the image changes the actual color values, so it will give you different results.

But maybe you can look at speeding things up while keeping the full-size image. Consider doing the computation on the GPU, possibly with Metal.

Answer 3 (score: 0)

"1. Is this doable in Swift?"

Not sure.

"2. If so, how do I scale an image in Swift?"

If you don't mind lower-level APIs, since you want this done fast but without the GPU, you might want to look at the Accelerate framework, specifically vImage_scale. It also gives you direct access to the RGB data in memory.
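
As an illustration, scaling a raw RGBA buffer with vImage looks roughly like this (a minimal sketch; the helper name scaleDown, the ARGB8888 layout, and the integer factor parameter are my assumptions):

    import Accelerate

    // Scales width*height*4 bytes of ARGB8888 pixel data down by an integer factor.
    func scaleDown(_ srcPixels: inout [UInt8], width: Int, height: Int, factor: Int) -> [UInt8] {
        let dstWidth = width / factor
        let dstHeight = height / factor
        var dstPixels = [UInt8](repeating: 0, count: dstWidth * dstHeight * 4)

        srcPixels.withUnsafeMutableBytes { src in
            dstPixels.withUnsafeMutableBytes { dst in
                var srcBuffer = vImage_Buffer(data: src.baseAddress,
                                              height: vImagePixelCount(height),
                                              width: vImagePixelCount(width),
                                              rowBytes: width * 4)
                var dstBuffer = vImage_Buffer(data: dst.baseAddress,
                                              height: vImagePixelCount(dstHeight),
                                              width: vImagePixelCount(dstWidth),
                                              rowBytes: dstWidth * 4)
                // Resamples the source into the smaller destination buffer
                _ = vImageScale_ARGB8888(&srcBuffer, &dstBuffer, nil, vImage_Flags(kvImageNoFlags))
            }
        }
        return dstPixels
    }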

When I've done something similar in the past, I noticed that scaling actually drops some colors and doesn't represent the original colors exactly. Unfortunately I don't have an example to show, but I'd suggest trying this technique with different scaling ratios and checking whether it gives you satisfactory data.

Answer 4 (score: 0)

func scaleAndRotateImage(image: UIImage, MaxResolution iIntMaxResolution: Int) -> UIImage {
        let kMaxResolution = iIntMaxResolution
        let imgRef = image.cgImage!
        let width: CGFloat = CGFloat(imgRef.width)
        let height: CGFloat = CGFloat(imgRef.height)
        var transform = CGAffineTransform.identity
        var bounds = CGRect.init(x: 0, y: 0, width: width, height: height)

        if Int(width) > kMaxResolution || Int(height) > kMaxResolution {
            let ratio: CGFloat = width / height
            if ratio > 1 {
                bounds.size.width = CGFloat(kMaxResolution)
                bounds.size.height = bounds.size.width / ratio
            }
            else {
                bounds.size.height = CGFloat(kMaxResolution)
                bounds.size.width = bounds.size.height * ratio
            }
        }
        let scaleRatio: CGFloat = bounds.size.width / width
        let imageSize = CGSize.init(width: CGFloat(imgRef.width), height: CGFloat(imgRef.height))

        var boundHeight: CGFloat
        let orient = image.imageOrientation

        switch orient {
        case .up:
            //EXIF = 1
            transform = CGAffineTransform.identity
        case .upMirrored:
            //EXIF = 2
            transform = CGAffineTransform.init(translationX: imageSize.width, y: 0.0)
            transform = transform.scaledBy(x: -1.0, y: 1.0)

        case .down:
            //EXIF = 3
            transform = CGAffineTransform.init(translationX: imageSize.width, y: imageSize.height)
            transform = transform.rotated(by: CGFloat(Double.pi))

        case .downMirrored:
            //EXIF = 4
            transform = CGAffineTransform.init(translationX: 0.0, y: imageSize.height)
            transform = transform.scaledBy(x: 1.0, y: -1.0)
        case .leftMirrored:
            //EXIF = 5
            boundHeight = bounds.size.height
            bounds.size.height = bounds.size.width
            bounds.size.width = boundHeight
            transform = CGAffineTransform.init(translationX: imageSize.height, y: imageSize.width)

            transform = transform.scaledBy(x: -1.0, y: 1.0)
            transform = transform.rotated(by: CGFloat(3.0 * Double.pi / 2.0))
            break
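
        // The remaining EXIF orientations, following the standard orientation transforms
        // (a reconstruction; verify against your own images).
        case .left:
            //EXIF = 6
            boundHeight = bounds.size.height
            bounds.size.height = bounds.size.width
            bounds.size.width = boundHeight
            transform = CGAffineTransform.init(translationX: 0.0, y: imageSize.width)
            transform = transform.rotated(by: CGFloat(3.0 * Double.pi / 2.0))

        case .rightMirrored:
            //EXIF = 7
            boundHeight = bounds.size.height
            bounds.size.height = bounds.size.width
            bounds.size.width = boundHeight
            transform = CGAffineTransform.init(scaleX: -1.0, y: 1.0)
            transform = transform.rotated(by: CGFloat(Double.pi / 2))

        case .right:
            //EXIF = 8
            boundHeight = bounds.size.height
            bounds.size.height = bounds.size.width
            bounds.size.width = boundHeight
            transform = CGAffineTransform.init(translationX: imageSize.height, y: 0.0)
            transform = transform.rotated(by: CGFloat(Double.pi / 2))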

        default: print("Error in processing image")
        }

        UIGraphicsBeginImageContext(bounds.size)
        let context = UIGraphicsGetCurrentContext()
        if orient == .right || orient == .left {
            context?.scaleBy(x: -scaleRatio, y: scaleRatio)
            context?.translateBy(x: -height, y: 0)
        }
        else {
            context?.scaleBy(x: scaleRatio, y: -scaleRatio)
            context?.translateBy(x: 0, y: -height)
        }
        context?.concatenate(transform)
        context?.draw(imgRef, in: CGRect.init(x: 0, y: 0, width: width, height: height))
        let imageCopy = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return imageCopy!
    }
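
A hypothetical call site, assuming photo is the UIImage from the camera and 1024 is an arbitrary cap:

    let corrected = scaleAndRotateImage(image: photo, MaxResolution: 1024)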