Comparing two images to check whether they are the same

Time: 2014-02-20 21:11:21

Tags: ios objective-c uiimage nsdata

I have a Profile view with an ImageView where the user can change the picture. I keep both the old image and the new image so I can compare them. I want to know whether they are the same; if they are, I don't need to push the new one to my server.

I tried this, but it doesn't really work:

+ (NSData*)returnImageAsData:(UIImage *)anImage {
    // Get an NSData representation of our images. We use JPEG for the larger image
    // for better compression and PNG for the thumbnail to keep the corner radius transparency
    float i_width = 400.0f;
    float oldWidth = anImage.size.width;
    float scaleFactor = i_width / oldWidth;

    float newHeight = anImage.size.height * scaleFactor;
    float newWidth = oldWidth * scaleFactor;

    UIGraphicsBeginImageContext(CGSizeMake(newWidth, newHeight));
    [anImage drawInRect:CGRectMake(0, 0, newWidth, newHeight)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    NSData *imageData = UIImageJPEGRepresentation(newImage, 0.5f);

    return imageData;
}

+ (BOOL)image:(UIImage *)image1 isEqualTo:(UIImage *)image2
{

    NSData *data1 = [self returnImageAsData:image1];
    NSData *data2 = [self returnImageAsData:image2];

    return [data1 isEqual:data2];
}

Any idea how to check whether the two pictures are the same?

Final result:

+ (NSData*)returnImageAsData:(UIImage *)anImage {
    // Get an NSData representation of our images. We use JPEG for the larger image
    // for better compression and PNG for the thumbnail to keep the corner radius transparency
//    float i_width = 400.0f;
//    float oldWidth = anImage.size.width;
//    float scaleFactor = i_width / oldWidth;
//    
//    float newHeight = anImage.size.height * scaleFactor;
//    float newWidth = oldWidth * scaleFactor;
//    
//    UIGraphicsBeginImageContext(CGSizeMake(newWidth, newHeight));
//    [anImage drawInRect:CGRectMake(0, 0, newWidth, newHeight)];
//    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
//    UIGraphicsEndImageContext();

    NSData *imageData = UIImageJPEGRepresentation(anImage, 0.5f);

    return imageData;
}

+ (BOOL)image:(UIImage *)image1 isEqualTo:(UIImage *)image2
{
    CGSize size1 = image1.size;
    CGSize size2 = image2.size;

    // If the dimensions differ, the images cannot be identical
    if (!CGSizeEqualToSize(size1, size2)) {
        return NO;
    }

    NSData *data1 = UIImagePNGRepresentation(image1);
    NSData *data2 = UIImagePNGRepresentation(image2);

    return [data1 isEqual:data2];
}

3 answers:

Answer 0 (score: 1)

If you want to see whether 2 images are pixel-identical, that should be easy.

Saving the images as JPEG can cause problems, because JPEG is a lossy format.

As others have suggested, first make sure the heights and widths of the two images match. If they don't, stop: the images are different.

If they do match, use a function like UIImagePNGRepresentation() to convert the images to a lossless data format, then use isEqual: on the NSData objects you get back.

If you want to check whether the images are similar (say, 2 photos of the same scene), the problem on your hands is much harder. You would probably have to resort to a package like OpenCV to compare the images.

Edit: I don't know whether UIImage has a custom implementation of isEqual: that you could use to compare two images. I would try that first.

Looking at the docs, UIImage also conforms to NSCoding, so you could use archivedDataWithRootObject: to convert the images to data. That might be faster than encoding them as PNGs.

Finally, you could get pointers to the images' underlying CGImage objects, get their data providers, and compare their byte streams that way.

Answer 1 (score: 1)

Check for identical sizes, then compare hash values...

Answer 2 (score: 1)

Step 1: shrink the size. Step 2: simplify the colors. Step 3: compute the average. Step 4: compare each pixel's gray level. Step 5: compute the hash.

Step by step: The first step is to shrink the size. Reduce the image to 8x8, 64 pixels in total. This step strips out the detail of the picture and keeps only basic information such as structure and light/dark, discarding the differences caused by different sizes and aspect ratios.

-(UIImage *)OriginImage:(UIImage *)image scaleToSize:(CGSize)size
{
    UIGraphicsBeginImageContext(size);

    [image drawInRect:CGRectMake(0, 0, size.width, size.height)];

    UIImage* scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaledImage;
}

Step 2: simplify the colors. Reduce the shrunken picture to grayscale, so every pixel carries only a gray level. (The code below renders into an 8-bit device-gray bitmap.)

-(UIImage *)getGrayImage:(UIImage *)sourceImage
{
    int width = sourceImage.size.width;
    int height = sourceImage.size.height;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGContextRef context = CGBitmapContextCreate (nil,width,height,8,0,colorSpace,kCGImageAlphaNone);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL) {
        return nil;
    }
    CGContextDrawImage(context,CGRectMake(0, 0, width, height), sourceImage.CGImage);
    CGImageRef grayCGImage = CGBitmapContextCreateImage(context);
    UIImage *grayImage = [UIImage imageWithCGImage:grayCGImage];
    CGImageRelease(grayCGImage); // release the CGImage to avoid leaking it
    CGContextRelease(context);
    return grayImage;
}

Step 3 is to compute the average: calculate the mean gray level of all 64 pixels.

-(unsigned char *)grayscalePixels:(UIImage *)image
{
    // The amount of bits per pixel, in this case we are doing grayscale so 1 byte = 8 bits
#define BITS_PER_PIXEL 8
    // The amount of bits per component, in this it is the same as the bitsPerPixel because only 1 byte represents a pixel
#define BITS_PER_COMPONENT (BITS_PER_PIXEL)
    // The amount of bytes per pixel, not really sure why it asks for this as well but it's basically the bitsPerPixel divided by the bits per component (making 1 in this case)
#define BYTES_PER_PIXEL (BITS_PER_PIXEL/BITS_PER_COMPONENT)

    // Define the colour space (in this case it's gray)
    CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceGray();

    // Find out the number of bytes per row (it's just the width times the number of bytes per pixel)
    size_t bytesPerRow = image.size.width * BYTES_PER_PIXEL;
    // Allocate the appropriate amount of memory to hold the bitmap context
    unsigned char* bitmapData = (unsigned char*) malloc(bytesPerRow*image.size.height);

    // Create the bitmap context, we set the alpha to none here to tell the bitmap we don't care about alpha values
    CGContextRef context = CGBitmapContextCreate(bitmapData,image.size.width,image.size.height,BITS_PER_COMPONENT,bytesPerRow,colourSpace,kCGImageAlphaNone);

    // We are done with the colour space now so no point in keeping it around
    CGColorSpaceRelease(colourSpace);

    // Create a CGRect to define the amount of pixels we want
    CGRect rect = CGRectMake(0.0,0.0,image.size.width,image.size.height);
    // Draw the bitmap context using the rectangle we just created as a bounds and the Core Graphics Image as the image source
    CGContextDrawImage(context,rect,image.CGImage);
    // Obtain the pixel data from the bitmap context
    unsigned char* pixelData = (unsigned char*)CGBitmapContextGetData(context);

    // Release the bitmap context because we are done using it
    CGContextRelease(context);

    // The caller is responsible for free()ing the returned buffer
    return pixelData;
#undef BITS_PER_PIXEL
#undef BITS_PER_COMPONENT
#undef BYTES_PER_PIXEL
}

Step 4: compare each pixel's gray level with the average, returning a string of 0s and 1s:

-(NSString *)myHash:(UIImage *)img
{
    unsigned char* pixelData = [self grayscalePixels:img];

    int total = 0;
    int ave = 0;
    for (int i = 0; i < img.size.height; i++) {
        for (int j = 0; j < img.size.width; j++) {
            total += (int)pixelData[(i*((int)img.size.width))+j];
        }
    }
    // For the 8x8 thumbnail this divides by 64 pixels
    ave = total / ((int)img.size.width * (int)img.size.height);
    NSMutableString *result = [[NSMutableString alloc] init];
    for (int i = 0; i < img.size.height; i++) {
        for (int j = 0; j < img.size.width; j++) {
            int a = (int)pixelData[(i*((int)img.size.width))+j];
            if(a >= ave)
            {
                [result appendString:@"1"];
            }
            else
            {
                [result appendString:@"0"];
            }
        }
    }
    free(pixelData); // grayscalePixels handed us ownership of the buffer
    return result;
}

Step 5: compute the hash value. Combine the comparison results from the previous step into a 64-bit integer; this is the fingerprint of the picture. The order in which the bits are combined doesn't matter, as long as every picture uses the same order. Once you have the fingerprints, you can compare different pictures and see how many of the 64 bits differ. In theory this is equivalent to computing the Hamming distance. If no more than 5 bits differ, the two images are very similar; if more than 10 differ, they are two different pictures.

0111111011110011111100111110000111000001100000011110001101111010 1111111111110001111000011110000111000001100000011110000111111011