Detect the leftmost black point in a UIImageView in iOS

Time: 2014-05-26 09:02:45

Tags: ios objective-c uiimageview position core-graphics

I am having trouble finding the leftmost point/frame/position of black in a UIImageView. I am trying to detect black pixels from the UIImageView as follows:

-(void)Black_findcolor
{
    UIImage *image = imgview.image; // imgview is the image view whose image we scan for black pixels

    CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
    int myWidth = (int)CGImageGetWidth(image.CGImage);
    int myHeight = (int)CGImageGetHeight(image.CGImage);
    size_t bytesPerRow = CGImageGetBytesPerRow(image.CGImage); // rows can be padded, so don't assume width * 4
    const UInt8 *pixels = CFDataGetBytePtr(pixelData);
    UInt8 blackThreshold = 10;
    int bytesPerPixel_ = 4;
    int x = 0, y = 0;
    for (x = 0; x < myWidth; x++)
    {
        for (y = 0; y < myHeight; y++)
        {
            int pixelStartIndex = (int)(y * bytesPerRow) + x * bytesPerPixel_;
            // NOTE: this assumes an alpha-first (ARGB) byte layout; check
            // CGImageGetAlphaInfo()/CGImageGetBitmapInfo() for the actual channel order.
            UInt8 alphaVal = pixels[pixelStartIndex];
            UInt8 redVal   = pixels[pixelStartIndex + 1];
            UInt8 greenVal = pixels[pixelStartIndex + 2];
            UInt8 blueVal  = pixels[pixelStartIndex + 3];

            // A pixel counts as black only when all three color channels are below the
            // threshold (and it is not fully transparent).
            if (redVal < blackThreshold && greenVal < blackThreshold && blueVal < blackThreshold && alphaVal > 0)
            {
                NSLog(@"x = %d, y = %d", x, y);
            }
        }
    }
    CFRelease(pixelData); // CGDataProviderCopyData follows the Copy rule, so release it here
}
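
As a side note on the code above, the raw CGImage bytes are not guaranteed to be in alpha-first order, and rows may be padded. A minimal sketch (assuming the same imgview as above) of how the image's actual layout can be inspected before indexing into the data:

    // Sketch: inspect the CGImage's real layout instead of assuming ARGB.
    CGImageRef cgImage = imgview.image.CGImage;
    size_t bitsPerPixel = CGImageGetBitsPerPixel(cgImage);  // usually 32 for RGBA/BGRA
    size_t bytesPerRow  = CGImageGetBytesPerRow(cgImage);   // may include row padding
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(cgImage);
    CGBitmapInfo bitmapInfo    = CGImageGetBitmapInfo(cgImage);

    NSLog(@"bitsPerPixel: %zu, bytesPerRow: %zu", bitsPerPixel, bytesPerRow);
    NSLog(@"alphaInfo: %u, byteOrder: %u",
          (unsigned)alphaInfo,
          (unsigned)(bitmapInfo & kCGBitmapByteOrderMask));

    // Index a pixel using bytesPerRow rather than width * 4:
    //   pixelStartIndex = y * bytesPerRow + x * (bitsPerPixel / 8);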

From the above I get various x and y pixel values, but I want to turn them into a point, because I have to place a pin on the UIImageView at the leftmost black point.

imgPin1.center = CGPointMake(x, y); // I tried this but did not succeed; imgPin1 is a global image view.

So can anyone give me an idea or suggestion on how to set the frame or position of imgPin1 at the leftmost black pixel?

Please let me know if any other information is needed.

Thanks in advance.

1 Answer:

Answer 0 (score: 1)

EDITED: please see my edited answer below.

    // You need the image resized to the size your imageView actually displays (in case of sizeToFit etc.)
    float oldWidth = yourImageView.image.size.width;
    float scaleFactor = yourImageView.frame.size.width / oldWidth;

    float newHeight = yourImageView.image.size.height * scaleFactor;
    float newWidth = oldWidth * scaleFactor;

    UIGraphicsBeginImageContext(CGSizeMake(newWidth, newHeight));
    [yourImageView.image drawInRect:CGRectMake(0, 0, newWidth, newHeight)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();


    CGSize size = newImage.size;
    int width = size.width;
    int height = size.height;
    int xCoord = width, yCoord = height;
    UInt8 blackThreshold = 10;


    // raw data will be assigned into this array
    uint32_t *pixels = (uint32_t *) malloc(width * height * sizeof(uint32_t));

    // clear the pixels so any transparency is preserved
    memset(pixels, 0, width * height * sizeof(uint32_t));

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // create a context with RGBA pixels (R, G, B, A byte order in memory)
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * sizeof(uint32_t), colorSpace,
                                                 kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace); // the context holds its own reference

    // fill in the pixels array with the resized image so pixel coordinates match the image view
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), [newImage CGImage]);

    for(int y = 0; y < height; y++) {
        for(int x = 0; x < width; x++) {
            uint8_t *rgbaPixel = (uint8_t *) &pixels[y * width + x];

            // convert to grayscale using the luminance weights recommended at
            // http://en.wikipedia.org/wiki/Grayscale#Converting_color_to_grayscale
            // (with 32Big byte order the bytes are R, G, B, A)
            uint32_t gray = 0.3 * rgbaPixel[0] + 0.59 * rgbaPixel[1] + 0.11 * rgbaPixel[2];

            if (gray < blackThreshold)
            {
                if (x<xCoord)
                {
                    xCoord = x;
                    yCoord = y;
                    NSLog(@"X:%d Y:%d",xCoord,yCoord);
                }
            }

        }
    }

    // we're done with the context and pixels
    CGContextRelease(context);
    free(pixels);

    imgPin1.center = CGPointMake(xCoord, yCoord); // assumes imgPin1 is a subview of yourImageView (same coordinate space)

Please note that if you iterate over all the pixels of the original image and try to show the pin on the UIImageView, you will get the X/Y coordinates of the pixel at the original image's size, which do not match the actual coordinates of the image as it is displayed in the UIImageView.
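
For completeness, here is a minimal sketch of that mapping from original-image pixel coordinates to view coordinates, assuming the image view's contentMode is UIViewContentModeScaleAspectFit (an assumption; other content modes need different math). pixelPoint stands for the leftmost black pixel found above.

    // Sketch: map a pixel coordinate from the original image to the point where
    // that pixel is drawn inside yourImageView. Assumes aspect-fit scaling.
    CGSize imageSize = yourImageView.image.size;
    CGSize viewSize  = yourImageView.bounds.size;

    CGFloat scale = MIN(viewSize.width / imageSize.width,
                        viewSize.height / imageSize.height);

    // Aspect-fit centers the drawn image inside the view.
    CGFloat offsetX = (viewSize.width  - imageSize.width  * scale) / 2.0;
    CGFloat offsetY = (viewSize.height - imageSize.height * scale) / 2.0;

    CGPoint viewPoint = CGPointMake(pixelPoint.x * scale + offsetX,
                                    pixelPoint.y * scale + offsetY);

    // If imgPin1 is not a subview of yourImageView, convert into its superview's space.
    imgPin1.center = [imgPin1.superview convertPoint:viewPoint fromView:yourImageView];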