How do I get the bitmap context of a Retina display and find a specific pixel in it?

Asked: 2014-05-02 18:41:56

Tags: ios c xcode core-graphics

I have been trying to solve this for a while without success. I am writing an app that returns the color at a given point. When I enter a point it sometimes works, but it lands on the wrong pixel, seemingly at random, so my guess is that the byte offset is computed incorrectly. A quick calculation shows that on a Retina iPhone, 2,908,160 bytes are allocated for a 640 × 1136 pixel view. However, when the point is at (320, 568), the offset reaches well past that, which suggests the bug is here. Is there any way to fix this?

    int offset = (4 * scaleFactor) * ((w * round(point.y)) + round(point.x));

I would also like to know how to call free(data). When I try to call it, the app crashes with an error message along the lines of "you cannot free data that was not allocated with malloc".

I would be very happy to get an answer that tells me how to fix this, because it is driving me crazy.

I would like you to look specifically at:

    unsigned char* data = CGBitmapContextGetData (cacheContext);
    if (data != NULL) {
        // offset locates the pixel in the data from x,y.
        // 4 for 4 bytes of data per pixel, w is width of one row of data.
        int offset = (4*scaleFactor)*((w*round(point.y))+round(point.x));

        // if (data) { free(data); }
    data = nil;

Full code:

- (BOOL) initContext:(CGSize)size {

    float scaleFactor = [[UIScreen mainScreen] scale];
    // scaleFactor = 2;

    int bitmapByteCount;
    int bitmapBytesPerRow;

    // Declare the number of bytes per row. Each pixel in the bitmap in this
    // example is represented by 4 bytes; 8 bits each of red, green, blue, and
    // alpha.
    bitmapBytesPerRow = (size.width * 4);
    bitmapByteCount = (bitmapBytesPerRow * size.height)*scaleFactor*scaleFactor;

    // Allocate memory for image data. This is the destination in memory
    // where any drawing to the bitmap context will be rendered.
    cacheBitmap = malloc( bitmapByteCount );
    if (cacheBitmap == NULL){
        return NO;
    }

    CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrderDefault;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    cacheContext = CGBitmapContextCreate (cacheBitmap, size.width*scaleFactor, size.height*scaleFactor, 8, bitmapBytesPerRow*scaleFactor, colorSpace, bitmapInfo);

    CGContextScaleCTM(cacheContext, scaleFactor, scaleFactor);
    CGColorSpaceRelease(colorSpace);

    CGContextSetRGBFillColor(cacheContext, 1, 0, 0, 0.0);
    CGContextFillRect(cacheContext, (CGRect){CGPointZero, CGSizeMake(size.height*scaleFactor, size.width*scaleFactor)});

    return YES;
}

- (UIColor*) getPixelColorForContext:(CGContextRef)cgctx size:(CGSize)ctxSize atLocation:(CGPoint)point
{
    UIColor* color = nil;
    // Create off-screen bitmap context to draw the image into. Format ARGB is 4 bytes for each pixel: Alpha, Red, Green, Blue
    // if (cgctx == NULL) { return nil; /* error */ }

    float scaleFactor = [[UIScreen mainScreen] scale];
    //scaleFactor = 2;

    NSLog(@"%.0f",framsize.width);
    size_t w = framsize.width*scaleFactor*scaleFactor;

    // Now we can get a pointer to the image data associated with the bitmap
    // context.
    unsigned char* data = CGBitmapContextGetData (cacheContext);
    if (data != NULL) {
        //offset locates the pixel in the data from x,y.
        //4 for 4 bytes of data per pixel, w is width of one row of data.
        int offset = (4*scaleFactor)*((w*round(point.y))+round(point.x));
        int alpha =  data[offset];
        int red = data[offset+1];
        int green = data[offset+2];
        int blue = data[offset+3];
        NSLog(@"offset: %i colors: RGB A %i %i %i  %i",offset,red,green,blue,alpha);
        color = [UIColor colorWithRed:(red/255.0f) green:(green/255.0f) blue:(blue/255.0f) alpha:(alpha/255.0f)];
    }

    // Free image data memory for the context
    //  if (data) { free(data); }
    data = nil;

    return color;
}

0 Answers:

No answers yet.