I'm developing an iPad app that has several UIImageViews in a view controller. Each image has some transparent regions. When the user taps one of the images, I want to test whether the tapped area of the image is opaque, and if it is, perform some action.
After searching, I concluded that I have to access the image's raw pixel data and check the alpha value at the point the user tapped.
I used the solution found here, and it helped a lot. I modified the code so that if the point the user taps is transparent (alpha < 1) it prints 0, else it prints 1. However, the results at runtime are not accurate: I sometimes get 0 when the tapped point is opaque, and vice versa. I think there is a problem with the byteIndex value; I'm not sure it returns the color data for the point the user actually tapped.
Here is my code:
CGPoint touchPoint;

- (void)viewDidLoad
{
    [super viewDidLoad];
    [logo addGestureRecognizer:[[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleSingleTap:)]];
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [[event allTouches] anyObject];
    touchPoint = [touch locationInView:self.view];
}

- (void)handleSingleTap:(UITapGestureRecognizer *)recognizer
{
    int x = touchPoint.x;
    int y = touchPoint.y;
    [self getRGBAsFromImage:img atX:x andY:y];
}
- (void)getRGBAsFromImage:(UIImage*)image atX:(int)xx andY:(int)yy {
    // First get the image into your data buffer
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = (unsigned char*) calloc(height * width * 4, sizeof(unsigned char));
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    // Now your rawData contains the image data in the RGBA8888 pixel format.
    int byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
    CGFloat alpha = (rawData[byteIndex + 3] * 1.0) / 255.0;
    byteIndex += 4;
    if (alpha < 1) {
        NSLog(@"0");
        // here I should add the action I want
    }
    else NSLog(@"1");
    free(rawData);
}
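In case it matters, here is a minimal sketch of the kind of coordinate conversion I suspect is missing: touchPoint comes from locationInView:self.view, while byteIndex expects pixel coordinates of the CGImage. The helper name is mine, and it assumes the image view's contentMode is UIViewContentModeScaleToFill (other modes need their own math); the point would also have to be taken relative to the image view itself:

// Hypothetical helper: maps a point in the image view's coordinate space
// to pixel coordinates in its CGImage. Assumes UIViewContentModeScaleToFill.
- (CGPoint)pixelPointForViewPoint:(CGPoint)viewPoint inImageView:(UIImageView *)imageView
{
    CGImageRef cgImage = imageView.image.CGImage;
    // Ratio of bitmap pixels to view points along each axis
    CGFloat scaleX = (CGFloat)CGImageGetWidth(cgImage)  / imageView.bounds.size.width;
    CGFloat scaleY = (CGFloat)CGImageGetHeight(cgImage) / imageView.bounds.size.height;
    return CGPointMake(viewPoint.x * scaleX, viewPoint.y * scaleY);
}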
Thanks for your suggestions.
Answer 0 (score: 2)
That approach is wasteful, because it draws the complete image on every tap, and it also depends on the exact byte layout.
I suggest drawing into a 1x1 context instead and reading only that pixel's alpha:

- (void)handleSingleTap:(UITapGestureRecognizer *)recognizer
{
    int x = touchPoint.x;
    int y = touchPoint.y;
    CGFloat alpha = [self getAlphaFromImage:img atX:x andY:y];
    if (alpha < 1)
        …
}
- (CGFloat)getAlphaFromImage:(UIImage*)image atX:(NSInteger)xx andY:(NSInteger)yy {
    // Cancel if point is outside image coordinates
    if (!CGRectContainsPoint(CGRectMake(0.0f, 0.0f, image.size.width, image.size.height), CGPointMake(xx, yy))) {
        return 0;
    }

    // Create a 1x1 pixel byte array and bitmap context to draw the pixel into.
    // Reference: http://stackoverflow.com/questions/1042830/retrieving-a-pixel-alpha-value-for-a-uiimage
    NSInteger pointX = xx;
    NSInteger pointY = yy;
    CGImageRef cgImage = image.CGImage;
    NSUInteger width = image.size.width;
    NSUInteger height = image.size.height;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    int bytesPerPixel = 4;
    int bytesPerRow = bytesPerPixel * 1;  // the context is only one pixel wide
    NSUInteger bitsPerComponent = 8;
    unsigned char pixelData[4] = { 0, 0, 0, 0 };
    CGContextRef context = CGBitmapContextCreate(pixelData,
                                                 1,
                                                 1,
                                                 bitsPerComponent,
                                                 bytesPerRow,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    // Copy blend mode so the drawn pixel replaces, rather than blends with, the buffer
    CGContextSetBlendMode(context, kCGBlendModeCopy);

    // Draw the pixel we are interested in onto the bitmap context
    CGContextTranslateCTM(context, -pointX, pointY - (CGFloat)height);
    CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, (CGFloat)width, (CGFloat)height), cgImage);
    CGContextRelease(context);

    CGFloat alpha = (CGFloat)pixelData[3] / 255.0f;
    return alpha;
}
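Wiring it up could look like the following minimal sketch. It reads the tap location straight from the recognizer, relative to the tapped image view, so no separate touchesBegan: bookkeeping is needed. It assumes the view shows the image at its natural point size; if the image is scaled by the view, the point has to be mapped into image coordinates first:

- (void)handleSingleTap:(UITapGestureRecognizer *)recognizer
{
    // recognizer.view is the image view the recognizer was added to
    UIImageView *imageView = (UIImageView *)recognizer.view;
    // Location relative to the image view, not the whole screen
    CGPoint point = [recognizer locationInView:imageView];
    CGFloat alpha = [self getAlphaFromImage:imageView.image atX:(NSInteger)point.x andY:(NSInteger)point.y];
    if (alpha < 1) {
        // transparent area: ignore the tap or pass it through
    } else {
        // opaque area: perform the action
    }
}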