I'd like to know how to scan an image on the iPhone and analyze the RGB values of each pixel, so that I can ultimately determine the average RGB of the entire image. If anyone could point me in the right direction, I'd be very grateful. I'm new to image analysis and don't know where to start, or whether anything like this is included in the iOS 5 API.
Answer 0 (score: 3)
Just paste this in; I use it to detect the color at the point that was touched.
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    if (self.view.hidden == YES) {
        // Color wheel is hidden, so don't handle this as a color wheel event.
        [[self nextResponder] touchesEnded:touches withEvent:event];
        return;
    }
    UITouch *touch = [touches anyObject];
    CGPoint point = [touch locationInView:self.view]; // where the image was tapped
    UIColor *lastColor = [self getPixelColorAtLocation:point];
    NSLog(@"color %@", lastColor);

    // Show the sampled color in a round swatch on top of the image.
    // imageView and stillImageFilter are instance variables from this answer's own project.
    UIImageView *lbl = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
    lbl.layer.cornerRadius = 50;
    [imageView addSubview:lbl];
    lbl.backgroundColor = lastColor;
    lbl.center = CGPointMake(stillImageFilter.center.x * 320, (stillImageFilter.center.y * 320) - 125);
    NSLog(@"stillImageCenter = %f,%f", stillImageFilter.center.x, stillImageFilter.center.y);
}
- (UIColor *)getPixelColorAtLocation:(CGPoint)point {
    UIColor *color = nil;
    CGImageRef inImage = imageView.image.CGImage;

    // Render the image into an ARGB bitmap so the raw pixel bytes can be read.
    CGContextRef cgctx = [self createARGBBitmapContextFromImage:inImage];
    if (cgctx == NULL) { return nil; /* error creating the bitmap context */ }

    size_t w = CGImageGetWidth(inImage);
    size_t h = CGImageGetHeight(inImage);
    CGRect rect = {{0, 0}, {w, h}};
    CGContextDrawImage(cgctx, rect, inImage);

    unsigned char *data = CGBitmapContextGetData(cgctx);
    if (data != NULL) {
        // 4 bytes per pixel in A R G B order (see createARGBBitmapContextFromImage:).
        // point must be in image-pixel coordinates; convert the touch location first
        // if the image view scales its image.
        size_t offset = 4 * ((w * (size_t)round(point.y)) + (size_t)round(point.x));
        int alpha = data[offset];
        int red   = data[offset + 1];
        int green = data[offset + 2];
        int blue  = data[offset + 3];
        NSLog(@"offset: %zu colors: RGB A %i %i %i %i", offset, red, green, blue, alpha);
        color = [UIColor colorWithRed:(red / 255.0f)
                                green:(green / 255.0f)
                                 blue:(blue / 255.0f)
                                alpha:(alpha / 255.0f)];
    }

    CGContextRelease(cgctx);
    if (data) { free(data); }
    return color;
}
- (CGContextRef)createARGBBitmapContextFromImage:(CGImageRef)inImage {
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    size_t bitmapByteCount;
    size_t bitmapBytesPerRow;

    // Size the buffer: 4 bytes (ARGB) per pixel, one row per image row.
    size_t pixelsWide = CGImageGetWidth(inImage);
    size_t pixelsHigh = CGImageGetHeight(inImage);
    bitmapBytesPerRow = pixelsWide * 4;
    bitmapByteCount   = bitmapBytesPerRow * pixelsHigh;

    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL)
    {
        fprintf(stderr, "Error allocating color space\n");
        return NULL;
    }

    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL)
    {
        fprintf(stderr, "Memory not allocated!\n");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }

    // 8 bits per component, alpha first and premultiplied => A R G B byte order.
    context = CGBitmapContextCreate(bitmapData,
                                    pixelsWide,
                                    pixelsHigh,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    (CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
    if (context == NULL)
    {
        free(bitmapData);
        fprintf(stderr, "Context not created!\n");
    }

    CGColorSpaceRelease(colorSpace);
    return context;
}
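Since the question asks for the average RGB of the whole image rather than a single pixel, here is a minimal sketch that reuses the createARGBBitmapContextFromImage: helper above to walk every pixel and accumulate the channels. The method name averageColorOfImage: is my own, not part of the original answer; the A R G B layout matches the context created above.

- (UIColor *)averageColorOfImage:(UIImage *)image {
    CGImageRef inImage = image.CGImage;
    CGContextRef cgctx = [self createARGBBitmapContextFromImage:inImage];
    if (cgctx == NULL) { return nil; }

    size_t w = CGImageGetWidth(inImage);
    size_t h = CGImageGetHeight(inImage);
    CGContextDrawImage(cgctx, CGRectMake(0, 0, w, h), inImage);

    unsigned char *data = CGBitmapContextGetData(cgctx);
    UIColor *average = nil;
    if (data != NULL && w > 0 && h > 0) {
        unsigned long long redSum = 0, greenSum = 0, blueSum = 0;
        for (size_t y = 0; y < h; y++) {
            for (size_t x = 0; x < w; x++) {
                size_t offset = 4 * (w * y + x); // A R G B
                redSum   += data[offset + 1];
                greenSum += data[offset + 2];
                blueSum  += data[offset + 3];
            }
        }
        CGFloat count = (CGFloat)w * (CGFloat)h;
        average = [UIColor colorWithRed:(redSum / count) / 255.0f
                                  green:(greenSum / count) / 255.0f
                                   blue:(blueSum / count) / 255.0f
                                  alpha:1.0f];
    }

    CGContextRelease(cgctx);
    if (data) { free(data); }
    return average;
}

For a fully opaque photo this is simply the per-channel mean; if the image has transparency, keep in mind that the bitmap above is alpha-premultiplied, so you may prefer to weight each pixel by its alpha instead.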
Answer 1 (score: 0)
Take a look at Camera Programming Topics for iOS - Taking Pictures and Movies; that covers getting a picture into your app.
After that, see: how-to-get-the-rgb-values-for-a-pixel-on-an-image-on-the-iphone
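As a rough sketch of what that document covers (my own example, not part of this answer), presenting a UIImagePickerController is the standard way to get a photo into the app; the pickImage method name is just illustrative.

// Present the camera, falling back to the photo library (e.g. in the simulator).
- (void)pickImage {
    UIImagePickerController *picker = [[UIImagePickerController alloc] init];
    picker.delegate = self; // conform to UIImagePickerControllerDelegate and UINavigationControllerDelegate
    picker.sourceType = [UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]
                            ? UIImagePickerControllerSourceTypeCamera
                            : UIImagePickerControllerSourceTypePhotoLibrary;
    [self presentViewController:picker animated:YES completion:nil];
}

- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    UIImage *picked = [info objectForKey:UIImagePickerControllerOriginalImage];
    // Hand `picked` to the per-pixel analysis code from the other answers.
    [picker dismissViewControllerAnimated:YES completion:nil];
}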
Answer 2 (score: 0)
Getting the CGImage from the UIImage gives you access to this data:
// image is a UIImage; x and y are the pixel coordinates to sample.
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
const UInt8 *data = CFDataGetBytePtr(pixelData);

// Assumes 4 bytes per pixel in RGBA order (typical for a PNG-backed image).
int pixelInfo = ((int)image.size.width * y + x) * 4;

UInt8 red   = data[pixelInfo];
UInt8 green = data[pixelInfo + 1];
UInt8 blue  = data[pixelInfo + 2];
UInt8 alpha = data[pixelInfo + 3];
CFRelease(pixelData);
More info: Getting pixel data from UIImageView -- works on simulator, not device
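One caveat worth adding (mine, not the original answer's): the snippet above assumes the backing CGImage is tightly packed RGBA at 4 bytes per pixel, and image.size.width is in points rather than pixels on Retina images. A safer offset derives the geometry from the CGImage itself, roughly like this:

CGImageRef cgImage = image.CGImage;
size_t bytesPerRow   = CGImageGetBytesPerRow(cgImage);       // row stride, may include padding
size_t bytesPerPixel = CGImageGetBitsPerPixel(cgImage) / 8;  // usually 4 for RGBA/BGRA
size_t pixelInfo     = (size_t)y * bytesPerRow + (size_t)x * bytesPerPixel;

The channel order also depends on the image's bitmap info (RGBA vs. BGRA, alpha first or last), which you can check with CGImageGetBitmapInfo before indexing the components.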