How do I get the RGB value of a specific pixel in a UIImage?
Answer 0 (score: 89)
Try this very simple code:
I used it to detect walls in my maze game (the only information I needed was the alpha channel, but I've included the code for getting the other colors too):
- (BOOL)isWallPixel:(UIImage *)image xCoordinate:(int)x yCoordinate:(int)y {
    CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
    const UInt8* data = CFDataGetBytePtr(pixelData);

    int pixelInfo = ((image.size.width * y) + x) * 4; // The image is a PNG: 4 bytes per pixel (RGBA)

    //UInt8 red   = data[pixelInfo];     // If you need this info, enable it
    //UInt8 green = data[pixelInfo + 1]; // If you need this info, enable it
    //UInt8 blue  = data[pixelInfo + 2]; // If you need this info, enable it
    UInt8 alpha = data[pixelInfo + 3];   // I need only this info for my maze game
    CFRelease(pixelData);

    //UIColor* color = [UIColor colorWithRed:red/255.0f green:green/255.0f blue:blue/255.0f alpha:alpha/255.0f]; // The pixel color info

    if (alpha) return YES;
    else return NO;
}
Answer 1 (score: 17)
OnTouch:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [[touches allObjects] objectAtIndex:0];
    CGPoint point1 = [touch locationInView:self.view];
    touch = [[event allTouches] anyObject];
    if ([touch view] == imgZoneWheel)
    {
        CGPoint location = [touch locationInView:imgZoneWheel];
        [self getPixelColorAtLocation:location];
        if (alpha == 255)
        {
            NSLog(@"In Image Touch view alpha %d", alpha);
            [self translateCurrentTouchPoint:point1.x :point1.y];
            [imgZoneWheel setImage:[UIImage imageNamed:[NSString stringWithFormat:@"blue%d.png", GrndFild]]];
        }
    }
}
- (UIColor*)getPixelColorAtLocation:(CGPoint)point
{
    UIColor* color = nil;
    CGImageRef inImage;
    inImage = imgZoneWheel.image.CGImage;

    // Create an off-screen bitmap context to draw the image into. Format ARGB is 4 bytes per pixel: Alpha, Red, Green, Blue.
    CGContextRef cgctx = [self createARGBBitmapContextFromImage:inImage];
    if (cgctx == NULL) { return nil; /* error */ }

    size_t w = CGImageGetWidth(inImage);
    size_t h = CGImageGetHeight(inImage);
    CGRect rect = {{0,0},{w,h}};

    // Draw the image into the bitmap context. Once we draw, the memory
    // allocated for the context will contain the raw image data in the
    // specified color space.
    CGContextDrawImage(cgctx, rect, inImage);

    // Now we can get a pointer to the image data associated with the bitmap
    // context.
    unsigned char* data = CGBitmapContextGetData(cgctx);
    if (data != NULL) {
        // offset locates the pixel in the data from x,y:
        // 4 bytes of data per pixel, w is the width of one row of data.
        int offset = 4 * ((w * round(point.y)) + round(point.x));
        alpha = data[offset];
        int red = data[offset+1];
        int green = data[offset+2];
        int blue = data[offset+3];
        color = [UIColor colorWithRed:(red/255.0f) green:(green/255.0f) blue:(blue/255.0f) alpha:(alpha/255.0f)];
    }

    // When finished, release the context and free the image data memory
    // that was allocated for it.
    CGContextRelease(cgctx);
    if (data) { free(data); }

    return color;
}
- (CGContextRef)createARGBBitmapContextFromImage:(CGImageRef)inImage
{
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;

    // Get the image width and height. We'll use the entire image.
    size_t pixelsWide = CGImageGetWidth(inImage);
    size_t pixelsHigh = CGImageGetHeight(inImage);

    // Declare the number of bytes per row. Each pixel in the bitmap in this
    // example is represented by 4 bytes: 8 bits each of alpha, red, green
    // and blue.
    bitmapBytesPerRow = (pixelsWide * 4);
    bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);

    // Use the device RGB color space.
    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL)
    {
        fprintf(stderr, "Error allocating color space\n");
        return NULL;
    }

    // Allocate memory for the image data. This is the destination in memory
    // where any drawing to the bitmap context will be rendered.
    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL)
    {
        fprintf(stderr, "Memory not allocated!");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }

    // Create the bitmap context. We want pre-multiplied ARGB, 8 bits
    // per component. Regardless of what the source image format is
    // (CMYK, grayscale, and so on) it will be converted to the format
    // specified here by CGBitmapContextCreate.
    context = CGBitmapContextCreate(bitmapData,
                                    pixelsWide,
                                    pixelsHigh,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedFirst);
    if (context == NULL)
    {
        free(bitmapData);
        fprintf(stderr, "Context not created!");
    }

    // Make sure to release the color space before returning.
    CGColorSpaceRelease(colorSpace);

    return context;
}
Answer 2 (score: 11)
You can't access the raw data directly, but you can get at it through the image's CGImage. Here is a link to another question that answers your question, along with others you may have about detailed image processing: CGImage
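For illustration, here is a minimal Swift sketch of that idea. It only shows how to reach the raw bytes through CGImage; the assumption that the bitmap uses at least 4 bytes per pixel is not guaranteed and should be checked against cgImage.bitsPerPixel in real code:

import UIKit

func firstPixelBytes(of image: UIImage) -> (UInt8, UInt8, UInt8, UInt8)? {
    // Go through the CGImage to reach the underlying bitmap bytes.
    guard let cgImage = image.cgImage,
          let data = cgImage.dataProvider?.data,
          let bytes = CFDataGetBytePtr(data) else { return nil }
    // Assumes 4 bytes per pixel; verify cgImage.bitsPerPixel and bytesPerRow before indexing further.
    return (bytes[0], bytes[1], bytes[2], bytes[3])
}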
Answer 3 (score: 9)
Here is a generic method for getting the pixel color in a UIImage, based on Minas Petterson's answer:
- (UIColor*)pixelColorInImage:(UIImage*)image atX:(int)x atY:(int)y {
    CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
    const UInt8* data = CFDataGetBytePtr(pixelData);

    int pixelInfo = ((image.size.width * y) + x) * 4; // 4 bytes per pixel

    UInt8 red   = data[pixelInfo + 0];
    UInt8 green = data[pixelInfo + 1];
    UInt8 blue  = data[pixelInfo + 2];
    UInt8 alpha = data[pixelInfo + 3];
    CFRelease(pixelData);

    return [UIColor colorWithRed:red  /255.0f
                           green:green/255.0f
                            blue:blue /255.0f
                           alpha:alpha/255.0f];
}
Note that X and Y may be swapped; this function accesses the underlying bitmap directly and does not take into account any rotation that may be part of the UIImage.
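If the UIImage does carry a rotation (for example a photo with an EXIF orientation), one possible workaround, sketched here as an assumption rather than as part of the answer above, is to redraw the image first so the underlying bitmap matches what is displayed (requires iOS 10+ for UIGraphicsImageRenderer):

import UIKit

func normalizedImage(_ image: UIImage) -> UIImage {
    // Re-render the image so the orientation is baked into the bitmap.
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: image.size))
    }
}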
Answer 4 (score: 8)
Some Swift code based on Minas' answer. I extended UIImage so it's accessible anywhere, and I added some simple logic to guess the image format from the pixel stride (1, 2, 3 or 4 bytes per pixel).
斯威夫特3:
public extension UIImage {
    func getPixelColor(point: CGPoint) -> UIColor {
        guard let pixelData = CGDataProviderCopyData(CGImageGetDataProvider(self.CGImage)) else {
            return UIColor.clearColor()
        }
        let data = CFDataGetBytePtr(pixelData)

        let x = Int(point.x)
        let y = Int(point.y)
        let index = Int(self.size.width) * y + x

        let expectedLengthA = Int(self.size.width * self.size.height)
        let expectedLengthGrayScale = 2 * expectedLengthA
        let expectedLengthRGB = 3 * expectedLengthA
        let expectedLengthRGBA = 4 * expectedLengthA
        let numBytes = CFDataGetLength(pixelData)

        switch numBytes {
        case expectedLengthA:
            return UIColor(red: 0, green: 0, blue: 0, alpha: CGFloat(data[index])/255.0)
        case expectedLengthGrayScale:
            return UIColor(white: CGFloat(data[2 * index]) / 255.0, alpha: CGFloat(data[2 * index + 1]) / 255.0)
        case expectedLengthRGB:
            return UIColor(red: CGFloat(data[3*index])/255.0, green: CGFloat(data[3*index+1])/255.0, blue: CGFloat(data[3*index+2])/255.0, alpha: 1.0)
        case expectedLengthRGBA:
            return UIColor(red: CGFloat(data[4*index])/255.0, green: CGFloat(data[4*index+1])/255.0, blue: CGFloat(data[4*index+2])/255.0, alpha: CGFloat(data[4*index+3])/255.0)
        default:
            // unsupported format
            return UIColor.clearColor()
        }
    }
}
Updated for Swift 4:
func getPixelColor(_ image: UIImage, _ point: CGPoint) -> UIColor {
    let cgImage: CGImage = image.cgImage!
    guard let pixelData = CGDataProvider(data: (cgImage.dataProvider?.data)!)?.data else {
        return UIColor.clear
    }
    let data = CFDataGetBytePtr(pixelData)!

    let x = Int(point.x)
    let y = Int(point.y)
    let index = Int(image.size.width) * y + x

    let expectedLengthA = Int(image.size.width * image.size.height)
    let expectedLengthGrayScale = 2 * expectedLengthA
    let expectedLengthRGB = 3 * expectedLengthA
    let expectedLengthRGBA = 4 * expectedLengthA
    let numBytes = CFDataGetLength(pixelData)

    switch numBytes {
    case expectedLengthA:
        return UIColor(red: 0, green: 0, blue: 0, alpha: CGFloat(data[index])/255.0)
    case expectedLengthGrayScale:
        return UIColor(white: CGFloat(data[2 * index]) / 255.0, alpha: CGFloat(data[2 * index + 1]) / 255.0)
    case expectedLengthRGB:
        return UIColor(red: CGFloat(data[3*index])/255.0, green: CGFloat(data[3*index+1])/255.0, blue: CGFloat(data[3*index+2])/255.0, alpha: 1.0)
    case expectedLengthRGBA:
        return UIColor(red: CGFloat(data[4*index])/255.0, green: CGFloat(data[4*index+1])/255.0, blue: CGFloat(data[4*index+2])/255.0, alpha: CGFloat(data[4*index+3])/255.0)
    default:
        // unsupported format
        return UIColor.clear
    }
}
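A minimal usage sketch (the image name is hypothetical). Note that the function above indexes rows with image.size.width, which is in points, so it is only reliable when the image's scale is 1 and its pixel width matches its point width:

if let image = UIImage(named: "wheel") {
    // Coordinates are offsets into the bitmap, starting at the top-left corner.
    let color = getPixelColor(image, CGPoint(x: 10, y: 20))
    print(color)
}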
Answer 5 (score: 7)
- (UIColor *)colorAtPixel:(CGPoint)point inImage:(UIImage *)image {
    if (!CGRectContainsPoint(CGRectMake(0.0f, 0.0f, image.size.width, image.size.height), point)) {
        return nil;
    }

    // Create a 1x1 pixel byte array and bitmap context to draw the pixel into.
    NSInteger pointX = trunc(point.x);
    NSInteger pointY = trunc(point.y);
    CGImageRef cgImage = image.CGImage;
    NSUInteger width = image.size.width;
    NSUInteger height = image.size.height;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    int bytesPerPixel = 4;
    int bytesPerRow = bytesPerPixel * 1;
    NSUInteger bitsPerComponent = 8;
    unsigned char pixelData[4] = { 0, 0, 0, 0 };
    CGContextRef context = CGBitmapContextCreate(pixelData, 1, 1, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextSetBlendMode(context, kCGBlendModeCopy);

    // Draw the pixel we are interested in onto the bitmap context.
    CGContextTranslateCTM(context, -pointX, pointY - (CGFloat)height);
    CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, (CGFloat)width, (CGFloat)height), cgImage);
    CGContextRelease(context);

    // Convert color values [0..255] to floats [0.0..1.0].
    CGFloat red   = (CGFloat)pixelData[0] / 255.0f;
    CGFloat green = (CGFloat)pixelData[1] / 255.0f;
    CGFloat blue  = (CGFloat)pixelData[2] / 255.0f;
    CGFloat alpha = (CGFloat)pixelData[3] / 255.0f;
    return [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
}
Answer 6 (score: 4)
Swift 5 version
The answers given here are either outdated or incorrect because they do not take the following into account:

- The pixel size of the image can differ from the point size returned by image.size.width / image.size.height.
- Pixel components can be laid out in various orders (for example BGRA instead of RGBA); for instance, the UIView.drawHierarchy(in:afterScreenUpdates:) method can produce BGRA images.
- Color components can be premultiplied by the alpha component and need to be divided by alpha to restore the original color.
- Because of the memory optimizations used by CGImage, the size of a pixel row in bytes can be greater than the pixel width multiplied by 4.

The code below provides a generic Swift 5 solution that gets a pixel's UIColor while handling all of these special cases. The code is optimized for usability and clarity, not for performance.
public extension UIImage {

    var pixelWidth: Int {
        return cgImage?.width ?? 0
    }

    var pixelHeight: Int {
        return cgImage?.height ?? 0
    }

    func pixelColor(x: Int, y: Int) -> UIColor {
        assert(
            0..<pixelWidth ~= x && 0..<pixelHeight ~= y,
            "Pixel coordinates are out of bounds")

        guard
            let cgImage = cgImage,
            let data = cgImage.dataProvider?.data,
            let dataPtr = CFDataGetBytePtr(data),
            let colorSpaceModel = cgImage.colorSpace?.model,
            let componentLayout = cgImage.bitmapInfo.componentLayout
        else {
            assertionFailure("Could not get a pixel of an image")
            return .clear
        }

        assert(
            colorSpaceModel == .rgb,
            "The only supported color space model is RGB")
        assert(
            cgImage.bitsPerPixel == 32 || cgImage.bitsPerPixel == 24,
            "A pixel is expected to be either 4 or 3 bytes in size")

        let bytesPerRow = cgImage.bytesPerRow
        let bytesPerPixel = cgImage.bitsPerPixel / 8
        let pixelOffset = y * bytesPerRow + x * bytesPerPixel

        if componentLayout.count == 4 {
            let components = (
                dataPtr[pixelOffset + 0],
                dataPtr[pixelOffset + 1],
                dataPtr[pixelOffset + 2],
                dataPtr[pixelOffset + 3]
            )

            var alpha: UInt8 = 0
            var red: UInt8 = 0
            var green: UInt8 = 0
            var blue: UInt8 = 0

            switch componentLayout {
            case .bgra:
                alpha = components.3
                red = components.2
                green = components.1
                blue = components.0
            case .abgr:
                alpha = components.0
                red = components.3
                green = components.2
                blue = components.1
            case .argb:
                alpha = components.0
                red = components.1
                green = components.2
                blue = components.3
            case .rgba:
                alpha = components.3
                red = components.0
                green = components.1
                blue = components.2
            default:
                return .clear
            }

            // If chroma components are premultiplied by alpha and the alpha is `0`,
            // keep the chroma components at their current values.
            if cgImage.bitmapInfo.chromaIsPremultipliedByAlpha && alpha != 0 {
                let invUnitAlpha = 255 / CGFloat(alpha)
                red = UInt8((CGFloat(red) * invUnitAlpha).rounded())
                green = UInt8((CGFloat(green) * invUnitAlpha).rounded())
                blue = UInt8((CGFloat(blue) * invUnitAlpha).rounded())
            }

            return .init(red: red, green: green, blue: blue, alpha: alpha)

        } else if componentLayout.count == 3 {
            let components = (
                dataPtr[pixelOffset + 0],
                dataPtr[pixelOffset + 1],
                dataPtr[pixelOffset + 2]
            )

            var red: UInt8 = 0
            var green: UInt8 = 0
            var blue: UInt8 = 0

            switch componentLayout {
            case .bgr:
                red = components.2
                green = components.1
                blue = components.0
            case .rgb:
                red = components.0
                green = components.1
                blue = components.2
            default:
                return .clear
            }

            return .init(red: red, green: green, blue: blue, alpha: UInt8(255))

        } else {
            assertionFailure("Unsupported number of pixel components")
            return .clear
        }
    }
}
public extension UIColor {

    convenience init(red: UInt8, green: UInt8, blue: UInt8, alpha: UInt8) {
        self.init(
            red: CGFloat(red) / 255,
            green: CGFloat(green) / 255,
            blue: CGFloat(blue) / 255,
            alpha: CGFloat(alpha) / 255)
    }
}
public extension CGBitmapInfo {

    enum ComponentLayout {
        case bgra
        case abgr
        case argb
        case rgba
        case bgr
        case rgb

        var count: Int {
            switch self {
            case .bgr, .rgb: return 3
            default: return 4
            }
        }
    }

    var componentLayout: ComponentLayout? {
        guard let alphaInfo = CGImageAlphaInfo(rawValue: rawValue & Self.alphaInfoMask.rawValue) else { return nil }
        let isLittleEndian = contains(.byteOrder32Little)

        if alphaInfo == .none {
            return isLittleEndian ? .bgr : .rgb
        }
        let alphaIsFirst = alphaInfo == .premultipliedFirst || alphaInfo == .first || alphaInfo == .noneSkipFirst

        if isLittleEndian {
            return alphaIsFirst ? .bgra : .abgr
        } else {
            return alphaIsFirst ? .argb : .rgba
        }
    }

    var chromaIsPremultipliedByAlpha: Bool {
        let alphaInfo = CGImageAlphaInfo(rawValue: rawValue & Self.alphaInfoMask.rawValue)
        return alphaInfo == .premultipliedFirst || alphaInfo == .premultipliedLast
    }
}
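A brief usage sketch (the image name and the sample point are hypothetical). Because pixelColor(x:y:) takes pixel coordinates, a point value is scaled up by the image's scale first:

if let image = UIImage(named: "wheel") {
    let point = CGPoint(x: 10, y: 20)      // in points
    let x = Int(point.x * image.scale)     // convert to pixel coordinates
    let y = Int(point.y * image.scale)
    if x < image.pixelWidth && y < image.pixelHeight {
        print(image.pixelColor(x: x, y: y))
    }
}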
Answer 7 (score: 0)
First, create and attach a tap gesture recognizer and allow user interaction:
UITapGestureRecognizer * tapRecognizer = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(tapGesture:)];
[self.label addGestureRecognizer:tapRecognizer];
self.label.userInteractionEnabled = YES;
Now implement the -tapGesture: handler:
- (void)tapGesture:(UITapGestureRecognizer *)recognizer
{
    CGPoint point = [recognizer locationInView:self.label];

    UIGraphicsBeginImageContext(self.label.bounds.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [self.label.layer renderInContext:context];

    int bpr = CGBitmapContextGetBytesPerRow(context);
    unsigned char * data = CGBitmapContextGetData(context);
    if (data != NULL)
    {
        int offset = bpr * round(point.y) + 4 * round(point.x);
        int blue  = data[offset+0];
        int green = data[offset+1];
        int red   = data[offset+2];
        int alpha = data[offset+3];
        NSLog(@"%d %d %d %d", alpha, red, green, blue);

        if (alpha == 0)
        {
            // The tap is outside the text
        }
        else
        {
            // The tap is right on the text
        }
    }

    UIGraphicsEndImageContext();
}
This will work for a UILabel with a transparent background; if that is not what you want, you can compare alpha, red, green and blue against self.label.backgroundColor...
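A hedged sketch of that comparison in Swift (the helper name and the tolerance are illustrative assumptions; it also assumes the background color can be converted to RGB components):

import UIKit

func isBackgroundPixel(red: Int, green: Int, blue: Int, of label: UILabel) -> Bool {
    var bgRed: CGFloat = 0, bgGreen: CGFloat = 0, bgBlue: CGFloat = 0, bgAlpha: CGFloat = 0
    guard let background = label.backgroundColor,
          background.getRed(&bgRed, green: &bgGreen, blue: &bgBlue, alpha: &bgAlpha) else {
        return false
    }
    // Compare the sampled 0...255 components against the background color with a small tolerance.
    let tolerance: CGFloat = 1.0 / 255.0
    return abs(bgRed   - CGFloat(red)   / 255.0) <= tolerance &&
           abs(bgGreen - CGFloat(green) / 255.0) <= tolerance &&
           abs(bgBlue  - CGFloat(blue)  / 255.0) <= tolerance
}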
Answer 8 (score: 0)
A Swift version of Minas' answer:
extension CGImage {

    func pixel(x: Int, y: Int) -> (r: Int, g: Int, b: Int, a: Int)? { // swiftlint:disable:this large_tuple
        guard let pixelData = dataProvider?.data,
              let data = CFDataGetBytePtr(pixelData) else { return nil }

        let pixelInfo = ((width * y) + x) * 4 // 4 bytes per pixel (RGBA)

        let red   = Int(data[pixelInfo])
        let green = Int(data[pixelInfo + 1])
        let blue  = Int(data[pixelInfo + 2])
        let alpha = Int(data[pixelInfo + 3])

        return (red, green, blue, alpha)
    }
}
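A short usage sketch (the image name is hypothetical):

if let cgImage = UIImage(named: "maze")?.cgImage,
   let pixel = cgImage.pixel(x: 10, y: 20) {
    // Components are 0...255; in the original maze example, alpha > 0 meant "wall".
    print(pixel.r, pixel.g, pixel.b, pixel.a)
}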