I have two images (img1 and img2). The first is fixed and the second is draggable. img1 has a black outline and the rest of its pixels are white.
When I move img2 it can intersect the first image, and at that point I need to change the pixels of the non-intersecting part. First, I use a method that reads the pixel at a given point. Then I created two methods that collect the point coordinates of each image's frame and save them into two arrays. The purpose of those two methods is to compare the point coordinates of the two images while img2 is being dragged: when I move img2 I pick a position (a point) and compare it against all the points contained in img1; if that point is not contained in img1, I change its pixel color. After that, I compare the two arrays, take the points they do not have in common, and change those pixels' colors. I'm really struggling. Here is a snippet of my code:
// Render the view hierarchy into a 1x1 bitmap at the given point and return
// its RGBA components (0-255) as strings.
- (NSMutableArray *)getPixelColorAtLocation:(CGPoint)point
{
    unsigned char pixel[4] = {0};
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 4, colorSpace, kCGImageAlphaPremultipliedLast);
    CGContextTranslateCTM(context, -point.x, -point.y);
    [self.view.layer renderInContext:context];
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    NSMutableArray *array = [[NSMutableArray alloc] init];
    [array addObject:[NSString stringWithFormat:@"%d", pixel[0]]];
    [array addObject:[NSString stringWithFormat:@"%d", pixel[1]]];
    [array addObject:[NSString stringWithFormat:@"%d", pixel[2]]];
    [array addObject:[NSString stringWithFormat:@"%d", pixel[3]]];
    return array;
}
// Collect the coordinates of every white pixel inside img1's frame
// (v is assumed to be the image view showing img1).
- (NSMutableArray *)getImg1Coordinates
{
    NSMutableArray *coordinates = [[NSMutableArray alloc] init];
    for (int c = v.frame.origin.x; c < v.frame.origin.x + v.frame.size.width; c++) {
        for (int b = v.frame.origin.y; b < v.frame.origin.y + v.frame.size.height; b++) {
            NSArray *pixel = [self getPixelColorAtLocation:CGPointMake(c, b)];
            int red   = [pixel[0] intValue];
            int green = [pixel[1] intValue];
            int blue  = [pixel[2] intValue];
            int alpha = [pixel[3] intValue];
            // keep only the white (non-outline) pixels
            if (red == 255 && green == 255 && blue == 255 && alpha == 255) {
                // store each point as a single "x,y" string so containsObject: can compare whole points
                [coordinates addObject:[NSString stringWithFormat:@"%d,%d", c, b]];
            }
        }
    }
    return coordinates;
}
// Collect every coordinate inside img2's frame (imgView is the draggable image view).
- (NSMutableArray *)getImg2Coordinates
{
    NSMutableArray *coordinates = [[NSMutableArray alloc] init];
    for (int c = imgView.frame.origin.x; c < imgView.frame.origin.x + imgView.frame.size.width; c++) {
        for (int b = imgView.frame.origin.y; b < imgView.frame.origin.y + imgView.frame.size.height; b++) {
            [coordinates addObject:[NSString stringWithFormat:@"%d,%d", c, b]];
        }
    }
    return coordinates;
}
I created the two methods above to store all the point coordinates so I can find the values the two sets do not have in common.
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [[event allTouches] anyObject];
    CGPoint location = [touch locationInView:[self view]];
    // move the draggable image view
    [imgView setCenter:location];

    NSMutableArray *coordImg1 = [self getImg1Coordinates];
    NSMutableArray *coordImg2 = [self getImg2Coordinates];
    for (id obj in coordImg2) {           // each point of img2
        if (![coordImg1 containsObject:obj]) {
            // the point belongs to img2 but not to img1, so its color should change,
            // but changePixels has no way of knowing which pixel to change
            [self changePixels];
        }
    }
}
This is where I run into trouble: how can I actually get hold of the pixels I'm supposed to change? I'm not sure that what I'm doing will eventually give me what I need.
- (void)changePixels
{
    CGImageRef imageRef = [img CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;

    // draw the image into a raw RGBA buffer so individual pixels can be modified
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);

    for (int xx = 0; xx < width; xx++) {
        for (int yy = 0; yy < height; yy++) {
            int byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
            unsigned char alpha = rawData[byteIndex + 3];
            // recolor every fully opaque pixel
            // (note: with premultiplied alpha, component values should not exceed the alpha value)
            if (alpha == 255) {
                rawData[byteIndex]     = 20;   // red
                // green component left unchanged
                rawData[byteIndex + 2] = 140;  // blue
                rawData[byteIndex + 3] = 60;   // alpha
            }
        }
    }

    // build a new image from the modified buffer and clean up
    CGImageRef newImageRef = CGBitmapContextCreateImage(context);
    UIImage *result = [UIImage imageWithCGImage:newImageRef];
    CGImageRelease(newImageRef);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(rawData);

    UIImageView *imgV = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
    [imgV setImage:result];
    [self.view addSubview:imgV];
}
I'm now trying to optimize my code, but at the moment I can only get each image's coordinates; I still can't change the color of the pixels in the non-common part. How should I go about this?
Answer (score: 3)
Rather than trying to do this yourself, pixel by pixel, you can use higher-level UIKit and Core Graphics features. Consider the final image below, which is a combination of an image and a circular path that is used both to (a) mask the image and (b) stroke the path:
In that case, I might:
1. Draw the image at a reduced alpha, so you can see it faintly in the background.
2. Add the mask and redraw the masked portion at full alpha.
3. If you need to see the outline of the mask, stroke that as well.
You can do this with a UIView subclass. First, define a few properties:
#import <UIKit/UIKit.h>
@interface MaskedImageView : UIView
@property (nonatomic, strong) UIBezierPath *path;
@property (nonatomic, strong) UIImage *image;
@property (nonatomic) CGRect imageFrame;
@end
Then you can implement a drawRect that does what you need:
#import "MaskedImageView.h"
@implementation MaskedImageView
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();

    // draw full image at 50%
    [self.image drawInRect:self.imageFrame blendMode:kCGBlendModeNormal alpha:0.5];

    // draw clipped image at 100%
    CGContextSaveGState(context);
    if (self.path) {
        CGContextAddPath(context, [self.path CGPath]);
        CGContextClip(context);
    }
    [self.image drawInRect:self.imageFrame];
    CGContextRestoreGState(context);

    // now stroke path
    [[UIColor blackColor] setStroke];
    [self.path stroke];
}

- (void)setPath:(UIBezierPath *)path
{
    _path = path;
    [self setNeedsDisplay];
}

- (void)setImage:(UIImage *)image
{
    _image = image;
    [self setNeedsDisplay];
}

- (void)setImageFrame:(CGRect)imageFrame
{
    _imageFrame = imageFrame;
    [self setNeedsDisplay];
}
@end
Then you can have your view controller create this subview, like so:
MaskedImageView *view = [[MaskedImageView alloc] initWithFrame:self.view.bounds];
[self.view addSubview:view];
view.backgroundColor = [UIColor whiteColor];
UIImage *image = [UIImage imageNamed:@"kitten.jpeg"];
view.image = image;
view.imageFrame = CGRectMake(10, 100, image.size.width, image.size.height);
UIBezierPath *circle = [UIBezierPath bezierPath];
[circle addArcWithCenter:CGPointMake(100,150) radius:80 startAngle:0 endAngle:M_PI * 2.0 clockwise:YES];
circle.lineWidth = 3.0;
view.path = circle;
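Since the original question drags img2 around in touchesMoved, you might also want the circular mask to follow the finger. Here is a minimal sketch of that idea, assuming the view controller keeps the MaskedImageView in a property (I'm calling it self.maskedView, which is not part of the code above); because setPath: calls setNeedsDisplay, the view redraws the masked portion at the new position automatically:
// hypothetical property assumed to hold the MaskedImageView created above:
// @property (nonatomic, strong) MaskedImageView *maskedView;

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [[event allTouches] anyObject];
    CGPoint location = [touch locationInView:self.maskedView];

    // rebuild the circular mask centered under the finger
    UIBezierPath *circle = [UIBezierPath bezierPath];
    [circle addArcWithCenter:location radius:80 startAngle:0 endAngle:M_PI * 2.0 clockwise:YES];
    circle.lineWidth = 3.0;
    self.maskedView.path = circle;
}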
If you want to render that into a UIImage, the standard technique is:
UIGraphicsBeginImageContextWithOptions(view.bounds.size, YES, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
[view.layer renderInContext:context];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
If you want to save that image:
NSData *data = UIImagePNGRepresentation(newImage);
NSString *documentsPath = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES)[0];
NSString *path = [documentsPath stringByAppendingPathComponent:@"test.png"];
[data writeToFile:path atomically:YES];
Above, I used a single path both for the mask and for drawing the outline. If you really need to use a UIImage mask rather than a path, you can do that too, but you'll probably need one image for the mask and another for the outline. It depends on the effect you want; but if the mask is circular, using a path gives you more flexibility (and is easier to implement).
If you use an image as the boundary (rather than a path), you can produce something like this:
To do that, you'll probably want two images (besides the cute kitten): one (which I'll call outlineImage) that represents the stroke you'll add at the end. In this image I've made the transparent background green, so you can see where my transparent pixels are:
Then I have another image (which I'll call clipImage), a mask that is exactly the same size as the outlineImage above (again, green marks the transparent part):
The drawRect then looks like this:
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();

    // draw full image at 50%
    [self.image drawInRect:self.imageFrame blendMode:kCGBlendModeNormal alpha:0.5];

    // draw clipped image at 100%
    if (self.clipImage) {
        CGContextSaveGState(context);

        // flip the coordinate system, because CGContextClipToMask and CGContextDrawImage
        // work in Core Graphics (bottom-left origin) coordinates
        CGContextTranslateCTM(context, 0, rect.size.height);
        CGContextScaleCTM(context, 1, -1);

        CGRect clipFrame = self.clipOutlineFrame;
        clipFrame.origin.y = rect.size.height - clipFrame.origin.y - clipFrame.size.height;
        CGContextClipToMask(context, clipFrame, self.clipImage.CGImage);

        CGRect imageframe = self.imageFrame;
        imageframe.origin.y = rect.size.height - imageframe.origin.y - imageframe.size.height;
        CGContextDrawImage(context, imageframe, self.image.CGImage);

        CGContextRestoreGState(context);
    }

    // now draw outline image
    if (self.outlineImage)
        [self.outlineImage drawInRect:self.clipOutlineFrame];
}
You'd then invoke it with something like:
ImageMaskView *view = [[ImageMaskView alloc] initWithFrame:self.view.bounds];
[self.view addSubview:view];
view.backgroundColor = [UIColor whiteColor];
UIImage *image = [UIImage imageNamed:@"kitten.jpeg"];
view.image = image;
view.imageFrame = CGRectMake(10, 100, image.size.width, image.size.height);
view.outlineImage = [UIImage imageNamed:@"loopempty.png"]; // this is the image of the stroke around the masked portion of the underlying image
view.clipImage = [UIImage imageNamed:@"loopsolid.png"]; // this is the clipping mask (same size as outlineImage)
view.clipOutlineFrame = CGRectMake(20, 110, view.outlineImage.size.width, view.outlineImage.size.height);
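And, tying this variant back to the dragging in the question, here is a similar sketch; it assumes a hypothetical self.maskView property holding the ImageMaskView, and that setClipOutlineFrame: calls setNeedsDisplay like the other setters shown earlier:
// hypothetical property assumed to be set when the view is created:
// @property (nonatomic, strong) ImageMaskView *maskView;

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [[event allTouches] anyObject];
    CGPoint location = [touch locationInView:self.maskView];

    // keep the mask/outline the same size, recentered under the finger
    CGSize size = self.maskView.outlineImage.size;
    self.maskView.clipOutlineFrame = CGRectMake(location.x - size.width / 2.0,
                                                location.y - size.height / 2.0,
                                                size.width, size.height);
}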