How do I detect the skin color of a face from a source image in iOS 5?

Asked: 2012-05-09 13:01:22

Tags: objective-c ios5 face-detection object-detection cgcolorspace

I need to color the skin of a face... How do I find the skin tone?

At the moment I get the skin color from the RGB pixel values. The problem I still face is that I match each pixel against a fixed RGB range for skin, but some face regions fall outside that range and are therefore left uncolored.

On top of that, non-face regions that happen to fall inside the range get colored as well.

Any ideas on this problem?

Thanks in advance.

My code:

    - (void)colorImageBySliderValue:(float)value WithImage:(UIImage *)needToModified
    {
        CGImageRef imageRef = needToModified.CGImage;
        NSUInteger width  = CGImageGetWidth(imageRef);
        NSUInteger height = CGImageGetHeight(imageRef);

        NSUInteger bytesPerPixel = 4;
        NSUInteger bytesPerRow = bytesPerPixel * width;
        NSUInteger bitsPerComponent = 8;

        // 4 bytes per pixel (RGBA), not 10; size from the image being drawn
        unsigned char *rawData = malloc(height * bytesPerRow);

        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context1 = CGBitmapContextCreate(rawData, width, height,
                                                      bitsPerComponent, bytesPerRow, colorSpace,
                                                      kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
        CGColorSpaceRelease(colorSpace);

        CGContextDrawImage(context1, CGRectMake(0, 0, width, height), imageRef);

        NSLog(@"%lu::%lu", (unsigned long)width, (unsigned long)height);

        // Walk the actual buffer, not a hard-coded 768 * 1024
        for (NSUInteger ii = 0; ii < width * height * bytesPerPixel; ii += bytesPerPixel)
        {
            int R = rawData[ii];
            int G = rawData[ii + 1];
            int B = rawData[ii + 2];

            // All three channels must be in range (&&, not ||); with ||
            // any pixel with a single in-range channel matched
            if ((R > 60 && R < 237) && (G > 10 && G < 120) && (B > 4 && B < 120))
            {
                // The original wrote to ii+1..ii+3, shifting the channels
                // and clobbering alpha; only blue should be tinted here
                rawData[ii + 2] = (unsigned char)value;
            }
        }

        // Rebuild the image from our own buffer, with our own
        // bytesPerRow and color space, not the source image's
        CGColorSpaceRef outSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, outSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
        CGColorSpaceRelease(outSpace);

        CGImageRef resultRef = CGBitmapContextCreateImage(ctx);
        UIImage *rawImage = [UIImage imageWithCGImage:resultRef];
        CGImageRelease(resultRef);

        UIImageView *ty = [[UIImageView alloc] initWithFrame:CGRectMake(100, 200, 400, 400)];
        ty.image = rawImage;
        [self.view addSubview:ty];

        CGContextRelease(context1);
        CGContextRelease(ctx);
        free(rawData);
    }

Regards,

SpyNet

My output image...

Another sample

1 Answer:

Answer 0 (score: 1)

You can keep two pixel buffers, one with the original image and one with the modified pixels, and then rebuild the result by starting from the original image and copying over only the pixels that were actually modified.

    for (int index = 0; index < length; index += 4)
    {
        // Copy a pixel from the modified buffer (data2) over the
        // original (data1) only where the skin pass changed it.
        // Note this only compares the first byte of the pixel;
        // comparing all four bytes would be more robust.
        if (data2[index] != data1[index])
        {
            data1[index]     = data2[index];
            data1[index + 1] = data2[index + 1];
            data1[index + 2] = data2[index + 2];
            data1[index + 3] = data2[index + 3];
        }
    }