I want to try some Instagram-style image filters in an iPhone app. I use an imagePickerController to get photos from the camera roll. As far as I know, the image returned by the imagePickerController is scaled down to save memory, so loading the full-resolution original into a UIImage is unwise. But how can I process the image and then save it at the original pixel dimensions? I am using an iPhone 4S as my development device.
The original photo in the camera roll is 3264 × 2448.
The image returned for UIImagePickerControllerOriginalImage is 1920 × 1440.
The image returned for UIImagePickerControllerEditedImage is 640 × 640.
imageViewOld (cropping the image returned for UIImagePickerControllerOriginalImage with the UIImagePickerControllerCropRect [80, 216, 1280, 1280]) is 1280 × 1224.
imageViewNew (cropping the image returned for UIImagePickerControllerOriginalImage with a doubled crop rect [80, 216, 2560, 2560]) is 1840 × 1224.
I checked the same photo processed by Instagram: it is 1280 × 1280.
My questions are:
Why doesn't UIImagePickerControllerEditedImage return a 1280 × 1280 image, when the UIImagePickerControllerCropRect shows it was cut to a 1280 × 1280 square?
How can I cut a square from the original photo to get a 2448 × 2448 image?
Thanks in advance. Here is my code:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    NSString *mediaType = [info objectForKey:UIImagePickerControllerMediaType];
    if ([mediaType isEqualToString:@"public.image"])
    {
        UIImage *imageEdited = [info objectForKey:UIImagePickerControllerEditedImage];
        UIImage *imagePicked = [info objectForKey:UIImagePickerControllerOriginalImage];
        CGRect cropRect = [[info objectForKey:UIImagePickerControllerCropRect] CGRectValue];
        NSLog(@"Original width = %f height = %f", imagePicked.size.width, imagePicked.size.height);
        //Original width = 1440.000000 height = 1920.000000
        NSLog(@"imageEdited width = %f height = %f", imageEdited.size.width, imageEdited.size.height);
        //imageEdited width = 640.000000 height = 640.000000
        NSLog(@"cropRect %f %f %f %f", cropRect.origin.x, cropRect.origin.y, cropRect.size.width, cropRect.size.height);
        //cropRect 80.000000 216.000000 1280.000000 1280.000000
        CGRect rectNew = CGRectMake(cropRect.origin.x, cropRect.origin.y, cropRect.size.width * 2, cropRect.size.height * 2);
        CGRect rectOld = CGRectMake(cropRect.origin.x, cropRect.origin.y, cropRect.size.width, cropRect.size.height);
        CGImageRef imageRefNew = CGImageCreateWithImageInRect([imagePicked CGImage], rectNew);
        CGImageRef imageRefOld = CGImageCreateWithImageInRect([imagePicked CGImage], rectOld);
        UIImageView *imageViewNew = [[UIImageView alloc] initWithImage:[UIImage imageWithCGImage:imageRefNew]];
        CGImageRelease(imageRefNew);
        UIImageView *imageViewOld = [[UIImageView alloc] initWithImage:[UIImage imageWithCGImage:imageRefOld]];
        CGImageRelease(imageRefOld);
        NSLog(@"imageViewNew width = %f height = %f", imageViewNew.image.size.width, imageViewNew.image.size.height);
        //imageViewNew width = 1840.000000 height = 1224.000000
        NSLog(@"imageViewOld width = %f height = %f", imageViewOld.image.size.width, imageViewOld.image.size.height);
        //imageViewOld width = 1280.000000 height = 1224.000000
        UIImageWriteToSavedPhotosAlbum(imageEdited, nil, nil, NULL);
        //imageRotatedByDegrees: is a custom UIImage category method, not part of UIKit
        UIImageWriteToSavedPhotosAlbum([imageViewNew.image imageRotatedByDegrees:90.0], nil, nil, NULL);
        UIImageWriteToSavedPhotosAlbum([imageViewOld.image imageRotatedByDegrees:90.0], nil, nil, NULL);
        //assign the image to a UIImageView
        self.imageV.contentMode = UIViewContentModeScaleAspectFit;
        self.imageV.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.width);
        self.imageV.image = imageEdited;
    }
    [self dismissModalViewControllerAnimated:YES];
}
Answer 0 (score: 12)
As you have seen, the UIImagePickerController returns a scaled-down edited image, sometimes 640 × 640 and sometimes 320 × 320 (depending on the device).

Your question:

How can I cut a square from the original photo to get a 2448 × 2448 image?

To do this, you first need to create a new image from the original image (obtained with the UIImagePickerControllerOriginalImage key of the info dictionary) using the UIImagePickerControllerCropRect. With the Quartz function CGImageCreateWithImageInRect you can create a new image containing only the pixels bounded by the rect you pass in; in this case, the crop rect. You need to take the orientation into account for this to work correctly, and then you simply scale the image to the size you want. It is important to note that the crop rect is relative to the original image after it has been correctly oriented, not as it comes out of the camera or photo library. That is why, before creating the new image with the Quartz functions, we need to transform the crop rect to match the orientation.

I took your code above and changed it to create a 1280 × 1280 image from the original image based on the crop rect. There are still some edge cases left unhandled here, namely that the crop rect can sometimes have negative values (the code assumes a square crop rect). The transformCGRectForUIImageOrientation function is from NiftyBean. Here is the code with those changes. Update: new code that handles the missing cases has been added below.
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    NSString *mediaType = [info objectForKey:UIImagePickerControllerMediaType];
    if ([mediaType isEqualToString:@"public.image"])
    {
        UIImage *imageEdited = [info objectForKey:UIImagePickerControllerEditedImage];
        UIImage *imagePicked = [info objectForKey:UIImagePickerControllerOriginalImage];
        CGRect cropRect = [[info objectForKey:UIImagePickerControllerCropRect] CGRectValue];
        NSLog(@"Original width = %f height = %f", imagePicked.size.width, imagePicked.size.height);
        //Original width = 1440.000000 height = 1920.000000
        NSLog(@"imageEdited width = %f height = %f", imageEdited.size.width, imageEdited.size.height);
        //imageEdited width = 640.000000 height = 640.000000
        NSLog(@"cropRect %@", NSStringFromCGRect(cropRect));
        //cropRect {{80, 216}, {1280, 1280}}
        CGSize finalSize = CGSizeMake(1280, 1280);
        CGImageRef imagePickedRef = imagePicked.CGImage;
        //The crop rect is relative to the correctly oriented image, so transform it to match the raw CGImage
        CGRect transformedRect = transformCGRectForUIImageOrientation(cropRect, imagePicked.imageOrientation, imagePicked.size);
        CGImageRef cropRectImage = CGImageCreateWithImageInRect(imagePickedRef, transformedRect);
        CGColorSpaceRef colorspace = CGImageGetColorSpace(imagePickedRef);
        CGContextRef context = CGBitmapContextCreate(NULL,
                                                     finalSize.width,
                                                     finalSize.height,
                                                     CGImageGetBitsPerComponent(imagePickedRef),
                                                     0, //Pass 0 so Quartz computes bytes per row for the new width
                                                     colorspace,
                                                     CGImageGetAlphaInfo(imagePickedRef));
        CGContextSetInterpolationQuality(context, kCGInterpolationHigh); //Give the context a hint that we want high quality during the scale
        CGContextDrawImage(context, CGRectMake(0, 0, finalSize.width, finalSize.height), cropRectImage);
        CGImageRelease(cropRectImage);
        CGImageRef instaImage = CGBitmapContextCreateImage(context);
        CGContextRelease(context);
        //assign the image to a UIImageView
        UIImage *image = [UIImage imageWithCGImage:instaImage scale:imagePicked.scale orientation:imagePicked.imageOrientation];
        self.imageView.contentMode = UIViewContentModeScaleAspectFit;
        self.imageView.image = image;
        CGImageRelease(instaImage);
        UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);
    }
    [self dismissModalViewControllerAnimated:YES];
}
CGRect transformCGRectForUIImageOrientation(CGRect source, UIImageOrientation orientation, CGSize imageSize) {
    switch (orientation) {
        case UIImageOrientationLeft: { // EXIF #8
            CGAffineTransform txTranslate = CGAffineTransformMakeTranslation(imageSize.height, 0.0);
            CGAffineTransform txCompound = CGAffineTransformRotate(txTranslate, M_PI_2);
            return CGRectApplyAffineTransform(source, txCompound);
        }
        case UIImageOrientationDown: { // EXIF #3
            CGAffineTransform txTranslate = CGAffineTransformMakeTranslation(imageSize.width, imageSize.height);
            CGAffineTransform txCompound = CGAffineTransformRotate(txTranslate, M_PI);
            return CGRectApplyAffineTransform(source, txCompound);
        }
        case UIImageOrientationRight: { // EXIF #6
            CGAffineTransform txTranslate = CGAffineTransformMakeTranslation(0.0, imageSize.width);
            CGAffineTransform txCompound = CGAffineTransformRotate(txTranslate, M_PI + M_PI_2);
            return CGRectApplyAffineTransform(source, txCompound);
        }
        case UIImageOrientationUp: // EXIF #1 - do nothing
        default: // EXIF 2,4,5,7 - ignore
            return source;
    }
}
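To see concretely what this transform does, here is a small Python sketch (an illustration only, not part of the Objective-C answer; the function names are mine). It applies the UIImageOrientationRight case to the numbers from the question's logs: crop rect [80, 216, 1280, 1280] against the oriented size 1440 × 1920. The apply_affine helper mimics CGRectApplyAffineTransform by mapping the rect's four corners and taking their bounding box.

```python
import math

def apply_affine(rect, a, b, c, d, tx, ty):
    """Map a rect through the affine matrix [a b; c d] + (tx, ty) and
    return the bounding box, like CGRectApplyAffineTransform does."""
    x, y, w, h = rect
    corners = [(x, y), (x + w, y), (x, y + h), (x + w, y + h)]
    pts = [(a * px + c * py + tx, b * px + d * py + ty) for px, py in corners]
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    return (min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys))

def transform_rect_for_right(rect, oriented_size):
    """UIImageOrientationRight (EXIF 6): rotate by -90 degrees, then
    translate down by the oriented width, as in the function above."""
    w, h = oriented_size
    theta = -math.pi / 2
    # Core Graphics rotation matrix: a=cos, b=sin, c=-sin, d=cos
    a, b = math.cos(theta), math.sin(theta)
    c, d = -math.sin(theta), math.cos(theta)
    return apply_affine(rect, a, b, c, d, 0.0, w)

# The oriented-space crop [80, 216, 1280, 1280] lands at roughly
# [216, 80, 1280, 1280] in the raw 1920 x 1440 CGImage's coordinates.
print(transform_rect_for_right((80, 216, 1280, 1280), (1440, 1920)))
```

The key point the sketch makes visible is that the rect stays 1280 × 1280 but its origin moves, because the raw CGImage is stored landscape while the crop rect was expressed against the upright image.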
Update: I have worked out a couple of methods that should take care of the remaining cases. The steps are basically the same, with some modifications: the crop rect delivered by the UIImagePickerController can sometimes have negative origins or extend past the image, and in those cases the square output image is filled with a color of your choice. New code:
// CropRect is assumed to be in UIImageOrientationUp, as it is delivered this way from the UIImagePickerController when AllowsImageEditing is on.
// The sourceImage can be in any orientation; the crop will be transformed to match.
// The output image bounds define the final size of the image. The image will be scaled to fit (aspect fit) the bounds, and the fill color will be
// used for areas that are not covered by the scaled image.
-(UIImage *)cropImage:(UIImage *)sourceImage cropRect:(CGRect)cropRect aspectFitBounds:(CGSize)finalImageSize fillColor:(UIColor *)fillColor {
    CGImageRef sourceImageRef = sourceImage.CGImage;
    //Since the crop rect is in UIImageOrientationUp we need to transform it to match the source image.
    CGAffineTransform rectTransform = [self transformSize:sourceImage.size orientation:sourceImage.imageOrientation];
    CGRect transformedRect = CGRectApplyAffineTransform(cropRect, rectTransform);
    //Now we get just the region of the source image that we are interested in.
    CGImageRef cropRectImage = CGImageCreateWithImageInRect(sourceImageRef, transformedRect);
    //Figure out which dimension fits within our final size and calculate the aspect-correct rect that will fit in our new bounds
    CGFloat horizontalRatio = finalImageSize.width / CGImageGetWidth(cropRectImage);
    CGFloat verticalRatio = finalImageSize.height / CGImageGetHeight(cropRectImage);
    CGFloat ratio = MIN(horizontalRatio, verticalRatio); //Aspect fit
    CGSize aspectFitSize = CGSizeMake(CGImageGetWidth(cropRectImage) * ratio, CGImageGetHeight(cropRectImage) * ratio);
    CGContextRef context = CGBitmapContextCreate(NULL,
                                                 finalImageSize.width,
                                                 finalImageSize.height,
                                                 CGImageGetBitsPerComponent(cropRectImage),
                                                 0,
                                                 CGImageGetColorSpace(cropRectImage),
                                                 CGImageGetBitmapInfo(cropRectImage));
    if (context == NULL) {
        NSLog(@"NULL CONTEXT!");
    }
    //Fill with our background color
    CGContextSetFillColorWithColor(context, fillColor.CGColor);
    CGContextFillRect(context, CGRectMake(0, 0, finalImageSize.width, finalImageSize.height));
    //We need to rotate and transform the context based on the orientation of the source image.
    CGAffineTransform contextTransform = [self transformSize:finalImageSize orientation:sourceImage.imageOrientation];
    CGContextConcatCTM(context, contextTransform);
    //Give the context a hint that we want high quality during the scale
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
    //Draw our image centered vertically and horizontally in our context.
    CGContextDrawImage(context, CGRectMake((finalImageSize.width - aspectFitSize.width) / 2, (finalImageSize.height - aspectFitSize.height) / 2, aspectFitSize.width, aspectFitSize.height), cropRectImage);
    //Start cleaning up..
    CGImageRelease(cropRectImage);
    CGImageRef finalImageRef = CGBitmapContextCreateImage(context);
    UIImage *finalImage = [UIImage imageWithCGImage:finalImageRef];
    CGContextRelease(context);
    CGImageRelease(finalImageRef);
    return finalImage;
}
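The aspect-fit arithmetic in the method above can be sketched in a few lines of Python (illustrative only; the names are mine, not from the answer): the smaller of the two ratios is chosen so the cropped image fits entirely inside the final bounds, and the draw rect is centered so the fill color shows in any leftover margin.

```python
def aspect_fit(src_w, src_h, bounds_w, bounds_h):
    """Scale (src_w, src_h) to fit inside (bounds_w, bounds_h) preserving
    aspect ratio, then center it -- mirroring the draw rect computed in
    the cropImage: method above."""
    ratio = min(bounds_w / src_w, bounds_h / src_h)  # MIN(...) = aspect fit
    fit_w, fit_h = src_w * ratio, src_h * ratio
    origin = ((bounds_w - fit_w) / 2, (bounds_h - fit_h) / 2)
    return origin, (fit_w, fit_h)

# A 1280 x 1224 crop (the question's imageViewOld) into 1280 x 1280 bounds:
# it fits at full size, centered, leaving a 28 px band of fill color
# above and below.
print(aspect_fit(1280, 1224, 1280, 1280))  # ((0.0, 28.0), (1280.0, 1224.0))
```

This is also why a square crop rect that runs off the edge of the photo still yields a square result: the partial image is aspect-fitted and the uncovered area keeps the fill color.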
//Creates a transform that will correctly rotate and translate for the passed orientation.
//Based on code from niftyBean.com
- (CGAffineTransform)transformSize:(CGSize)imageSize orientation:(UIImageOrientation)orientation {
    CGAffineTransform transform = CGAffineTransformIdentity;
    switch (orientation) {
        case UIImageOrientationLeft: { // EXIF #8
            CGAffineTransform txTranslate = CGAffineTransformMakeTranslation(imageSize.height, 0.0);
            CGAffineTransform txCompound = CGAffineTransformRotate(txTranslate, M_PI_2);
            transform = txCompound;
            break;
        }
        case UIImageOrientationDown: { // EXIF #3
            CGAffineTransform txTranslate = CGAffineTransformMakeTranslation(imageSize.width, imageSize.height);
            CGAffineTransform txCompound = CGAffineTransformRotate(txTranslate, M_PI);
            transform = txCompound;
            break;
        }
        case UIImageOrientationRight: { // EXIF #6
            CGAffineTransform txTranslate = CGAffineTransformMakeTranslation(0.0, imageSize.width);
            CGAffineTransform txCompound = CGAffineTransformRotate(txTranslate, -M_PI_2);
            transform = txCompound;
            break;
        }
        case UIImageOrientationUp: // EXIF #1 - do nothing
        default: // EXIF 2,4,5,7 - ignore
            break;
    }
    return transform;
}