OK, I've tried every example I can find, and sadly most of them are from 2008-2009; iOS 5 seems to be quite different. All I want is to resize an image so that the short side is a size I specify, keeping the proportions.
I'm grabbing the image from the camera with AVFoundation, converting it via UIImage into a CIImage so I can apply filters and fiddle with it, then converting back to a UIImage for saving.
- (void)startCamera {
    session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPresetPhoto;

    AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
    captureVideoPreviewLayer.frame = _cameraView.bounds;
    [_cameraView.layer addSublayer:captureVideoPreviewLayer];
    captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;

    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (!input) {
        // no cam, handle error - need to put up a nice happy note here
        NSLog(@"ERROR: %@", error);
    }
    [session addInput:input];

    stillImage = [[AVCaptureStillImageOutput alloc] init];
    NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
    [stillImage setOutputSettings:outputSettings];
    [session addOutput:stillImage];

    [session startRunning];
}
- (IBAction)_cameraButtonPress:(id)sender {
    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in stillImage.connections) {
        for (AVCaptureInputPort *port in [connection inputPorts]) {
            if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) {
            break;
        }
    }

    [stillImage captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
        NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
        UIImage *startImage = [[UIImage alloc] initWithData:imageData];

        // resizing of image to take place before anything else
        UIImage *image = [startImage imageScaledToFitSize:_exportSSize]; // resizes to the size given in the prefs

        // change the context to render using the CPU, so on app exit renders get completed
        context = [CIContext contextWithOptions:
                   [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
                                               forKey:kCIContextUseSoftwareRenderer]];

        // set up the saving library
        ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];

        // create a new CIImage
        CIImage *greyImage = [[CIImage alloc] initWithCGImage:[image CGImage]];

        // create a CIColorMonochrome filter
        desaturate = [CIFilter filterWithName:@"CIColorMonochrome" keysAndValues:kCIInputImageKey, greyImage, @"inputIntensity", [NSNumber numberWithFloat:0.00], nil];
        //[crop setValue:ciImage forKey:@"inputImage"];
        //[crop setValue:[CIVector vectorWithX:0.0f Y:0.0f Z:_exportSize W:_exportSize] forKey:@"inputRectangle"];
        CIImage *croppedColourImage = [desaturate valueForKey:@"outputImage"];

        // convert it to a CGImage for saving
        CGImageRef cropColourImage = [context createCGImage:croppedColourImage fromRect:[croppedColourImage extent]];

        // saving of the images
        [library writeImageToSavedPhotosAlbum:cropColourImage metadata:[croppedColourImage properties] completionBlock:^(NSURL *assetURL, NSError *error) {
            if (error) {
                NSLog(@"ERROR in image save: %@", error);
            } else {
                NSLog(@"SUCCESS in image save");
                // in here we'll put a nice animation or something
                CGImageRelease(cropColourImage);
            }
        }];
    }];
}
This is test code, so there are all sorts of things being tried out in it; apologies for that.
The closest I have gotten is using Matt Gemmell's code here.
However, no matter what I try, the image always ends up stretched. I want to resize the image and then crop it down to a 512-pixel square. If I just apply a CICrop filter, it takes the top-left 512 pixels, so I lose a lot of the image (I'm grabbing the photo at high resolution because later I also want to export 1800x1800 images). My idea was to resize the image first and then crop, but whatever I do the image gets stretched.
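In case it makes the intent clearer, this is roughly the maths I have in mind (the 512 target and the variable names are just for illustration, not code from my project):

    // Sketch only: scale so the SHORT side hits 512, keeping proportions,
    // then take a centred 512x512 rect out of the scaled result.
    CGSize target = CGSizeMake(512.0, 512.0);           // final square (illustrative)
    CGSize original = startImage.size;                  // full-resolution photo from the camera

    CGFloat scale = MAX(target.width / original.width,
                        target.height / original.height);   // aspect fill, no stretching
    CGSize scaledSize = CGSizeMake(original.width * scale, original.height * scale);

    CGRect cropRect = CGRectMake((scaledSize.width - target.width) / 2.0,
                                 (scaledSize.height - target.height) / 2.0,
                                 target.width, target.height);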
So my question is: is there a proper, recommended way to resize and crop an image in iOS 5+?
Thanks.
Answer 0 (score: 1):
I found a way to do it:
// Open a bitmap context at the target size, draw the image into it, and grab the result.
UIGraphicsBeginImageContext(newSize);
[image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
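On its own that drawInRect: call stretches the image to newSize, so for the resize-then-centre-crop case in the question you would first scale so the short side matches the square, then draw with an offset into a square context. A minimal, untested sketch of that idea (the method name and where it lives are made up for illustration):

    // Hypothetical helper: scale so the short side equals `side`, then centre-crop to a square.
    - (UIImage *)squareImageFromImage:(UIImage *)image withSide:(CGFloat)side {
        CGSize size = image.size;
        CGFloat scale = MAX(side / size.width, side / size.height);   // aspect fill
        CGSize scaledSize = CGSizeMake(size.width * scale, size.height * scale);

        // Drawing with a negative offset into a side x side context crops the overflow evenly.
        UIGraphicsBeginImageContext(CGSizeMake(side, side));
        CGFloat xOffset = (side - scaledSize.width) / 2.0;
        CGFloat yOffset = (side - scaledSize.height) / 2.0;
        [image drawInRect:CGRectMake(xOffset, yOffset, scaledSize.width, scaledSize.height)];
        UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return result;
    }

The returned UIImage could then be handed to the CIColorMonochrome step exactly as in the question's code.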