Image is distorted when captured for use with a CALayer

Date: 2014-02-20 06:29:43

Tags: ios iphone objective-c avfoundation avcapturesession

I'm building a photo-taking app. The app's preview layer is set up to take up exactly half of the screen with this code:

[_previewLayer setFrame:CGRectMake(0, 0, rootLayer.bounds.size.width, rootLayer.bounds.size.height/2)];

This looks perfectly fine: while the user is watching the camera "preview"/what they see while taking the picture, there is no distortion at all.

However, once they actually take the photo, I create a sublayer, set its frame property to my preview layer's frame, and set the photo as the sublayer's contents.

This technically works: once the user takes the photo, it shows up on the top half of the screen.

The only problem is that the photo is distorted.

It looks stretched out, almost as if I were taking a landscape photo.

Any help is hugely appreciated. I'm really desperate over this and haven't been able to fix it after working on it all day.

Here is all of my view controller code:

#import "MediaCaptureVC.h"

@interface MediaCaptureVC ()

@end

@implementation MediaCaptureVC

- (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil
{
    self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil];
    if (self) {
        // Custom initialization
    }
    return self;
}

- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view.

    AVCaptureSession *session = [[AVCaptureSession alloc] init];

    [session setSessionPreset:AVCaptureSessionPresetPhoto];

    AVCaptureDevice *inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    // Pass a nil error by reference; allocating an NSError up front
    // (as the original did) is unnecessary.
    NSError *error = nil;

    AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:inputDevice error:&error];

    if ([session canAddInput:deviceInput])
        [session addInput:deviceInput];

    _previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];

    // Aspect-fill crops the live feed to the layer's half-screen frame.
    [_previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];

    CALayer *rootLayer = [[self view] layer];

    [rootLayer setMasksToBounds:YES];

    [_previewLayer setFrame:CGRectMake(0, 0, rootLayer.bounds.size.width, rootLayer.bounds.size.height / 2)];

    [rootLayer insertSublayer:_previewLayer atIndex:0];

    _stillImageOutput = [[AVCaptureStillImageOutput alloc] init];

    [session addOutput:_stillImageOutput];

    [session startRunning];
}

- (void)didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}


// Redraws the bitmap so that its orientation flag is baked into the pixels.
// Note: -drawAtPoint: already renders a UIImage upright (with its orientation
// flag applied), and src.size is reported in upright terms, so no manual CTM
// rotation is needed. The original version rotated the CTM by 90/180*M_PI,
// which is integer division in C (90/180 == 0) -- those rotations were no-ops,
// and the plain redraw below is what actually did the work.
-(UIImage*) rotate:(UIImage*) src andOrientation:(UIImageOrientation)orientation
{
    UIGraphicsBeginImageContext(src.size);

    [src drawAtPoint:CGPointZero];

    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}



-(IBAction)stillImageCapture {

    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in _stillImageOutput.connections){
        for (AVCaptureInputPort *port in [connection inputPorts]){

            if ([[port mediaType] isEqual:AVMediaTypeVideo]){

                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) {
            break;
        }
    }

    NSLog(@"about to request a capture from: %@", _stillImageOutput);

    [_stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {

        if (imageDataSampleBuffer) {

            NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];

            UIImage *image = [[UIImage alloc] initWithData:imageData];

            // Bake the orientation flag into the pixels before handing the
            // bitmap to a CALayer (layers ignore UIImage orientation).
            image = [self rotate:image andOrientation:image.imageOrientation];

            CALayer *subLayer = [CALayer layer];

            // No need to round-trip through a second UIImage here.
            subLayer.contents = (__bridge id)image.CGImage;

            subLayer.frame = _previewLayer.frame;

            CALayer *rootLayer = [[self view] layer];

            [rootLayer setMasksToBounds:YES];

            [subLayer setFrame:CGRectMake(0, 0, rootLayer.bounds.size.width, rootLayer.bounds.size.height / 2)];

            [_previewLayer addSublayer:subLayer];

            NSLog(@"%@", subLayer.contents);

            // imageOrientation is an NSInteger-backed enum; use %ld, not %d.
            NSLog(@"Orientation: %ld", (long)image.imageOrientation);
        }
    }];

}

@end
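A side note on the capture path above: the still's orientation can also be corrected at the source, before any redrawing, by configuring the capture connection. A minimal sketch, assuming the videoConnection found in stillImageCapture and a portrait-only UI:

    // Ask AVFoundation to orient the still for portrait before capturing, so
    // the decoded UIImage comes out upright without manual rotation.
    if ([videoConnection isVideoOrientationSupported]) {
        [videoConnection setVideoOrientation:AVCaptureVideoOrientationPortrait];
    }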

1 Answer:

Answer 0 (score: 0)

Hi - I hope this helps:

The code seems more complicated than it needs to be, since most of the work is done at the CALayer level rather than at the imageView/view level. But I think the problem is that the proportions of the frame change from the original capture to the mini viewport, and this is what distorts the UIImage in this statement:

  [subLayer setFrame:CGRectMake(0, 0, rootLayer.bounds.size.width, rootLayer.bounds.size.height/2)];

What needs to happen is to capture the proportions of subLayer.frame and work out the best size that fits the rootLayer, or the image view associated with it.
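For comparison, note that the live preview crops its feed to the half-screen frame via AVLayerVideoGravityResizeAspectFill, while a bare CALayer stretches its contents edge to edge by default (kCAGravityResize). A minimal sketch of mirroring the preview's behavior on the sublayer from the question (assuming the same subLayer variable):

    // Crop the still to fill the half-screen frame, as the live preview does,
    // instead of letting the default kCAGravityResize stretch it.
    subLayer.contentsGravity = kCAGravityResizeAspectFill;
    subLayer.masksToBounds = YES; // clip what aspect-fill pushes outside the frame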

I had some code from earlier: write a subroutine before you deal with the proportions (note that you will need to adjust the frame's origin to get exactly what you want!):

    ...
    CGRect newbounds = [self figure_proportion:image to_fit_rect:rootLayer.frame];
    if (newbounds.size.height < rootLayer.frame.size.height) {
        rootLayer ..... // (code that adjusts the image view's frame origin)
    }

-(CGRect) figure_proportion:(UIImage *)image2 to_fit_rect:(CGRect)rect {
    CGSize image_size = image2.size;
    CGRect newrect = rect;
    float wfactor = image_size.width / image_size.height;
    float hfactor = image_size.height / image_size.width;

    if (image2.size.width > image2.size.height) {
        newrect.size.width = rect.size.width;
        newrect.size.height = (rect.size.width * hfactor);
    }
    else if (image2.size.height > image2.size.width) {
        newrect.size.height = rect.size.height;
        newrect.size.width = (rect.size.height * wfactor);
    }
    else {
        newrect.size.width = rect.size.width;
        newrect.size.height = newrect.size.width;
    }
    if (newrect.size.height > rect.size.height) {
        newrect.size.height = rect.size.height;
        newrect.size.width = (newrect.size.height * wfactor);
    }
    if (newrect.size.width > rect.size.width) {
        newrect.size.width = rect.size.width;
        newrect.size.height = (newrect.size.width * hfactor);
    }
    return (newrect);
}
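For illustration, here is a sketch of how the returned rect might be applied back in the question's completion handler, centering the fitted photo in the half-screen preview area (this is the origin adjustment mentioned above; image, subLayer, and _previewLayer are the question's variables):

    CGRect fitted = [self figure_proportion:image to_fit_rect:_previewLayer.bounds];
    // Center the aspect-correct rect in the preview area so the photo is
    // letterboxed rather than stretched.
    fitted.origin.x = (_previewLayer.bounds.size.width  - fitted.size.width)  / 2.0;
    fitted.origin.y = (_previewLayer.bounds.size.height - fitted.size.height) / 2.0;
    subLayer.frame = fitted;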