How do I get an exactly sized frame from the image produced by an AVCaptureSession?

Date: 2017-09-05 03:37:50

Tags: ios objective-c avcapturesession

[Screenshot of the capture screen omitted]

I'm working on an app that captures a specific view inside an AVCaptureSession, as shown in the screenshot above.

I'm using AVCaptureStillImageOutput to capture a picture from the AVCaptureSession. The problem is that the image I get back has a fixed size of {2448, 3264}. My approach is to resize this image to the same frame as my background view, so that both share the same coordinate system.

Using imageWithImage with the same frame as captureView, everything works: resizedImage ends up as {768, 1024}, the same size as the AVCaptureSession preview.

From there, based on those coordinates, I try to use CGImageCreateWithImageInRect to crop the image to the frame of captureView (the green view).

The output image I get is off. My question: is there a better way than CGImageCreateWithImageInRect to extract exactly the view I want from the image the AVCaptureSession returns? Any help would be appreciated. Thanks in advance!

    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in stillImageOutput.connections) {
        for (AVCaptureInputPort *port in [connection inputPorts]) {
            if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
                videoConnection = connection;

                // Handle orientation for video.
                // The preview layer's connection already carries an
                // AVCaptureVideoOrientation, so mirror it directly rather than
                // comparing it against UIInterfaceOrientation constants -- the
                // two enums have different raw values for the landscape cases.
                if (videoConnection.supportsVideoOrientation) {
                    videoConnection.videoOrientation = captureVideoPreviewLayer.connection.videoOrientation;
                }
                break;
            }
        }
        if (videoConnection) { break; }
    }

    NSLog(@"about to request a capture from: %@", stillImageOutput);
    __weak typeof(self) weakSelf = self;
    [stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
        if (error != nil || imageSampleBuffer == NULL) {
            NSLog(@"Still image capture failed: %@", error);
            return;
        }

        NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
        UIImage *image = [[UIImage alloc] initWithData:imageData];
        UIImage *resizedImage = [weakSelf imageWithImage:image scaledToSize:outputImageView.frame.size];

        // Image view to test the screenshot of the AVCaptureSession
        outputImageView.image = resizedImage;

        // Screenshot of the captureView frame (the green view)
        CGRect captureFrame = captureView.frame;

        // Note: CGImageCreateWithImageInRect expects the rect in the pixel
        // coordinates of the underlying CGImage, not in points.
        CGImageRef cropRef = CGImageCreateWithImageInRect(resizedImage.CGImage, captureFrame);
        UIImage *cropImage = [UIImage imageWithCGImage:cropRef];
        CGImageRelease(cropRef); // CGImageCreateWithImageInRect returns a +1 reference

        // Image view to test the cropped image
        sampleImageView.image = cropImage;

        // Hide the activity indicator
        [weakSelf hideActivityView];
    }];

- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    //UIGraphicsBeginImageContext(newSize);
    // In next line, pass 0.0 to use the current device's pixel scaling factor (and thus account for Retina resolution).
    // Pass 1.0 to force exact pixel size.
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}

The method used to capture the image.

1 Answer:

Answer 0 (score: 0)

I solved my problem by changing the pixel scale factor mentioned in the comment to 1.0. With a scale of 1.0 the resized image's point and pixel dimensions coincide, so the crop rect expressed in view coordinates maps directly onto the CGImage.

- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    //UIGraphicsBeginImageContext(newSize);
    // In next line, pass 0.0 to use the current device's pixel scaling factor (and thus account for Retina resolution).
    // Pass 1.0 to force exact pixel size.
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 1.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}