Global variable is not updating fast enough 50% of the time

Asked: 2014-03-30 21:54:25

Tags: ios objective-c global-variables avfoundation instance-variables

I have a photo-taking app. When the user presses the button to snap a photo, I set a global NSString variable called self.hasUserTakenAPhoto equal to YES. This works perfectly 100% of the time when using the rear-facing camera. However, it only works about 50% of the time when using the front-facing camera, and I have no idea why.

Below are the important pieces of code along with a quick description of what they do.

Here is my viewDidLoad:

- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view.

    self.topHalfView.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height/2);
    self.takingPhotoView.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height);
    self.afterPhotoView.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height);
    self.bottomHalfView.frame = CGRectMake(0, 240, self.view.bounds.size.width, self.view.bounds.size.height/2);

    PFFile *imageFile = [self.message objectForKey:@"file"];
    NSURL *imageFileURL = [[NSURL alloc]initWithString:imageFile.url];
    imageFile = nil;

    self.imageData = [NSData dataWithContentsOfURL:imageFileURL];
    imageFileURL = nil;

    self.topHalfView.image = [UIImage imageWithData:self.imageData];

    //START CREATING THE SESSION
    self.session = [[AVCaptureSession alloc]init];
    [self.session setSessionPreset:AVCaptureSessionPresetPhoto];

    self.inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    NSError *error;
    self.deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:self.inputDevice error:&error];

    if([self.session canAddInput:self.deviceInput])
        [self.session addInput:self.deviceInput];

    _previewLayer = [[AVCaptureVideoPreviewLayer alloc]initWithSession:_session];

    self.rootLayer = [[self view]layer];
    [self.rootLayer setMasksToBounds:YES];

    [_previewLayer setFrame:CGRectMake(0, 240, self.rootLayer.bounds.size.width, self.rootLayer.bounds.size.height/2)];
    [_previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
    [self.rootLayer insertSublayer:_previewLayer atIndex:0];

    self.videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    self.videoOutput.videoSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
    [self.session addOutput:self.videoOutput];

    dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);
    [self.videoOutput setSampleBufferDelegate:self queue:queue];

    [_session startRunning];
}

The important part of viewDidLoad begins at the comment I left that says //START CREATING THE SESSION.

I basically create the session and then start running it. I have made this view controller the AVCaptureVideoDataOutputSampleBufferDelegate, so as soon as the session starts running, the method below starts being called as well.
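For context, here is a minimal sketch of how the controller might be declared. The question does not show the interface, so the property types below are inferred from how they are used in the code, and the class name is made up:

```objc
// Hypothetical reconstruction of the controller's interface; the question does
// not include it, so types are inferred from usage and the class name is assumed.
@import UIKit;
@import AVFoundation;

@interface CameraViewController : UIViewController <AVCaptureVideoDataOutputSampleBufferDelegate>

@property (nonatomic, strong) AVCaptureSession *session;
@property (nonatomic, strong) AVCaptureDevice *inputDevice;
@property (nonatomic, strong) AVCaptureDeviceInput *deviceInput;
@property (nonatomic, strong) AVCaptureVideoDataOutput *videoOutput;
@property (nonatomic, strong) AVCaptureVideoPreviewLayer *previewLayer;

// The flag in question: written on the main thread in the IBAction,
// but read on the video queue in the delegate callback.
@property (nonatomic, copy) NSString *hasUserTakenAPhoto;

@property (nonatomic, strong) UIImage *image;
@property (nonatomic, strong) NSData *imageData;

@end
```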

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    //Sample buffer data is being sent, but don't actually use it until self.hasUserTakenAPhoto has been set to YES.
    NSLog(@"Has the user taken a photo?: %@", self.hasUserTakenAPhoto);

    if([self.hasUserTakenAPhoto isEqualToString:@"YES"]) {

        //Now that self.hasUserTakenAPhoto is equal to YES, grab the current sample buffer and use it for the value of self.image aka the captured photo.
        self.image = [self imageFromSampleBuffer:sampleBuffer];
    }
}

This code receives the video output from the camera every split second, but I don't actually do anything with it until self.hasUserTakenAPhoto equals YES. Once that string's value is YES, I take the current sampleBuffer from the camera and place it inside a global variable called self.image.
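The question does not include imageFromSampleBuffer:. A common implementation for 32BGRA buffers, adapted from Apple's AVFoundation sample code, looks like the sketch below; treat it as an assumption about what the author's version does:

```objc
// Adapted from Apple's well-known sample (Technical Q&A QA1702); assumes the
// output's pixel format is kCVPixelFormatType_32BGRA, as set in viewDidLoad.
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Draw the BGRA pixel data into a bitmap context, then snapshot it.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
        bytesPerRow, colorSpace,
        kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    UIImage *image = [UIImage imageWithCGImage:quartzImage];
    CGImageRelease(quartzImage);
    return image;
}
```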

So, here is where self.hasUserTakenAPhoto actually gets set to YES.

Below is my IBAction code that is called when the user presses the button to capture a photo. A lot happens when this code runs, but really all that matters is the very first statement: self.hasUserTakenAPhoto = @"YES";

-(IBAction)stillImageCapture {

    self.hasUserTakenAPhoto = @"YES";
    [self.session stopRunning];

    if(self.inputDevice.position == 2) {
        self.image = [self selfieCorrection:self.image];
    } else {
        self.image = [self rotate:UIImageOrientationRight];
    }

    CGFloat widthToHeightRatio = _previewLayer.bounds.size.width / _previewLayer.bounds.size.height;

    CGRect cropRect;
    // Set the crop rect's smaller dimension to match the image's smaller dimension, and
    // scale its other dimension according to the width:height ratio.
    if (self.image.size.width < self.image.size.height) {
        cropRect.size.width = self.image.size.width;
        cropRect.size.height = cropRect.size.width / widthToHeightRatio;
    } else {
        cropRect.size.width = self.image.size.height * widthToHeightRatio;
        cropRect.size.height = self.image.size.height;
    }

    // Center the rect in the longer dimension
    if (cropRect.size.width < cropRect.size.height) {
        cropRect.origin.x = 0;
        cropRect.origin.y = (self.image.size.height - cropRect.size.height)/2.0;
        NSLog(@"Y Math: %f", (self.image.size.height - cropRect.size.height));
    } else {
        cropRect.origin.x = (self.image.size.width - cropRect.size.width)/2.0;
        cropRect.origin.y = 0;

        float cropValueDoubled = self.image.size.height - cropRect.size.height;
        float final = cropValueDoubled/2;
        finalXValueForCrop = final;
    }

    CGRect cropRectFinal = CGRectMake(cropRect.origin.x, finalXValueForCrop, cropRect.size.width, cropRect.size.height);
    CGImageRef imageRef = CGImageCreateWithImageInRect([self.image CGImage], cropRectFinal);
    UIImage *image2 = [[UIImage alloc]initWithCGImage:imageRef];
    self.image = image2;
    CGImageRelease(imageRef);

    self.bottomHalfView.image = self.image;

    if ([self.hasUserTakenAPhoto isEqual:@"YES"]) {
        [self.takingPhotoView setHidden:YES];
        self.image = [self screenshot];
        [_afterPhotoView setHidden:NO];
    }
}

So basically, the viewDidLoad method runs and the session starts. The session sends everything the camera sees to the captureOutput method. Then, as soon as the user presses the button to take a photo, we set the string value of self.hasUserTakenAPhoto to YES, the session stops, and since self.hasUserTakenAPhoto now equals YES, the captureOutput method places the last camera buffer into the self.image object for me to use.

I just can't figure this out because, like I said, it works 100% of the time when using the rear-facing camera. However, when using the front-facing camera it only works 50% of the time.

I have narrowed the problem down to the fact that self.hasUserTakenAPhoto does not update to YES fast enough when using the front-facing camera. I know this because, if you look at the second block of code I posted, it contains the statement NSLog(@"Has the user taken a photo?: %@", self.hasUserTakenAPhoto);.

When this works correctly, and the user has just pressed the button to capture a photo (which also stops the session), the very last time that NSLog(@"Has the user taken a photo?: %@", self.hasUserTakenAPhoto); runs, it prints the correct value of YES.

However, when it isn't working correctly and isn't updating fast enough, the last time it runs it still prints to the log with a value of null.

Why is self.hasUserTakenAPhoto not updating fast enough 50% of the time when using the front-facing camera? Even if we can't figure that out, it doesn't matter. I just need help coming up with an alternative solution to this.
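For what it's worth, the symptoms described above are consistent with a race: the flag is set and stopRunning is called on the main thread, while the delegate callback that reads the flag runs on "MyQueue", so with a slower front-camera frame rate no frame may arrive between the two calls. One possible alternative (a sketch under that assumption, not something taken from the question or the answer) is to stop the session from inside the delegate, only after a frame has actually been captured:

```objc
// Sketch of one alternative: do NOT call stopRunning in the IBAction right
// after setting the flag. Instead, let the delegate stop the session once it
// has actually grabbed a frame, so a slow frame rate cannot race the stop.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    if ([self.hasUserTakenAPhoto isEqualToString:@"YES"]) {
        self.image = [self imageFromSampleBuffer:sampleBuffer];
        [self.session stopRunning]; // stop only after a frame was captured

        dispatch_async(dispatch_get_main_queue(), ^{
            // Continue with the cropping/UI work from stillImageCapture here,
            // back on the main thread.
        });
    }
}
```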

Thanks for the help.

1 answer:

Answer 0 (score: 0)

I think this is a dispatch problem. Add CFRunLoopRun() at the return point of these methods:

– captureOutput:didOutputSampleBuffer:fromConnection:
– captureOutput:didDropSampleBuffer:fromConnection:
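Taken literally, that suggestion would look like the sketch below. Note this is only an unverified illustration of what the answer describes: CFRunLoopRun() blocks the calling queue until CFRunLoopStop() is called on that run loop, so it would stall further delegate callbacks:

```objc
// Literal reading of the answer's suggestion (unverified): run the current
// run loop at the return point of both delegate callbacks. CFRunLoopRun()
// blocks here until CFRunLoopStop() is called on this run loop.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // ... existing frame handling ...
    CFRunLoopRun();
}

- (void)captureOutput:(AVCaptureOutput *)captureOutput
  didDropSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CFRunLoopRun();
}
```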