I'm developing an iOS application and I need to do some real-time scanning of objects. For that I only want to sample 3 or 4 frames per second (a sketch of how I plan to cap the frame rate follows the setup code below). This is the code where I create the capture session:
// Create an AVCaptureSession
AVCaptureSession *captureSession = [[AVCaptureSession alloc] init];
captureSession.sessionPreset = AVCaptureSessionPresetHigh;
// Find a suitable AVCaptureDevice
AVCaptureDevice *photoCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
// Create and add an AVCaptureDeviceInput
NSError *error = nil;
AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:photoCaptureDevice error:&error];
if (videoInput) {
    [captureSession addInput:videoInput];
}
// Create and add an AVCaptureVideoDataOutput
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
// we want BGRA, both CoreGraphics and OpenGL work well with 'BGRA'
NSDictionary *rgbOutputSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCMPixelFormat_32BGRA]
                                                              forKey:(id)kCVPixelBufferPixelFormatTypeKey];
[videoOutput setVideoSettings:rgbOutputSettings];
// Configure your output, and start the session
dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
[videoOutput setSampleBufferDelegate:self queue:queue];
if (videoOutput) {
    [captureSession addOutput:videoOutput];
}
[captureSession startRunning];
// Setting up the preview layer for the camera
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
previewLayer.frame = cameraViewCanvas.bounds;
// Add the preview layer as a sublayer of the main view
[cameraViewCanvas.layer addSublayer:previewLayer];
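Since I only need 3 or 4 frames per second, one option I'm considering is capping the device frame rate right after the input is set up. This is only a sketch, assuming iOS 7+ and that the requested rate falls within the active format's videoSupportedFrameRateRanges (on older versions the equivalent setting lives on the AVCaptureConnection as videoMinFrameDuration):
// Sketch: cap capture to roughly 3-4 fps (assumes iOS 7+ and that the
// durations below are inside the active format's supported frame rate ranges)
NSError *frameRateError = nil;
if ([photoCaptureDevice lockForConfiguration:&frameRateError]) {
    // Minimum frame duration of 1/4 s caps the rate at ~4 fps;
    // maximum frame duration of 1/3 s keeps it at ~3 fps or above.
    photoCaptureDevice.activeVideoMinFrameDuration = CMTimeMake(1, 4);
    photoCaptureDevice.activeVideoMaxFrameDuration = CMTimeMake(1, 3);
    [photoCaptureDevice unlockForConfiguration];
} else {
    NSLog(@"Could not lock device for frame rate configuration: %@", frameRateError);
}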
And the delegate method is called on the queue:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    if (isCapturing) {
        NSLog(@"output");
        CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        // CMCopyDictionaryOfAttachments follows the Copy rule, so release it when done
        CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, sampleBuffer, kCMAttachmentMode_ShouldPropagate);
        CIImage *ciImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:(__bridge NSDictionary *)attachments];
        if (attachments) {
            CFRelease(attachments);
        }
        UIImage *newFrame = [[UIImage alloc] initWithCIImage:ciImage];
        [self showImage:newFrame];
    }
}
The problem is that I cannot see the image on the screen; there are no errors or warnings, but nothing is displayed. My question is: am I on the right track, and what do I need to fix in my code to get the image to show up on screen?
Answer 0 (score: 0)
Late to the party, but the problem may be that the image is not being set on the main thread (captureOutput is, most likely, called on the separate dispatch queue you created):
dispatch_async(dispatch_get_main_queue(), ^{
    [self showImage:newFrame];
});
or
[self performSelectorOnMainThread:@selector(showImage:) withObject:newFrame waitUntilDone:YES];
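For completeness, here is a minimal sketch of what showImage: could look like (the original implementation isn't shown in the question). It assumes a hypothetical UIImageView property self.frameImageView layered over cameraViewCanvas; the CIContext round-trip is there because a UIImage created with -initWithCIImage: has no backing CGImage, which can also keep it from appearing in a plain UIImageView:
// Sketch only: frameImageView is a hypothetical UIImageView property,
// and this method is expected to be called on the main thread.
- (void)showImage:(UIImage *)newFrame {
    NSAssert([NSThread isMainThread], @"UIKit must be updated on the main thread");
    UIImage *displayImage = newFrame;
    if (newFrame.CIImage) {
        // Render the CIImage into a CGImage-backed UIImage so UIImageView can display it.
        static CIContext *ciContext = nil;
        static dispatch_once_t onceToken;
        dispatch_once(&onceToken, ^{
            ciContext = [CIContext contextWithOptions:nil];
        });
        CGImageRef cgImage = [ciContext createCGImage:newFrame.CIImage fromRect:newFrame.CIImage.extent];
        displayImage = [UIImage imageWithCGImage:cgImage];
        CGImageRelease(cgImage);
    }
    self.frameImageView.image = displayImage;
}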