I am building a native extension for iOS in which I want to implement a barcode scanner.
I followed the AVCam sample, and I have already tried it in a native app (a full Xcode project), where it works fine.
Now I want to use this code from a Flex Mobile project. I have been able to create the ANE, add it to the Flex Mobile project, and call the ANE's functions.
It seems to work, but my problem is that I cannot see what the camera sees. I mean, I have a method that I call to start the camera and initialize the capture, and I have also implemented the captureOutput delegate. The strangest thing is that when I run my app I can see the logs from initCapture and from captureOutput, as if the app were capturing data, but on the iPad I cannot see the camera.
Here is part of the code I use:
- (void)initCapture
{
    NSLog(@"camera view capture init");

    /* Set up the input */
    self.device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput deviceInputWithDevice:self.device error:nil];

    /* Set up the output */
    captureOutput = [[AVCaptureVideoDataOutput alloc] init];
    // If the queue is blocked when new frames are captured, those frames are automatically dropped
    captureOutput.alwaysDiscardsLateVideoFrames = YES;
    //captureOutput.minFrameDuration = CMTimeMake(1, 10); // Uncomment to specify a minimum duration for each video frame
    [captureOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

    // Ask the video output for bi-planar 4:2:0 YCbCr frames (video range)
    //************************ Note this line
    NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
    NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange];
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
    [captureOutput setVideoSettings:videoSettings];

    // Create the capture session and add the input and output
    self.captureSession = [[AVCaptureSession alloc] init];
    [self.captureSession addInput:captureInput];
    [self.captureSession addOutput:captureOutput];

    if ([self.captureSession canSetSessionPreset:AVCaptureSessionPreset1280x720])
    {
        NSLog(@"camera view Set preview port to 1280X720");
        self.captureSession.sessionPreset = AVCaptureSessionPreset1280x720;
    }
    else if ([self.captureSession canSetSessionPreset:AVCaptureSessionPreset640x480])
    {
        // Fall back to 640x480 if 1280x720 is not supported on the device
        NSLog(@"camera view Set preview port to 640X480");
        self.captureSession.sessionPreset = AVCaptureSessionPreset640x480;
    }

    /* Add the preview layer */
    self.prevLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
    if ([self.prevLayer respondsToSelector:@selector(connection)])
        self.prevLayer.connection.videoOrientation = AVCaptureVideoOrientationLandscapeLeft;
    else
        self.prevLayer.orientation = AVCaptureVideoOrientationLandscapeLeft;
    self.prevLayer.frame = CGRectMake(150, 0, 700, 700);
    self.prevLayer.videoGravity = AVLayerVideoGravityResizeAspect;

    [self.view.layer addSublayer:self.prevLayer];
}
- (void)startScanning
{
    NSLog(@"camera view start scanning");
    self.state = LAUNCHING_CAMERA;
    [self.captureSession startRunning];
    self.prevLayer.hidden = NO;
    self.state = CAMERA;
}
#pragma mark AVCaptureVideoDataOutputSampleBufferDelegate

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    NSLog(@"camera view Capture output");
}
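The delegate above only logs, so nothing is actually read from the frames yet. For reference, here is a minimal sketch (not the original code) of how the incoming frame could be inspected inside that callback, assuming the kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange setting from initCapture; the dimension logging is purely illustrative:

// Sketch only: inspect the incoming frame (CoreMedia/CoreVideo come in via AVFoundation)
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (pixelBuffer == NULL)
        return;

    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    size_t width  = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    // Plane 0 is the luma (Y) plane in the bi-planar 4:2:0 format;
    // a barcode decoder would typically be fed this grayscale data.
    void *lumaBase = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
    size_t lumaBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);

    NSLog(@"camera frame %zux%zu, luma plane at %p, %zu bytes per row",
          width, height, lumaBase, lumaBytesPerRow);

    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
}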
How can I solve this?
Thanks a lot.
Answer 0 (score: 1)
I think I have solved it.
Instead of:
[self.view.layer addSublayer: self.prevLayer];
I wrote:
UIViewController *mainController = [UIApplication sharedApplication].keyWindow.rootViewController;
[mainController.view.layer addSublayer: self.prevLayer];
Now I can see the camera in my Flex app. Presumably the view controller created inside the ANE is never added to the window hierarchy, so layers attached to its view are never rendered; attaching the preview layer to the root view controller's view puts it on screen.
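For context, a minimal sketch of how the end of initCapture could look with that change (same property names as above; the frame is just an example, and keyWindow assumes a single-window app):

/* Sketch: attach the preview layer to the AIR root view controller's view,
   since (presumably) the ANE's own view controller is never presented and
   layers added to self.view.layer are therefore never rendered on screen. */
UIViewController *mainController = [UIApplication sharedApplication].keyWindow.rootViewController;

self.prevLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
self.prevLayer.videoGravity = AVLayerVideoGravityResizeAspect;
self.prevLayer.frame = CGRectMake(150, 0, 700, 700);   // example frame, adjust as needed
[mainController.view.layer addSublayer:self.prevLayer];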