I am building an app that shows a camera preview in one view and, on top of it, another view that draws a small rectangle whenever data is captured. For example, in a barcode scanning scenario the camera feed is displayed in a view, and when a barcode is found a rectangle is drawn over it to indicate that the code has been scanned. My current view hierarchy is as follows:
View
{
    -UIView cameraHolder
    {
        -UIView highlightView
    }
}
I managed to get the camera to display and to scan codes, but the highlight view never shows up. Why is this happening?
This is the code used to initialize the highlight view:
-(void)setUpHiglightView{
    self.highlightView = [[UIView alloc] init];
    self.highlightView.autoresizingMask = UIViewAutoresizingFlexibleTopMargin|UIViewAutoresizingFlexibleLeftMargin|UIViewAutoresizingFlexibleRightMargin|UIViewAutoresizingFlexibleBottomMargin;
    self.highlightView.layer.borderColor = [UIColor greenColor].CGColor;
    self.highlightView.layer.borderWidth = 3;
}
This is the code that runs when data is captured:
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection{
    CGRect highlightViewRect = CGRectZero;
    AVMetadataMachineReadableCodeObject *barCodeObject;
    NSString *detectionString = nil;
    NSArray *barCodeTypes = @[AVMetadataObjectTypeUPCECode, AVMetadataObjectTypeCode39Code, AVMetadataObjectTypeCode39Mod43Code,
                              AVMetadataObjectTypeEAN13Code, AVMetadataObjectTypeEAN8Code, AVMetadataObjectTypeCode93Code, AVMetadataObjectTypeCode128Code,
                              AVMetadataObjectTypePDF417Code, AVMetadataObjectTypeQRCode, AVMetadataObjectTypeAztecCode];

    for(AVMetadataObject *metadata in metadataObjects){
        for(NSString *type in barCodeTypes){
            if([metadata.type isEqualToString:type]){
                // Convert the metadata object into the preview layer's coordinate space
                barCodeObject = (AVMetadataMachineReadableCodeObject *)[prevLayer transformedMetadataObjectForMetadataObject:(AVMetadataMachineReadableCodeObject *)metadata];
                highlightViewRect = barCodeObject.bounds;
                detectionString = [(AVMetadataMachineReadableCodeObject *)metadata stringValue];
                break;
            }
        }
    }

    if(detectionString != nil){
        [self.itemIdTextField setText:detectionString];
    }else{
        //NSLog(@"Got Nothing");
    }

    NSLog(@"Position: [%f,%f][%f,%f]", highlightViewRect.origin.x, highlightViewRect.origin.y, highlightViewRect.size.height, highlightViewRect.size.width);
    self.highlightView.frame = highlightViewRect;
}
The code that initializes the camera:
-(void)setupBarCodeScanner{
    [self setUpHiglightView];

    session = [[AVCaptureSession alloc] init];
    device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    NSError *error = nil;
    input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if(input){
        [session addInput:input];
    }else{
        [self showAlertDialogWithTitle:@"Error" andMessage:@"There was an error while accessing your camera"];
        NSLog(@"Error: %@", error);
    }

    output = [[AVCaptureMetadataOutput alloc] init];
    [output setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
    [session addOutput:output];
    // metadataObjectTypes must be set after the output has been added to the session
    output.metadataObjectTypes = [output availableMetadataObjectTypes];

    prevLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
    prevLayer.frame = self.cameraHolder.bounds;
    prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [self.cameraHolder.layer addSublayer:prevLayer];
}
Thanks a lot!
Answer 0 (score: 1)
Add self.highlightView to self.view at the end of captureOutput:
[self.view addSubview:self.highlightView];
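For example, the end of the delegate method could then look like this (a minimal sketch; the superview check and the bringSubviewToFront: call are additions of mine to attach the view only once and keep it above the preview):

self.highlightView.frame = highlightViewRect;
if(self.highlightView.superview == nil){
    [self.view addSubview:self.highlightView]; // attach it once, on the first detection
}
[self.view bringSubviewToFront:self.highlightView]; // keep it above the camera preview

Note that transformedMetadataObjectForMetadataObject: returns a rect in the preview layer's coordinate space, which here is cameraHolder's space. If cameraHolder is not positioned at the origin of self.view, either convert the rect with -[UIView convertRect:fromView:] or add the highlight view to cameraHolder instead.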
Answer 1 (score: 0)
It looks like you are not adding that view anywhere. You create it and set its frame, but I don't see where you add it to the view hierarchy.