How do I use the metadataOutputRectOfInterestForRect method and rectOfInterest property to scan a specific area? (QR Code)

Asked: 2015-09-04 15:12:43

Tags: ios swift qr-code avcapturesession

I am building a QR code scanner with Swift and everything works in that regard. The issue I have is that I only want a small area of the entire visible AVCaptureVideoPreviewLayer to be able to scan QR codes. I found out that in order to specify which area of the screen can read/capture QR codes, I have to use a property of AVCaptureMetadataOutput called rectOfInterest. The trouble is that when I assigned a CGRect to it, I couldn't scan anything. After doing more research online, I found suggestions that I need to use a method called metadataOutputRectOfInterestForRect to convert the CGRect into the correct format that the rectOfInterest property can actually use. However, the big issue I have now run into is that when I use the method metadataOutputRectOfInterestForRect, I get the error CGAffineTransformInvert: singular matrix. Can anyone tell me why I am getting this error? I believe I am using this method correctly according to the Apple developer documentation, and I believe I need it to accomplish my goal based on all the information I have found online. I will include links to the documentation I have found so far, as well as a code sample of the function I use to scan QR codes.

Code sample

Please note that the line causing the CGAffineTransformInvert: singular matrix error is the following assignment inside startScan():

captureMetadataOutput.rectOfInterest = videoPreviewLayer!.metadataOutputRectOfInterestForRect(scanRect)

func startScan() {
    // Get an instance of the AVCaptureDevice class to initialize a device object and provide the video
    // as the media type parameter.
    let captureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)

    // Get an instance of the AVCaptureDeviceInput class using the previous device object.
    var error: NSError?
    let input: AnyObject! = AVCaptureDeviceInput.deviceInputWithDevice(captureDevice, error: &error)

    if (error != nil) {
        // If any error occurs, simply log the description of it and don't continue any more.
        println("\(error?.localizedDescription)")
        return
    }

    // Initialize the captureSession object.
    captureSession = AVCaptureSession()

    // Set the input device on the capture session.
    captureSession?.addInput(input as! AVCaptureInput)

    // Initialize a AVCaptureMetadataOutput object and set it as the output device to the capture session.
    let captureMetadataOutput = AVCaptureMetadataOutput()
    captureSession?.addOutput(captureMetadataOutput)

    // Calculate a centered square rectangle with red border
    let size = 300
    let screenWidth = self.view.frame.size.width
    let xPos = (CGFloat(screenWidth) / CGFloat(2)) - (CGFloat(size) / CGFloat(2))
    let scanRect = CGRect(x: Int(xPos), y: 150, width: size, height: size)

    // Create UIView that will serve as a red square to indicate where to place QRCode for scanning
    scanAreaView = UIView()
    scanAreaView?.layer.borderColor = UIColor.redColor().CGColor
    scanAreaView?.layer.borderWidth = 4
    scanAreaView?.frame = scanRect
    view.addSubview(scanAreaView!)

    // Set delegate and use the default dispatch queue to execute the call back
    captureMetadataOutput.setMetadataObjectsDelegate(self, queue: dispatch_get_main_queue())
    captureMetadataOutput.metadataObjectTypes = [AVMetadataObjectTypeQRCode]

    // Initialize the video preview layer and add it as a sublayer to the viewPreview view's layer.
    videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    videoPreviewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
    videoPreviewLayer?.frame = view.layer.bounds
    captureMetadataOutput.rectOfInterest = videoPreviewLayer!.metadataOutputRectOfInterestForRect(scanRect)
    view.layer.addSublayer(videoPreviewLayer)

    // Start video capture.
    captureSession?.startRunning()

    // Initialize QR Code Frame to highlight the QR code
    qrCodeFrameView = UIView()
    qrCodeFrameView?.layer.borderColor = UIColor.greenColor().CGColor
    qrCodeFrameView?.layer.borderWidth = 2
    view.addSubview(qrCodeFrameView!)
    view.bringSubviewToFront(qrCodeFrameView!)

    // Add a button that will be used to close out of the scan view
    videoBtn.setTitle("Close", forState: .Normal)
    videoBtn.setTitleColor(UIColor.blackColor(), forState: .Normal)
    videoBtn.backgroundColor = UIColor.grayColor()
    videoBtn.layer.cornerRadius = 5.0;
    videoBtn.frame = CGRectMake(10, 30, 70, 45)
    videoBtn.addTarget(self, action: "pressClose:", forControlEvents: .TouchUpInside)
    view.addSubview(videoBtn)

    view.bringSubviewToFront(scanAreaView!)
}

Other things I have tried include passing a CGRect in directly as a parameter, which caused the same error. I have also passed scanAreaView!.bounds in as a parameter, since it really is the exact size/area I am looking for, and that also causes the same exact error. I have seen this done in other code examples online, and they don't seem to have the error I am getting. Here are some examples:

AVCaptureSession barcode scan

Xcode AVCapturesession scan Barcode in specific frame (rectOfInterest is not working)

Apple documentation

metadataOutputRectOfInterestForRect

rectOfInterest

Image of the scanAreaView I use as the designated area, which I am trying to make the only scannable part of the video preview layer:

[image: red scanAreaView overlay on the camera preview]

9 Answers:

Answer 0 (score: 26):

I wasn't able to clarify the problem with metadataOutputRectOfInterestForRect; however, you can also set the rectOfInterest property directly. You need the width and height of the video resolution, which you can specify in advance; I simply used the 640x480 setting. As stated in the documentation, these values have to be

"extending from (0,0) in the top-left corner to (1,1) in the bottom-right corner, relative to the device's natural orientation".

请参阅https://developer.apple.com/documentation/avfoundation/avcaptureoutput/1616304-metadataoutputrectofinterestforr

Here is the code I tried:

// Normalize the scan rect to 0-1 coordinates using the 640x480 video resolution
var x = scanRect.origin.x/480
var y = scanRect.origin.y/640
var width = scanRect.width/480
var height = scanRect.height/640
var scanRectTransformed = CGRectMake(x, y, width, height)
captureMetadataOutput.rectOfInterest = scanRectTransformed

I just tested it on an iOS device, and it seems to work.

Edit

At least I have solved the metadataOutputRectOfInterestForRect problem. I believe you have to do this after the camera has been properly set up and is running, since the camera's resolution is not available before that.

First, add a notification observer in viewDidLoad():
NSNotificationCenter.defaultCenter().addObserver(self, selector: Selector("avCaptureInputPortFormatDescriptionDidChangeNotification:"), name:AVCaptureInputPortFormatDescriptionDidChangeNotification, object: nil)

Then add the following method:

func avCaptureInputPortFormatDescriptionDidChangeNotification(notification: NSNotification) {

    captureMetadataOutput.rectOfInterest = videoPreviewLayer.metadataOutputRectOfInterestForRect(scanRect)

}

Here you can reset the rectOfInterest property. Then, in your code, you can display the AVMetadataObject within the didOutputMetadataObjects function:

var rect = videoPreviewLayer.rectForMetadataOutputRectOfInterest(YourAVMetadataObject.bounds)

dispatch_async(dispatch_get_main_queue(),{
     self.qrCodeFrameView.frame = rect
})

I tried it, and the rectangle always stayed within the specified area.
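Putting the pieces above together, a minimal sketch of the delegate callback that uses this conversion could look like the following (Swift 2-era syntax to match this answer; videoPreviewLayer and qrCodeFrameView are assumed to be view controller properties):

func captureOutput(captureOutput: AVCaptureOutput!, didOutputMetadataObjects metadataObjects: [AnyObject]!, fromConnection connection: AVCaptureConnection!) {
    // Take the first detected machine-readable code, if any.
    guard let metadataObject = metadataObjects.first as? AVMetadataMachineReadableCodeObject else {
        return
    }
    // bounds is normalized (0-1); convert it back into preview-layer coordinates.
    let rect = videoPreviewLayer.rectForMetadataOutputRectOfInterest(metadataObject.bounds)
    dispatch_async(dispatch_get_main_queue()) {
        self.qrCodeFrameView.frame = rect
    }
}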

Answer 1 (score: 15):

In iOS 9.3.2, I was able to make metadataOutputRectOfInterestForRect work by calling it right after the startRunning method of AVCaptureSession:

captureSession.startRunning()
let visibleRect = previewLayer.metadataOutputRectOfInterestForRect(previewLayer.bounds)
captureMetadataOutput.rectOfInterest = visibleRect

Answer 2 (score: 8):

Swift 4:

captureSession?.startRunning()
let scanRect = CGRect(x: 0, y: 0, width: 100, height: 100)
let rectOfInterest = layer.metadataOutputRectConverted(fromLayerRect: scanRect)
metaDataOutput.rectOfInterest = rectOfInterest
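For completeness, the reverse conversion is also available under the Swift 4 naming and is handy for positioning a highlight view over a detected code. This is only a sketch: layer is the same AVCaptureVideoPreviewLayer as above, and codeObject (an AVMetadataObject from the delegate callback) and highlightView are assumed names:

// codeObject.bounds is in normalized metadata-output coordinates (0-1);
// convert it into layer coordinates before using it as a view frame.
let highlightRect = layer.layerRectConverted(fromMetadataOutputRect: codeObject.bounds)
highlightView.frame = highlightRect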

Answer 3 (score: 4):

I managed to create the effect of having a region of interest. I tried all the proposed solutions, but the region was either CGPoint.zero or had an inappropriate size (after converting the frame to 0-1 coordinates). This is effectively a hack for those who can't get regionOfInterest to work, and it does not optimize the detection.

In:

func metadataOutput(_ output: AVCaptureMetadataOutput, didOutput metadataObjects: [AVMetadataObject], from connection: AVCaptureConnection) 

I have the following code:

// metadataObj is one of the AVMetadataObjects delivered to the delegate method above
if let visualCodeObject = videoPreviewLayer?.transformedMetadataObject(for: metadataObj),
   self.viewfinderView.frame.contains(visualCodeObject.bounds) {
    // The visual code is inside the viewfinder; you can now handle the detection.
}
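For context, a minimal sketch of how this check could sit inside the full delegate callback, iterating over every detected object (videoPreviewLayer and viewfinderView are assumed to be view controller properties, as in the snippet above):

func metadataOutput(_ output: AVCaptureMetadataOutput, didOutput metadataObjects: [AVMetadataObject], from connection: AVCaptureConnection) {
    for metadataObj in metadataObjects {
        // Convert the detected object from metadata-output coordinates into preview-layer coordinates.
        guard let visualCodeObject = videoPreviewLayer?.transformedMetadataObject(for: metadataObj) else {
            continue
        }
        // Only accept codes whose converted bounds fall inside the on-screen viewfinder.
        if viewfinderView.frame.contains(visualCodeObject.bounds) {
            // Handle the detection here, e.g. read
            // (visualCodeObject as? AVMetadataMachineReadableCodeObject)?.stringValue
        }
    }
}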

Answer 4 (score: 1):

/// After

    captureMetadataOutput.rectOfInterest = videoPreviewLayer.metadataOutputRectOfInterestForRect(scanRect)

/// Add this

captureSession.startRunning()

Answer 5 (score: 0):

To read a QR code/barcode from only a small area (a specific region) of the full camera view:

**Mandatory to keep the line below after startRunning:**

[_captureSession startRunning];
[captureMetadataOutput setRectOfInterest:[_videoPreviewLayer metadataOutputRectOfInterestForRect:scanRect] ];

Note:

  1. captureMetadataOutput -> AVCaptureMetadataOutput
  2. _videoPreviewLayer -> AVCaptureVideoPreviewLayer
  3. scanRect -> the rect where you want the QR code to be read.

Answer 6 (score: 0):

I know there are already solutions and it is quite late, but I achieved my goal by capturing the image of the full view and then cropping it to a specific rect.

func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    if let imageData = photo.fileDataRepresentation() {
        print(imageData)
        capturedImage = UIImage(data: imageData)

        let crop = cropToPreviewLayer(originalImage: capturedImage!)

        let sb = UIStoryboard(name: "Main", bundle: nil)
        let s = sb.instantiateViewController(withIdentifier: "KeyFobScanned") as! KeyFobScanned
        s.image = crop
        self.navigationController?.pushViewController(s, animated: true)
    }
}

private func cropToPreviewLayer(originalImage: UIImage) -> UIImage? {
    guard let cgImage = originalImage.cgImage else { return nil }

    // The on-screen area to keep, expressed in the preview layer's coordinate space.
    let scanRect = CGRect(x: stackView.frame.origin.x, y: stackView.frame.origin.y, width: innerView.frame.size.width, height: innerView.frame.size.height)

    // Convert that area into normalized (0-1) metadata-output coordinates.
    let outputRect = videoPreviewLayer.metadataOutputRectConverted(fromLayerRect: scanRect)

    let width = CGFloat(cgImage.width)
    let height = CGFloat(cgImage.height)

    // Scale the normalized rect up to the pixel dimensions of the captured image.
    let cropRect = CGRect(x: outputRect.origin.x * width, y: outputRect.origin.y * height, width: outputRect.size.width * width, height: outputRect.size.height * height)

    if let croppedCGImage = cgImage.cropping(to: cropRect) {
        return UIImage(cgImage: croppedCGImage, scale: 1.0, orientation: originalImage.imageOrientation)
    }

    return nil
}

Answer 7 (score: 0):

Possibly unrelated, but for me the problem was the screen orientation. In my portrait-only app, I wanted a barcode scanner that only detects codes on a horizontal band across the middle of the screen. I thought this would work:

CGRect(x: 0, y: 0.4, width: 1, height: 0.2)

Instead, I had to swap x with y and width with height:

CGRect(x: 0.4, y: 0, width: 0.2, height: 1)
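A small helper illustrating that swap might look like the sketch below; it assumes the rect is already normalized to 0-1 coordinates and that no mirroring adjustment is needed:

// Converts a normalized rect expressed relative to a portrait screen into the
// landscape-oriented coordinates that rectOfInterest expects, by swapping
// x with y and width with height.
func portraitRectOfInterest(_ normalizedRect: CGRect) -> CGRect {
    return CGRect(x: normalizedRect.origin.y,
                  y: normalizedRect.origin.x,
                  width: normalizedRect.height,
                  height: normalizedRect.width)
}

With it, the first example above becomes portraitRectOfInterest(CGRect(x: 0, y: 0.4, width: 1, height: 0.2)), which yields the second rect.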

Answer 8 (score: -3):

I wrote the following:

videoPreviewLayer?.frame = view.layer.bounds
videoPreviewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill

This worked for me, but I still don't know why.