I am creating an iPad app, and one of its features is scanning QR codes. I have the QR scanning part working, but the issue is that the iPad screen is very large and I will be scanning small QR codes off a sheet of paper, so many QR codes may be visible at once. I want to designate a smaller area of the display as the only region that can actually capture a QR code, to make it easier for the user to scan the specific code they want.
I have currently made a temporary UIView with a red border, centered on the page, as an example of where I want the user to scan the QR code. It looks like this:
I have searched everywhere for an answer on how to target a specific area of the AVCaptureVideoPreviewLayer for collecting the QR code data, and the suggestion I keep finding is to use the rectOfInterest property of AVCaptureMetadataOutput. I tried to do that, but when I set rectOfInterest to the same coordinates and size as the UIView that displays correctly, I cannot scan or recognize any QR codes. Can someone tell me why the scannable area does not match the position of the visible UIView, and how I can get rectOfInterest to match the red-bordered box I have added to the screen?
Here is the code for the scanning function I am currently using:
func startScan() {
    // Get an instance of the AVCaptureDevice class to initialize a device object and provide the video
    // as the media type parameter.
    let captureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)

    // Get an instance of the AVCaptureDeviceInput class using the previous device object.
    var error: NSError?
    let input: AnyObject! = AVCaptureDeviceInput.deviceInputWithDevice(captureDevice, error: &error)

    if (error != nil) {
        // If any error occurs, simply log the description of it and don't continue any more.
        println("\(error?.localizedDescription)")
        return
    }

    // Initialize the captureSession object.
    captureSession = AVCaptureSession()
    // Set the input device on the capture session.
    captureSession?.addInput(input as! AVCaptureInput)

    // Initialize an AVCaptureMetadataOutput object and set it as the output device to the capture session.
    let captureMetadataOutput = AVCaptureMetadataOutput()
    captureSession?.addOutput(captureMetadataOutput)

    // Calculate a centered square rectangle with red border.
    let size = 300
    let screenWidth = self.view.frame.size.width
    let xPos = (CGFloat(screenWidth) / CGFloat(2)) - (CGFloat(size) / CGFloat(2))
    let scanRect = CGRect(x: Int(xPos), y: 150, width: size, height: size)

    // Create a UIView that will serve as a red square to indicate where to place the QR code for scanning.
    scanAreaView = UIView()
    scanAreaView?.layer.borderColor = UIColor.redColor().CGColor
    scanAreaView?.layer.borderWidth = 4
    scanAreaView?.frame = scanRect

    // Set the delegate and use the default dispatch queue to execute the callback.
    captureMetadataOutput.setMetadataObjectsDelegate(self, queue: dispatch_get_main_queue())
    captureMetadataOutput.metadataObjectTypes = [AVMetadataObjectTypeQRCode]
    captureMetadataOutput.rectOfInterest = scanRect

    // Initialize the video preview layer and add it as a sublayer to the viewPreview view's layer.
    videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    videoPreviewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
    videoPreviewLayer?.frame = view.layer.bounds
    view.layer.addSublayer(videoPreviewLayer)

    // Start video capture.
    captureSession?.startRunning()

    // Initialize the QR code frame to highlight the QR code.
    qrCodeFrameView = UIView()
    qrCodeFrameView?.layer.borderColor = UIColor.greenColor().CGColor
    qrCodeFrameView?.layer.borderWidth = 2
    view.addSubview(qrCodeFrameView!)
    view.bringSubviewToFront(qrCodeFrameView!)

    // Add a button that will be used to close out of the scan view.
    videoBtn.setTitle("Close", forState: .Normal)
    videoBtn.setTitleColor(UIColor.blackColor(), forState: .Normal)
    videoBtn.backgroundColor = UIColor.grayColor()
    videoBtn.layer.cornerRadius = 5.0
    videoBtn.frame = CGRectMake(10, 30, 70, 45)
    videoBtn.addTarget(self, action: "pressClose:", forControlEvents: .TouchUpInside)
    view.addSubview(videoBtn)

    view.addSubview(scanAreaView!)
}
Update
The reason I don't think this is a duplicate is that the other post referenced is in Objective-C, while my code is in Swift. For those of us who are new to iOS, translating between the two is not trivial. Also, the answer on the referenced post doesn't show the actual update to the code that solved the problem. It does leave a good explanation about having to use the metadataOutputRectOfInterestForRect method to convert the rectangle coordinates, but I still can't seem to get this method to work, because without an example it isn't clear to me how it is supposed to work.
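Based on that explanation, my best guess at how it would be called is something like the sketch below (untested, and only my reading of the API: metadataOutputRectOfInterestForRect takes a rect in the preview layer's coordinates and returns the normalized rect that rectOfInterest expects, and it presumably needs the layer's frame set and the session running before the result is meaningful):

// Sketch only: convert scanRect after capture has started, instead of
// assigning the raw view coordinates to rectOfInterest as in the code above.
captureSession?.startRunning()
if let previewLayer = videoPreviewLayer {
    captureMetadataOutput.rectOfInterest =
        previewLayer.metadataOutputRectOfInterestForRect(scanRect)
}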
Answer 0 (Score: 2)
let metadataOutput = AVCaptureMetadataOutput()
metadataOutput.rectOfInterest = convertRectOfInterest(rect: scanRect)
After looking at another source (https://www.jianshu.com/p/8bb3d8cb224e), the convertRectOfInterest function has a small error: the return statement should be
return CGRect(x: newY, y: newX, width: newHeight, height: newWidth)
where the x and y, and the width and height inputs are interchanged, for it to work correctly.
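For clarity, here is a sketch of the full conversion function with that swap applied (it combines this fix with the convertRectOfInterest function from Answer 1 below; as I understand it, the swap is needed because rectOfInterest is expressed in the capture output's rotated, landscape-oriented coordinate space):

func convertRectOfInterest(rect: CGRect) -> CGRect {
    let screenRect = self.view.frame
    let screenWidth = screenRect.width
    let screenHeight = screenRect.height
    let newX = 1 / (screenWidth / rect.minX)
    let newY = 1 / (screenHeight / rect.minY)
    let newWidth = 1 / (screenWidth / rect.width)
    let newHeight = 1 / (screenHeight / rect.height)
    // Swap x/y and width/height: the metadata output's coordinate space
    // is rotated 90 degrees relative to a portrait-oriented screen.
    return CGRect(x: newY, y: newX, width: newHeight, height: newWidth)
}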
Answer 1 (Score: 1)
After fighting with the metadataOutputRectOfInterestForRect method for a whole morning, I got tired of it and decided to write the conversion myself:
func convertRectOfInterest(rect: CGRect) -> CGRect {
    let screenRect = self.view.frame
    let screenWidth = screenRect.width
    let screenHeight = screenRect.height
    let newX = 1 / (screenWidth / rect.minX)
    let newY = 1 / (screenHeight / rect.minY)
    let newWidth = 1 / (screenWidth / rect.width)
    let newHeight = 1 / (screenHeight / rect.height)
    return CGRect(x: newX, y: newY, width: newWidth, height: newHeight)
}

Note: I have an image view with a square box that shows the user where to scan; to get the correct position on screen, be sure to use imageView.frame and not imageView.bounds.

This has worked for me.
Answer 2 (Score: 1)
You need to convert the rectangle, which is expressed in the UIView's coordinates, into the coordinate system of the AVCaptureVideoPreviewLayer:
captureMetadataOutput.rectOfInterest = videoPreviewLayer.metadataOutputRectConverted(fromLayerRect: scanRect)
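A follow-up caveat (my own addition, not part of this answer): the conversion depends on the layer's frame and the video dimensions, so it only returns a useful value once layout has happened and the session is running. One pattern is to set rectOfInterest when the input port's format description becomes available, roughly like this:

// Sketch only: captureMetadataOutput, videoPreviewLayer and scanRect are
// assumed to be properties on the view controller, unlike in the question,
// where they are locals inside startScan().
NotificationCenter.default.addObserver(
    forName: .AVCaptureInputPortFormatDescriptionDidChange,
    object: nil,
    queue: .main
) { [weak self] _ in
    guard let self = self, let layer = self.videoPreviewLayer else { return }
    // The video geometry is known at this point, so the conversion is valid.
    self.captureMetadataOutput.rectOfInterest =
        layer.metadataOutputRectConverted(fromLayerRect: self.scanRect)
}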
Answer 3 (Score: -1)
This worked for me.
extension AVCaptureVideoPreviewLayer {
    func rectOfInterestConverted(parentRect: CGRect, fromLayerRect: CGRect) -> CGRect {
        let parentWidth = parentRect.width
        let parentHeight = parentRect.height
        let newX = (parentWidth - fromLayerRect.maxX) / parentWidth
        let newY = 1 - (parentHeight - fromLayerRect.minY) / parentHeight
        let width = 1 - (fromLayerRect.minX / parentWidth + newX)
        let height = (fromLayerRect.maxY / parentHeight) - newY
        return CGRect(x: newX, y: newY, width: width, height: height)
    }
}
Usage:

if let rect = videoPreviewLayer?.rectOfInterestConverted(parentRect: self.view.frame, fromLayerRect: scanAreaView.frame) {
    captureMetadataOutput.rectOfInterest = rect
}
Answer 4 (Score: -1)
metadataOutput.rectOfInterest = previewLayer.metadataOutputRectConverted(fromLayerRect: yourView.frame)
where previewLayer is the AVCaptureVideoPreviewLayer.