Calling two subviews in viewDidLoad

Date: 2019-03-09 09:12:56

Tags: swift arkit coreml

I am new to Swift programming, so please excuse my wording.

I want to run two modules from the same view controller: a measuring feature built with ARKit, and below it a Core ML object-recognition module. When both run together, object recognition only works on a single snapshot image (I want it to run on live video). When I stop the ARKit thread, the object-detection module works on real-time video, but then ARKit no longer works.

See the example code below:

override func viewDidLoad() {
    super.viewDidLoad()

    // Set the view's delegate
    sceneView.delegate = self
    // Show statistics such as fps and timing information
    sceneView.showsStatistics = true

    // Background label for the measurement readout
    measurementLabel.frame = CGRect(x: 0, y: 0, width: view.frame.size.width, height: 100)
    measurementLabel.backgroundColor = .white
    measurementLabel.text = "0 inches"
    measurementLabel.textAlignment = .center
    sceneView.addSubview(measurementLabel)

    // Sets the number of taps needed to trigger the handler
    let tapRecognizer = UITapGestureRecognizer(target: self, action: #selector(handleTap))
    tapRecognizer.numberOfTapsRequired = 1

    // Adds the handler to the scene view
    sceneView.addGestureRecognizer(tapRecognizer)
    // TODO: Lock taps after 4 inputs and save to array

    // Core ML integration
    // Setting up the camera capture session
    let captureSession = AVCaptureSession()
    // captureSession.sessionPreset = .photo

    guard let captureDevice = AVCaptureDevice.default(for: .video) else { return }
    guard let input = try? AVCaptureDeviceInput(device: captureDevice) else { return }
    captureSession.addInput(input)

    // Preview layer showing the capture session's video
    let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    view.layer.addSublayer(previewLayer)
    previewLayer.frame = view.frame

    // Vision processing: deliver sample buffers to the delegate on a background queue
    let dataOutput = AVCaptureVideoDataOutput()
    dataOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
    captureSession.addOutput(dataOutput)

    // Start the session only after all inputs and outputs are configured
    captureSession.startRunning()

    setupIdentifierConfidenceLabel()
}

My question is: what mistake am I making when running them in parallel? I know this is a fairly small case, but I can't seem to figure it out.
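Update: since ARKit's own ARSession and a separate AVCaptureSession both compete for the same camera, one workaround I have seen suggested is to drop the second capture session entirely and feed ARKit's frames into Vision/Core ML instead. A rough, untested sketch of that idea (`model` and `identifierLabel` are hypothetical names standing in for my actual Core ML model and output label; it assumes `sceneView.session.delegate = self` is set in `viewDidLoad`):

```swift
import ARKit
import Vision

extension ViewController: ARSessionDelegate {
    // ARKit calls this for every captured camera frame, so we can reuse
    // those frames for classification instead of opening a second session.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // `model` is a placeholder for the compiled Core ML model (MLModel).
        guard let visionModel = try? VNCoreMLModel(for: model) else { return }

        let request = VNCoreMLRequest(model: visionModel) { request, _ in
            guard let best = (request.results as? [VNClassificationObservation])?.first
            else { return }
            DispatchQueue.main.async {
                // `identifierLabel` is a placeholder for the on-screen label.
                self.identifierLabel.text = "\(best.identifier) \(best.confidence)"
            }
        }

        // ARFrame.capturedImage is a CVPixelBuffer of the camera image.
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                            options: [:])
        try? handler.perform([request])
    }
}
```

With this approach the `AVCaptureSession`, preview layer, and `AVCaptureVideoDataOutput` blocks in `viewDidLoad` would be removed, since ARKit supplies both the live preview and the pixel buffers.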

0 Answers