I'm following the YouTube tutorial on CoreML: Real Time Camera Object Detection with Machine Learning (Swift 4) by Brian.
captureSession.sessionPreset = .photo
On this line I get the error:
Type 'String' has no member 'photo'.
dataOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
The other error is on the 'self' part. It reads:
Cannot convert value of type 'ViewController' to expected argument type 'AVCaptureVideoDataOutputSampleBufferDelegate!'
Here is the whole code:
import UIKit
import AVFoundation
import AVKit
import Vision

class ViewController: UIViewController, AVCaptureAudioDataOutputSampleBufferDelegate {

    override func viewDidLoad() {
        super.viewDidLoad()

        // here is where we start up the camera
        let captureSession = AVCaptureSession()
        captureSession.sessionPreset = .photo

        guard let captureDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo) else { return }
        guard let input = try? AVCaptureDeviceInput(device: captureDevice) else { return }
        captureSession.addInput(input)
        captureSession.startRunning()

        let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        view.layer.addSublayer(previewLayer!)
        previewLayer!.frame = view.frame

        let dataOutput = AVCaptureVideoDataOutput()
        dataOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
        captureSession.addOutput(dataOutput)
    }
}
Answer 0 (score: 0)
I think you are getting the second error because of the first one. Replace the line that produces the error with the version below for your Swift version:
Swift 3
captureSession.sessionPreset = AVCaptureSessionPresetPhoto
Swift 4
captureSession.sessionPreset = AVCaptureSession.Preset.photo
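For reference, here is a minimal Swift 4 sketch of the same setup. It assumes the class is meant to adopt AVCaptureVideoDataOutputSampleBufferDelegate (the protocol named in the second error message) rather than the audio variant declared in the posted code, and it uses AVCaptureDevice.default(for: .video), the Swift 4 replacement for defaultDevice(withMediaType:).

import UIKit
import AVFoundation

class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

    override func viewDidLoad() {
        super.viewDidLoad()

        let captureSession = AVCaptureSession()
        // Swift 4 spelling of the preset (the enum case resolves to AVCaptureSession.Preset.photo)
        captureSession.sessionPreset = .photo

        // Swift 4 replacement for defaultDevice(withMediaType:)
        guard let captureDevice = AVCaptureDevice.default(for: .video) else { return }
        guard let input = try? AVCaptureDeviceInput(device: captureDevice) else { return }
        captureSession.addInput(input)
        captureSession.startRunning()

        let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        view.layer.addSublayer(previewLayer)
        previewLayer.frame = view.frame

        let dataOutput = AVCaptureVideoDataOutput()
        // 'self' now satisfies AVCaptureVideoDataOutputSampleBufferDelegate
        dataOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
        captureSession.addOutput(dataOutput)
    }

    // Called for every captured video frame; this is where the Vision/CoreML request would run.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // process sampleBuffer here
    }
}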