ARSCNView automatically renders the live video feed from the device camera as the scene background. However, I'm trying to manually set the scene background contents to one of the iPhone X cameras. I can successfully set sceneView.scene.background.contents to a color, an image, and so on, but not to the front or rear camera. I've tried setting sceneView.scene.background.contents to an AVCaptureDevice, an AVCaptureDeviceInput, and an AVCaptureVideoPreviewLayer. I've also tried a standalone AVCaptureVideoPreviewLayer in a separate view. None of these give me the camera video and face tracking at the same time.
I suspect this may be because ARFaceTrackingConfiguration() automatically uses the front camera, so a separate camera feed can't be displayed simultaneously? I thought there might be some bug in iOS 11.2, but I've had no luck on 11.3 either. Depending on my approach, I get one of the following:
Swift 4 / iOS 11 / iPhone X
Setting scene.background.contents directly:
@IBOutlet weak var sceneView: ARSCNView!

override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    sceneView.delegate = self
    sceneView.session.run(ARFaceTrackingConfiguration())
    // wait for scene and camera to activate
    DispatchQueue.main.asyncAfter(deadline: .now() + 4.0) {
        let captureDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front)!
        self.sceneView.scene.background.contents = captureDevice
    }
}
Using a separate video preview layer:
@IBOutlet weak var sceneView: ARSCNView!

var captureSession = AVCaptureSession()
var inp: AVCaptureDeviceInput!
var videoPreviewLayer: AVCaptureVideoPreviewLayer!

override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    sceneView.delegate = self
    sceneView.scene.background.contents = UIColor.yellow
    sceneView.session.run(ARFaceTrackingConfiguration())

    if let cam = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front) {
        do { inp = try AVCaptureDeviceInput(device: cam) }
        catch { fatalError("Failed to get device input") }
    } else { fatalError("Failed to get camera device") }

    captureSession.addInput(inp)
    videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    videoPreviewLayer?.frame = view.bounds
    videoPreviewLayer?.videoGravity = .resizeAspectFill

    DispatchQueue.main.asyncAfter(deadline: .now() + 4.0) {
        self.captureSession.startRunning()
        self.view.layer.addSublayer(self.videoPreviewLayer!)
    }
}
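One way to confirm that the face-tracking session already owns the front camera is to watch the separate capture session for interruptions. This is not from the original post; it is a minimal sketch using standard AVFoundation notifications, assuming the captureSession property declared above:

// Sketch: call this after configuring captureSession (requires import AVFoundation).
// If another client (such as the ARSession) already holds the camera, the separate
// AVCaptureSession gets interrupted instead of delivering video.
func observeCaptureInterruptions() {
    NotificationCenter.default.addObserver(
        forName: .AVCaptureSessionWasInterrupted,
        object: captureSession,
        queue: .main
    ) { note in
        // .videoDeviceInUseByAnotherClient indicates the camera is in use elsewhere,
        // which would explain a missing preview while face tracking runs.
        if let raw = note.userInfo?[AVCaptureSessionInterruptionReasonKey] as? Int,
           let reason = AVCaptureSession.InterruptionReason(rawValue: raw) {
            print("Capture session interrupted: \(reason)")
        }
    }
}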
Answer 0 (score: 0)
It seems only one camera can be used at a time?
Note: Media capture does not support simultaneous capture of the front-facing and rear-facing cameras on iOS devices.
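For the face-tracking case specifically, a second capture pipeline shouldn't be needed at all: while ARFaceTrackingConfiguration is running, ARKit already delivers the front-camera image on each ARFrame. A minimal sketch of reading it from the running session (standard ARKit API, not part of the original answer):

// Sketch: the front-camera frames ARKit captures during face tracking are
// exposed as a CVPixelBuffer on each ARFrame; convert as needed.
// Requires import ARKit and import CoreImage.
func currentCameraImage(from sceneView: ARSCNView) -> CIImage? {
    guard let frame = sceneView.session.currentFrame else { return nil }
    return CIImage(cvPixelBuffer: frame.capturedImage)
}

And as noted in the question itself, ARSCNView renders this camera image as the scene background automatically when scene.background.contents is left untouched, so manually assigning a capture device is only necessary if you want something other than the default feed.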