I'm working with the new ARKit 3, specifically simultaneous world and face tracking. I couldn't find any good tutorials or samples, and I don't know how to get started. My goal is to show, in a world-tracking session, a face that mirrors my facial expressions (captured by the front camera). I'd really appreciate any help.
// That's my setup for the configuration
private func setupFaceTracking() {
    // User face tracking in a world-tracking session needs a TrueDepth camera;
    // ARWorldTrackingConfiguration.supportsUserFaceTracking is the canonical check
    guard ARWorldTrackingConfiguration.supportsUserFaceTracking else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.isLightEstimationEnabled = true
    configuration.userFaceTrackingEnabled = true
    arView.session.run(configuration, options: [])
}
Answer 0 (score: 0)
You're on the right track so far. You need to set up an ARWorldTrackingConfiguration with userFaceTrackingEnabled. You'll probably also want some form of plane detection so you can anchor the face-driven node in the scene. If you're using the ARKit Xcode template, that would look something like:
override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)

    // Create a session configuration
    let configuration = ARWorldTrackingConfiguration()
    configuration.userFaceTrackingEnabled = true
    configuration.isLightEstimationEnabled = true
    configuration.planeDetection = [.horizontal]

    // Run the view's session
    sceneView.session.run(configuration)
}
To get the face mesh, you should use ARSCNFaceGeometry, which can be instantiated with the ARSCNView's Metal device and stored as a property on your view controller. For example:
lazy var faceGeometry: ARSCNFaceGeometry = {
    let device = sceneView.device!
    let maskGeometry = ARSCNFaceGeometry(device: device)!
    maskGeometry.firstMaterial?.diffuse.contents = UIColor.white
    return maskGeometry
}()
Now it's just a matter of getting the face geometry into the scene and responding to face updates. To place the geometry in the scene, I suggest using a tap gesture that puts a node on the tapped plane. For example:
lazy var tapGesture: UITapGestureRecognizer = {
    let gesture = UITapGestureRecognizer(target: self, action: #selector(didTap(_:)))
    return gesture
}()
Add it to the view in viewDidLoad, e.g. with sceneView.addGestureRecognizer(tapGesture).
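As a minimal sketch of that wiring (assuming the standard ARKit Xcode template, where sceneView is the ARSCNView outlet), it might look like this:

override func viewDidLoad() {
    super.viewDidLoad()

    // The view controller vends nodes for anchors, so it must be the
    // scene view's delegate (the ARKit template sets this already)
    sceneView.delegate = self

    // Attach the tap recognizer defined above
    sceneView.addGestureRecognizer(tapGesture)
}

Then handle the tap: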
@objc func didTap(_ recognizer: UITapGestureRecognizer) {
    let tapLocation = recognizer.location(in: sceneView)
    let hitTestResults = sceneView.hitTest(tapLocation, types: .existingPlaneUsingExtent)
    guard let hitTestResult = hitTestResults.first, hitTestResult.anchor is ARPlaneAnchor else { return }

    // Create an anchor, add it to the session, and wait for the delegate callback
    let newAnchor = ARAnchor(transform: hitTestResult.worldTransform)
    sceneView.session.add(anchor: newAnchor)
}
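As an aside (not part of the original answer): ARSCNView's hitTest(_:types:) has since been deprecated in favor of the raycasting API. A sketch of an equivalent handler using raycasting (the method name didTapUsingRaycast is just an illustrative choice):

@objc func didTapUsingRaycast(_ recognizer: UITapGestureRecognizer) {
    let tapLocation = recognizer.location(in: sceneView)

    // Build a raycast query against detected horizontal plane geometry (iOS 13+)
    guard let query = sceneView.raycastQuery(from: tapLocation,
                                             allowing: .existingPlaneGeometry,
                                             alignment: .horizontal),
          let result = sceneView.session.raycast(query).first else { return }

    sceneView.session.add(anchor: ARAnchor(transform: result.worldTransform))
}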
This adds an ARAnchor at the tap location. Once the anchor is added, renderer(_:nodeFor:) will be called, and we can vend a node containing the face geometry:
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    // Make sure it's not an `ARPlaneAnchor`
    guard !(anchor is ARPlaneAnchor) else { return SCNNode() }

    // Create an empty node
    let node = SCNNode()

    // Add the stored face geometry as the node's geometry
    node.geometry = faceGeometry

    // Move the node up to just above the plane
    node.position = SCNVector3(0.0, 0.15, 0.0)

    // Create a light so the full topology is visible
    // You could also just set `sceneView.autoenablesDefaultLighting = true` to not have to deal with lighting
    let omni = SCNLight()
    omni.type = .omni
    omni.intensity = 3000
    omni.color = UIColor.white
    let omniNode = SCNNode()
    omniNode.light = omni
    omniNode.position = SCNVector3(0, 1, 0.5)

    // Create a node to contain the face and the light
    let parentNode = SCNNode()
    parentNode.addChildNode(node)
    parentNode.addChildNode(omniNode)

    // Return the parent node
    return parentNode
}
Now that we can get the mask into the scene, it's just a matter of responding to updates. For that we use renderer(_:didUpdate:for:):
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    // If it's an `ARFaceAnchor`, update the face geometry
    if let faceAnchor = anchor as? ARFaceAnchor {
        faceGeometry.update(from: faceAnchor.geometry)
    }
    // If it's an `ARPlaneAnchor`, update the plane geometry and color the plane
    else if let anchor = anchor as? ARPlaneAnchor,
            let device = sceneView.device {
        let plane = ARSCNPlaneGeometry(device: device)
        plane?.update(from: anchor.geometry)
        node.geometry = plane

        // For debugging, add a color to the planes
        node.geometry?.firstMaterial?.diffuse.contents = UIColor.blue.withAlphaComponent(0.8)
    }
}
If you do all of this, you should end up with something like the following:

(demo animation from the original answer)
You need to make sure the device has a clear view of your face. If the mask stops responding, or a node doesn't appear where you tap, try holding the device in a different position, typically closer to or further from your face.
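If tracking gets badly lost, one option (my own suggestion, not from the original answer) is to reset the session; ARSession's run options support this directly:

// Suggested helper: re-run the current configuration to recover from lost tracking
func resetTracking() {
    guard let configuration = sceneView.session.configuration else { return }
    sceneView.session.run(configuration,
                          options: [.resetTracking, .removeExistingAnchors])
}

Note that .removeExistingAnchors will also discard any anchors you placed by tapping, so you would need to tap again to re-place the face node.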