I am running perspective tests with ARKit and SceneKit. The idea is to improve the 3D rendering when a flat 3D model is displayed on the ground. I have already opened another ticket for a perspective issue that is close to being resolved (ARKit Perspective Rendering).
However, after many tests / 3D displays, I noticed that the size of the 3D model can sometimes vary when it is anchored (in width and in length). The model I usually display is 16 meters long and 1.5 meters wide, so you can imagine how much this distorts my rendering. I do not know why my display can vary in 3D model size; it may come from tracking and from my test environment.
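Since the model is placed in real-world meters relative to the detected image, one thing I plan to check is whether ARKit agrees with the physicalSize I declared for the reference image. A minimal diagnostic sketch, assuming iOS 13+ (automaticImageScaleEstimationEnabled and estimatedScaleFactor are the ARKit APIs; the session setup is illustrative, not my actual code):

import ARKit

func runDetectionSession(on sceneView: ARSCNView, images: Set<ARReferenceImage>) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionImages = images
    // Let ARKit refine its estimate of the image's real-world size.
    configuration.automaticImageScaleEstimationEnabled = true
    sceneView.session.run(configuration)
}

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    // A factor that stays far from 1.0 would mean the declared physicalSize
    // does not match what ARKit observes, which would scale everything
    // placed relative to the image.
    print("Estimated scale factor: \(imageAnchor.estimatedScaleFactor)")
    print("Declared physical size: \(imageAnchor.referenceImage.physicalSize)")
}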
Here is the code I use to add the 3D model to the scene:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    let referenceImage = imageAnchor.referenceImage
    let imageAnchorPosition = imageAnchor.transform.columns.3
    print("Image detected")

    let modelName = "couloirV2"
    //let modelName = "lamp"
    guard let object = VirtualObject
        .availableObjects
        .filter({ $0.modelName == modelName })
        .first else { fatalError("Cannot get model \(modelName)") }
    print("Loading \(object)...")

    self.sceneView.prepare([object], completionHandler: { _ in
        self.updateQueue.async {
            // Move the object to the image anchor's position.
            object.position.x = imageAnchorPosition.x
            object.position.y = imageAnchorPosition.y
            object.position.z = imageAnchorPosition.z

            // Save the initial y value for the slider handler function.
            self.tmpYPosition = object.position.y

            // Match the anchor node's y orientation.
            object.orientation.y = node.orientation.y

            print("Adding object to the scene")

            // Prepare the object.
            object.load()

            // Show the origin axes.
            object.showObjectOrigin()

            // Translate along the z axis so the model lines up exactly with the detected image.
            var translation = matrix_identity_float4x4
            translation.columns.3.z += Float(referenceImage.physicalSize.height / 2)
            object.simdTransform = matrix_multiply(object.simdTransform, translation)

            self.sceneView.scene.rootNode.addChildNode(object)
            self.virtualObjectInteraction.selectedObject = object
            self.sceneView.addOrUpdateAnchor(for: object)
        }
    })
}
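For comparison, below is a simplified variant I am considering (a sketch only, with a stand-in for my model loading): instead of copying the anchor's transform once into world coordinates, the model is added as a child of the node ARKit supplies for the anchor, so it follows any later refinements ARKit makes to the image anchor during tracking.

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    let referenceImage = imageAnchor.referenceImage

    // Stand-in for the real model; in my project this is the VirtualObject above.
    let object = SCNNode()

    // Offset in the anchor's local space: half the image height along z,
    // mirroring the matrix translation in the code above.
    object.position = SCNVector3(0, 0, Float(referenceImage.physicalSize.height / 2))

    // As a child of the anchored node, the model stays in sync when ARKit
    // refines the image anchor's transform.
    node.addChildNode(object)
}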