I've read tons of StackOverflow answers on how to move an object by dragging it across the screen. Some use hit tests against .featurePoints, some use the gesture translation or just track the lastPosition of the object. But honestly... none of them work the way everyone expects them to.
Hit testing against .featurePoints just makes the object jump all over the place, because you don't always hit a feature point while dragging your finger. I don't understand why everyone keeps suggesting this.
Solutions like this one do work: Dragging SCNNode in ARKit Using SceneKit
But the object doesn't really follow your finger, and the moment you take a few steps, or change the angle of the object or the camera... and then try to move the object... the x and z are inverted... which makes total sense given how it's done.
I really want to move objects as well as the Apple demo does, but when I look at Apple's code... it is so weird and over-complicated that I can't even understand it. Their technique for moving objects so beautifully is nothing like what everyone proposes online. https://developer.apple.com/documentation/arkit/handling_3d_interaction_and_ui_controls_in_augmented_reality
There has to be a simpler way.
Answer 0 (score: 2)
I added some ideas to Claesson's answer. I noticed some lag when dragging the node around; the node could not keep up with the finger's movement.
To make the node move more smoothly, I added a variable that keeps track of the node currently being moved, and I set its position to the location of the touch.
var selectedNode: SCNNode?
In addition, I set a .categoryBitMask value to specify the category of nodes that can be edited (moved). The default bit mask value is 1.
The reason we set the category bit mask is to distinguish between different kinds of nodes and to specify which ones we want to select (move around, etc.).
enum CategoryBitMask: Int {
    case categoryToSelect = 2       // 010
    case otherCategoryToSelect = 4  // 100
    // you can add more bit masks below . . .
}
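With the enum in place, every node you want to be draggable needs its category assigned. A minimal sketch, assuming a node named boxNode that you created elsewhere:

// Mark the node as selectable by the category-filtered hit test below.
boxNode.categoryBitMask = CategoryBitMask.categoryToSelect.rawValue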
Then, I added a UILongPressGestureRecognizer in viewDidLoad():
let longPressRecognizer = UILongPressGestureRecognizer(target: self, action: #selector(longPressed))
self.sceneView.addGestureRecognizer(longPressRecognizer)
Below is the UILongPressGestureRecognizer handler I used to detect the long press that initiates the dragging of the node.
First, get the touch location from the recognizerView:
@objc func longPressed(recognizer: UILongPressGestureRecognizer) {
    guard let recognizerView = recognizer.view as? ARSCNView else { return }
    let touch = recognizer.location(in: recognizerView)
The following code runs once, when the long press is first detected.
Here, we perform a hitTest to select the node that has been touched. Note that we specify a .categoryBitMask option to select only nodes of the category CategoryBitMask.categoryToSelect:
    // Runs once when long press is detected.
    if recognizer.state == .began {
        // perform a hitTest, filtered to our selectable category
        let hitTestResult = self.sceneView.hitTest(touch, options: [SCNHitTestOption.categoryBitMask: CategoryBitMask.categoryToSelect.rawValue])
        guard let hitNode = hitTestResult.first?.node else { return }

        // Set hitNode as selected
        self.selectedNode = hitNode
The following code runs repeatedly until the user lifts the finger.
Here, we perform another hitTest to obtain the plane you want the node to move along:
    // Runs periodically after .began
    } else if recognizer.state == .changed {
        // make sure a node has been selected from .began
        guard let hitNode = self.selectedNode else { return }

        // perform a hitTest to obtain the plane
        let hitTestPlane = self.sceneView.hitTest(touch, types: .existingPlane)
        guard let hitPlane = hitTestPlane.first else { return }
        hitNode.position = SCNVector3(hitPlane.worldTransform.columns.3.x,
                                      hitNode.position.y,
                                      hitPlane.worldTransform.columns.3.z)
Make sure to deselect the node when the finger is lifted from the screen:
    // Runs once when finger is removed from screen.
    } else if recognizer.state == .ended || recognizer.state == .cancelled || recognizer.state == .failed {
        guard self.selectedNode != nil else { return }

        // Undo selection
        self.selectedNode = nil
    }
}
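If the movement still looks slightly jittery when the hit test alternates between planes, one optional tweak (my addition, not part of the original answer) is to wrap the position update from the .changed branch in a short SCNTransaction so each jump is animated:

// Inside the .changed branch, instead of setting hitNode.position directly:
SCNTransaction.begin()
SCNTransaction.animationDuration = 0.1 // tune to taste
hitNode.position = SCNVector3(hitPlane.worldTransform.columns.3.x,
                              hitNode.position.y,
                              hitPlane.worldTransform.columns.3.z)
SCNTransaction.commit()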
Answer 1 (score: 1)
A bit of a late answer, but I know I had some problems solving this as well. Eventually, I figured out a way to do it by performing two separate hit tests each time my gesture recognizer is called.
First, I perform a hit test for my 3D object, to detect whether I am currently pressing on an object or not (you would otherwise get results for pressing featurePoints, planes, etc. if no options are specified). I do this by using the .categoryBitMask value of SCNHitTestOption.
Keep in mind that you have to assign the correct .categoryBitMask value to your object node and all of its child nodes beforehand in order for the hit test to work. I declare an enum I can use for this:

enum BodyType: Int {
    case ObjectModel = 2
}

As is apparent from the answers to a question about .categoryBitMask values that I posted here, it is important to consider which values you assign to your bit masks.
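Because the hit test only matches nodes whose own bit mask is set, a small recursive helper can tag a model and all of its children in one call. A sketch (not from the original answer; the function name is mine):

// Recursively assign a category bit mask to a node and all of its child nodes.
func setCategoryBitMask(_ mask: Int, for node: SCNNode) {
    node.categoryBitMask = mask
    for child in node.childNodes {
        setCategoryBitMask(mask, for: child)
    }
}

// e.g. right after loading the model:
// setCategoryBitMask(BodyType.ObjectModel.rawValue, for: modelNode)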
Below is the code I use in combination with a UILongPressGestureRecognizer in order to select the object I am currently pressing:

guard let recognizerView = recognizer.view as? ARSCNView else { return }
let touch = recognizer.location(in: recognizerView)

let hitTestResult = self.sceneView.hitTest(touch, options: [SCNHitTestOption.categoryBitMask: BodyType.ObjectModel.rawValue])
guard let modelNodeHit = hitTestResult.first?.node else { return }

After this, I perform a second hit test in order to find the plane I am pressing on. You can use the .existingPlaneUsingExtent type if you don't want the object to move beyond the edges of a plane, or the .existingPlane type if you want to move the object indefinitely along a detected plane surface:
var planeHit: ARHitTestResult!

if recognizer.state == .changed {
    let hitTestPlane = self.sceneView.hitTest(touch, types: .existingPlane)
    guard hitTestPlane.first != nil else { return }
    planeHit = hitTestPlane.first!
    modelNodeHit.position = SCNVector3(planeHit.worldTransform.columns.3.x, modelNodeHit.position.y, planeHit.worldTransform.columns.3.z)
} else if recognizer.state == .ended || recognizer.state == .cancelled || recognizer.state == .failed {
    modelNodeHit.position = SCNVector3(planeHit.worldTransform.columns.3.x, modelNodeHit.position.y, planeHit.worldTransform.columns.3.z)
}
I made a GitHub repo when I was trying this out. You can check it out if you want to see my method in practice, but I did not make it with the intention of others using it, so it is quite unfinished. Also, the development branch should support some functionality for objects with more childNodes.
Answer 2 (score: 0)
Short answer: To get the nice, fluid dragging you see in the Apple demo project, you have to do it the way it is done in the Apple demo project (Handling 3D Interaction). On the other hand, I agree with you that the code can be confusing on first look. Computing the correct movement of an object placed on a floor plane, correctly from every position and viewing angle, is not easy at all. It is a complex code construct that produces this superb dragging effect. Apple did a great job here, but did not make it too easy for us.
Full answer: Stripping the AR Interaction template down to what you need is a nightmare, but it works if you invest enough time. If you want to start from scratch, basically begin with the common ARKit/SceneKit Xcode template (the one containing the spaceship).
You will also need the entire AR Interaction template project from Apple. (The link is included in the SO question.) At the end you should be able to drag something called a VirtualObject, which is in fact a special SCNNode. In addition, you will have a nice Focus Square, which is useful for any purpose, such as initially placing objects or adding a floor or a wall. (Some of the code for the dragging effect and for the Focus Square usage is merged or linked together; doing it without the Focus Square would actually be more complicated.)
Getting started: copy the required source files from the AR Interaction template into your empty project. The later steps reference FocusSquare, ThresholdPanGesture, VirtualObject, and VirtualObjectARView, so you will need at least those files and their helpers.
Add UIGestureRecognizerDelegate to the ViewController class definition, like so:
class ViewController: UIViewController, ARSCNViewDelegate, UIGestureRecognizerDelegate {
Add this code to the definitions section of ViewController.swift, right before viewDidLoad:
// MARK: for the Focus Square
// SUPER IMPORTANT: the screenCenter must be defined this way
var focusSquare = FocusSquare()
var screenCenter: CGPoint {
    let bounds = sceneView.bounds
    return CGPoint(x: bounds.midX, y: bounds.midY)
}
var isFocusSquareEnabled : Bool = true

// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
/// The tracked screen position used to update the `trackedObject`'s position in `updateObjectToCurrentTrackingPosition()`.
private var currentTrackingPosition: CGPoint?

/**
 The object that has been most recently interacted with.
 The `selectedObject` can be moved at any time with the tap gesture.
 */
var selectedObject: VirtualObject?

/// The object that is tracked for use by the pan and rotation gestures.
private var trackedObject: VirtualObject? {
    didSet {
        guard trackedObject != nil else { return }
        selectedObject = trackedObject
    }
}

/// Developer setting to translate assuming the detected plane extends infinitely.
let translateAssumingInfinitePlane = true
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
In viewDidLoad, before you set up the scene, add this code:
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
let panGesture = ThresholdPanGesture(target: self, action: #selector(didPan(_:)))
panGesture.delegate = self
// Add gestures to the `sceneView`.
sceneView.addGestureRecognizer(panGesture)
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
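(For reference: ThresholdPanGesture is one of the files you copied from the template. It is a small UIPanGestureRecognizer subclass that only reports isThresholdExceeded once the touch has moved a minimum distance, so taps do not start a drag. The idea is roughly along these lines; a simplified sketch, not Apple's exact code:

import UIKit.UIGestureRecognizerSubclass

class ThresholdPanGesture: UIPanGestureRecognizer {
    /// Whether the displacement threshold has been exceeded.
    private(set) var isThresholdExceeded = false

    /// Minimum displacement (in points) before the drag counts as intentional.
    private static let threshold: CGFloat = 30

    override var state: UIGestureRecognizer.State {
        didSet {
            switch state {
            case .began, .changed:
                break
            default:
                // Reset when the gesture ends, fails, or is cancelled.
                isThresholdExceeded = false
            }
        }
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent) {
        super.touchesMoved(touches, with: event)

        let t = translation(in: view)
        if !isThresholdExceeded && hypot(t.x, t.y) > ThresholdPanGesture.threshold {
            isThresholdExceeded = true
            // Re-base the translation so the object doesn't jump when dragging starts.
            setTranslation(.zero, in: view)
        }
    }
})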
At the end of ViewController.swift, add this code:
// MARK: - Pan Gesture Block
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
@objc
func didPan(_ gesture: ThresholdPanGesture) {
    switch gesture.state {
    case .began:
        // Check for interaction with a new object.
        if let object = objectInteracting(with: gesture, in: sceneView) {
            trackedObject = object // as? VirtualObject
        }
    case .changed where gesture.isThresholdExceeded:
        guard let object = trackedObject else { return }
        let translation = gesture.translation(in: sceneView)
        let currentPosition = currentTrackingPosition ?? CGPoint(sceneView.projectPoint(object.position))

        // The `currentTrackingPosition` is used to update the `selectedObject` in `updateObjectToCurrentTrackingPosition()`.
        currentTrackingPosition = CGPoint(x: currentPosition.x + translation.x, y: currentPosition.y + translation.y)
        gesture.setTranslation(.zero, in: sceneView)
    case .changed:
        // Ignore changes to the pan gesture until the threshold for displacement has been exceeded.
        break
    case .ended:
        // Update the object's anchor when the gesture ended.
        guard let existingTrackedObject = trackedObject else { break }
        addOrUpdateAnchor(for: existingTrackedObject)
        fallthrough
    default:
        // Clear the current position tracking.
        currentTrackingPosition = nil
        trackedObject = nil
    }
}

// - MARK: Object anchors
/// - Tag: AddOrUpdateAnchor
func addOrUpdateAnchor(for object: VirtualObject) {
    // If the anchor is not nil, remove it from the session.
    if let anchor = object.anchor {
        sceneView.session.remove(anchor: anchor)
    }

    // Create a new anchor with the object's current transform and add it to the session.
    let newAnchor = ARAnchor(transform: object.simdWorldTransform)
    object.anchor = newAnchor
    sceneView.session.add(anchor: newAnchor)
}

private func objectInteracting(with gesture: UIGestureRecognizer, in view: ARSCNView) -> VirtualObject? {
    for index in 0..<gesture.numberOfTouches {
        let touchLocation = gesture.location(ofTouch: index, in: view)

        // Look for an object directly under the `touchLocation`.
        if let object = virtualObject(at: touchLocation) {
            return object
        }
    }

    // As a last resort look for an object under the center of the touches.
    // return virtualObject(at: gesture.center(in: view))
    return virtualObject(at: (gesture.view?.center)!)
}

/// Hit tests against the `sceneView` to find an object at the provided point.
func virtualObject(at point: CGPoint) -> VirtualObject? {
    // let hitTestOptions: [SCNHitTestOption: Any] = [.boundingBoxOnly: true]
    let hitTestResults = sceneView.hitTest(point, options: [SCNHitTestOption.categoryBitMask: 0b00000010, SCNHitTestOption.searchMode: SCNHitTestSearchMode.any.rawValue as NSNumber])

    // let hitTestOptions: [SCNHitTestOption: Any] = [.boundingBoxOnly: true]
    // let hitTestResults = sceneView.hitTest(point, options: hitTestOptions)

    return hitTestResults.lazy.compactMap { result in
        return VirtualObject.existingObjectContainingNode(result.node)
    }.first
}

/**
 If a drag gesture is in progress, update the tracked object's position by
 converting the 2D touch location on screen (`currentTrackingPosition`) to
 3D world space.
 This method is called per frame (via `SCNSceneRendererDelegate` callbacks),
 allowing drag gestures to move virtual objects regardless of whether one
 drags a finger across the screen or moves the device through space.
 - Tag: updateObjectToCurrentTrackingPosition
 */
@objc
func updateObjectToCurrentTrackingPosition() {
    guard let object = trackedObject, let position = currentTrackingPosition else { return }
    translate(object, basedOn: position, infinitePlane: translateAssumingInfinitePlane, allowAnimation: true)
}

/// - Tag: DragVirtualObject
func translate(_ object: VirtualObject, basedOn screenPos: CGPoint, infinitePlane: Bool, allowAnimation: Bool) {
    guard let cameraTransform = sceneView.session.currentFrame?.camera.transform,
        let result = smartHitTest(screenPos,
                                  infinitePlane: infinitePlane,
                                  objectPosition: object.simdWorldPosition,
                                  allowedAlignments: [ARPlaneAnchor.Alignment.horizontal]) else { return }

    let planeAlignment: ARPlaneAnchor.Alignment
    if let planeAnchor = result.anchor as? ARPlaneAnchor {
        planeAlignment = planeAnchor.alignment
    } else if result.type == .estimatedHorizontalPlane {
        planeAlignment = .horizontal
    } else if result.type == .estimatedVerticalPlane {
        planeAlignment = .vertical
    } else {
        return
    }

    /*
     Plane hit test results are generally smooth. If we did *not* hit a plane,
     smooth the movement to prevent large jumps.
     */
    let transform = result.worldTransform
    let isOnPlane = result.anchor is ARPlaneAnchor
    object.setTransform(transform,
                        relativeTo: cameraTransform,
                        smoothMovement: !isOnPlane,
                        alignment: planeAlignment,
                        allowAnimation: allowAnimation)
}
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
Add some Focus Square code:
// MARK: - Focus Square (code by Apple, some by me)
func updateFocusSquare(isObjectVisible: Bool) {
    if isObjectVisible {
        focusSquare.hide()
    } else {
        focusSquare.unhide()
    }

    // Perform hit testing only when ARKit tracking is in a good state.
    if let camera = sceneView.session.currentFrame?.camera, case .normal = camera.trackingState,
        let result = smartHitTest(screenCenter) {
        DispatchQueue.main.async {
            self.sceneView.scene.rootNode.addChildNode(self.focusSquare)
            self.focusSquare.state = .detecting(hitTestResult: result, camera: camera)
        }
    } else {
        DispatchQueue.main.async {
            self.focusSquare.state = .initializing
            self.sceneView.pointOfView?.addChildNode(self.focusSquare)
        }
    }
}
And add some control functions:
func hideFocusSquare() { DispatchQueue.main.async { self.updateFocusSquare(isObjectVisible: true) } } // to hide the focus square
func showFocusSquare() { DispatchQueue.main.async { self.updateFocusSquare(isObjectVisible: false) } } // to show the focus square
Copy the entire smartHitTest function from VirtualObjectARView.swift into ViewController.swift (so it exists twice):
func smartHitTest(_ point: CGPoint,
                  infinitePlane: Bool = false,
                  objectPosition: float3? = nil,
                  allowedAlignments: [ARPlaneAnchor.Alignment] = [.horizontal, .vertical]) -> ARHitTestResult? {

    // Perform the hit test.
    let results = sceneView.hitTest(point, types: [.existingPlaneUsingGeometry, .estimatedVerticalPlane, .estimatedHorizontalPlane])

    // 1. Check for a result on an existing plane using geometry.
    if let existingPlaneUsingGeometryResult = results.first(where: { $0.type == .existingPlaneUsingGeometry }),
        let planeAnchor = existingPlaneUsingGeometryResult.anchor as? ARPlaneAnchor, allowedAlignments.contains(planeAnchor.alignment) {
        return existingPlaneUsingGeometryResult
    }

    if infinitePlane {
        // 2. Check for a result on an existing plane, assuming its dimensions are infinite.
        //    Loop through all hits against infinite existing planes and either return the
        //    nearest one (vertical planes) or return the nearest one which is within 5 cm
        //    of the object's position.
        let infinitePlaneResults = sceneView.hitTest(point, types: .existingPlane)

        for infinitePlaneResult in infinitePlaneResults {
            if let planeAnchor = infinitePlaneResult.anchor as? ARPlaneAnchor, allowedAlignments.contains(planeAnchor.alignment) {
                if planeAnchor.alignment == .vertical {
                    // Return the first vertical plane hit test result.
                    return infinitePlaneResult
                } else {
                    // For horizontal planes we only want to return a hit test result
                    // if it is close to the current object's position.
                    if let objectY = objectPosition?.y {
                        let planeY = infinitePlaneResult.worldTransform.translation.y
                        if objectY > planeY - 0.05 && objectY < planeY + 0.05 {
                            return infinitePlaneResult
                        }
                    } else {
                        return infinitePlaneResult
                    }
                }
            }
        }
    }

    // 3. As a final fallback, check for a result on estimated planes.
    let vResult = results.first(where: { $0.type == .estimatedVerticalPlane })
    let hResult = results.first(where: { $0.type == .estimatedHorizontalPlane })
    switch (allowedAlignments.contains(.horizontal), allowedAlignments.contains(.vertical)) {
    case (true, false):
        return hResult
    case (false, true):
        // Allow fallback to horizontal because we assume that objects meant for vertical placement
        // (like a picture) can always be placed on a horizontal surface, too.
        return vResult ?? hResult
    case (true, true):
        if hResult != nil && vResult != nil {
            return hResult!.distance < vResult!.distance ? hResult! : vResult!
        } else {
            return hResult ?? vResult
        }
    default:
        return nil
    }
}
You will probably see an error about hitTest in the copied function. Correct it like this:
hitTest... // which gives an Error
sceneView.hitTest... // this should correct it
Implement the renderer updateAtTime function and add these lines:
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    // For the Focus Square
    if isFocusSquareEnabled { showFocusSquare() }

    self.updateObjectToCurrentTrackingPosition() // *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
}
At this point, you may still see about a dozen errors and warnings in the imported files. This can happen when you work in Swift 5 and have some Swift 4 files. Just let Xcode correct the errors. (It is all about renaming some code statements; Xcode knows best.)
Go into VirtualObject.swift and search for this code block:
if smoothMovement {
    let hitTestResultDistance = simd_length(positionOffsetFromCamera)

    // Add the latest position and keep up to 10 recent distances to smooth with.
    recentVirtualObjectDistances.append(hitTestResultDistance)
    recentVirtualObjectDistances = Array(recentVirtualObjectDistances.suffix(10))

    let averageDistance = recentVirtualObjectDistances.average!
    let averagedDistancePosition = simd_normalize(positionOffsetFromCamera) * averageDistance
    simdPosition = cameraWorldPosition + averagedDistancePosition
} else {
    simdPosition = cameraWorldPosition + positionOffsetFromCamera
}
Comment out or replace that entire block with this single line of code:
simdPosition = cameraWorldPosition + positionOffsetFromCamera
At this point you should be able to compile the project and run it on a device. You should see the spaceship and a yellow Focus Square that already works.
To start placing an object that you can drag, you need some function to create a so-called VirtualObject, as I said in the beginning.
Use this example function for testing (add it somewhere in your view controller):
override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
    if focusSquare.state != .initializing {
        let position = SCNVector3(focusSquare.lastPosition!)

        // *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
        let testObject = VirtualObject() // give it some name, when you don't have anything to load
        testObject.geometry = SCNCone(topRadius: 0.0, bottomRadius: 0.2, height: 0.5)
        testObject.geometry?.firstMaterial?.diffuse.contents = UIColor.red
        testObject.categoryBitMask = 0b00000010
        testObject.name = "test"
        testObject.castsShadow = true
        testObject.position = position

        sceneView.scene.rootNode.addChildNode(testObject)
    }
}
Note: everything you want to drag on a plane must be set up using VirtualObject() instead of SCNNode(). Everything else about a VirtualObject is the same as an SCNNode.
(You can also add some common SCNNode extensions, for example one that loads a scene by its name, which is useful when referencing imported models.)
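A minimal sketch of such an extension (the function name and the art.scnassets path are assumptions; adjust them to your project):

extension SCNNode {
    /// Loads the root content of a .scn file and wraps it in a single node.
    static func loadModel(named name: String) -> SCNNode? {
        guard let scene = SCNScene(named: "art.scnassets/\(name).scn") else { return nil }
        let containerNode = SCNNode()
        for child in scene.rootNode.childNodes {
            containerNode.addChildNode(child)
        }
        return containerNode
    }
}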
Have fun!