Is there a way to check whether I touched an object on the screen? As far as I know, the HitResult class lets me check whether I touched a recognized and mapped surface, but I want to check whether I touched an object placed on that surface.
Answer 0 (score: 4)
ARCore doesn't really have a concept of objects, so this isn't something we can provide directly. I'd suggest ray-sphere tests as a starting point.
However, I can help with getting the ray itself (add this to HelloArActivity):
/**
 * Returns a world coordinate frame ray for a screen point. The ray is
 * defined using a 6-element float array containing the head location
 * followed by a normalized direction vector.
 */
float[] screenPointToWorldRay(float xPx, float yPx, Frame frame) {
    float[] points = new float[12]; // {clip query, camera query, camera origin}
    // Set up the clip-space coordinates of our query point
    // +x is right:
    points[0] = 2.0f * xPx / mSurfaceView.getMeasuredWidth() - 1.0f;
    // +y is up (android UI Y is down):
    points[1] = 1.0f - 2.0f * yPx / mSurfaceView.getMeasuredHeight();
    points[2] = 1.0f; // +z is forwards (remember clip, not camera)
    points[3] = 1.0f; // w (homogeneous coordinates)

    float[] matrices = new float[32]; // {proj, inverse proj}
    // If you'll be calling this several times per frame, factor out
    // the next two lines to run when Frame.isDisplayRotationChanged().
    mSession.getProjectionMatrix(matrices, 0, 1.0f, 100.0f);
    Matrix.invertM(matrices, 16, matrices, 0);

    // Transform clip-space point to camera-space.
    Matrix.multiplyMV(points, 4, matrices, 16, points, 0);
    // points[4,5,6] is now a camera-space vector. Transform to world space
    // to get a point along the ray.
    float[] out = new float[6];
    frame.getPose().transformPoint(points, 4, out, 3);
    // Use points[8,9,10] as a zero vector to get the ray head position
    // in world space.
    frame.getPose().transformPoint(points, 8, out, 0);

    // Normalize the direction vector:
    float dx = out[3] - out[0];
    float dy = out[4] - out[1];
    float dz = out[5] - out[2];
    float scale = 1.0f / (float) Math.sqrt(dx*dx + dy*dy + dz*dz);
    out[3] = dx * scale;
    out[4] = dy * scale;
    out[5] = dz * scale;
    return out;
}
If you'll be calling this several times per frame, see the comment about the getProjectionMatrix and invertM calls.
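For the ray-sphere test suggested above, here is a minimal sketch that consumes the 6-element array returned by screenPointToWorldRay. The helper itself, and the bounding-sphere parameters, are illustrative assumptions rather than part of the ARCore API:

/**
 * Returns the distance along the ray to the first intersection with a
 * sphere, or -1 if the ray misses. 'ray' is the array returned by
 * screenPointToWorldRay: {origin x,y,z, normalized direction x,y,z}.
 */
float raySphereIntersect(float[] ray, float cx, float cy, float cz, float radius) {
    // Vector from the ray origin to the sphere center.
    float lx = cx - ray[0];
    float ly = cy - ray[1];
    float lz = cz - ray[2];
    // Projection of that vector onto the (normalized) ray direction.
    float tca = lx * ray[3] + ly * ray[4] + lz * ray[5];
    // Squared distance from the sphere center to the ray.
    float d2 = lx * lx + ly * ly + lz * lz - tca * tca;
    float r2 = radius * radius;
    if (d2 > r2) return -1.0f;  // the ray misses the sphere
    float thc = (float) Math.sqrt(r2 - d2);
    float t = tca - thc;        // near intersection
    if (t < 0) t = tca + thc;   // origin inside the sphere: take the far hit
    return t >= 0 ? t : -1.0f;  // sphere entirely behind the origin: no hit
}

To pick an object, test the tap ray against a bounding sphere around each placed object and keep the closest positive hit; for instance (tapX, tapY, PlacedObject, and mPlacedObjects are hypothetical names for your own touch event data and object wrapper):

float[] ray = screenPointToWorldRay(tapX, tapY, frame);
PlacedObject picked = null;
float best = Float.MAX_VALUE;
for (PlacedObject obj : mPlacedObjects) {
    float t = raySphereIntersect(ray, obj.x, obj.y, obj.z, obj.radius);
    if (t >= 0 && t < best) { best = t; picked = obj; }
}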
Answer 1 (score: 1)
Besides Mouse Picking with Ray Casting (see Ian's answer), another commonly used technique is a picking buffer, explained in detail (with C++ code) here:
The trick behind 3D picking is very simple. We will attach a running index to each triangle and have the FS output the index of the triangle that the pixel belongs to. The end result is that we get a "color" buffer that doesn't really contain colors. Instead, for each pixel which is covered by some primitive, we get the index of that primitive. When the mouse is clicked on the window we will read back that index (according to the location of the mouse) and render the selected triangle red. By combining a depth buffer in the process we guarantee that when several primitives overlap the same pixel we get the index of the top-most primitive (closest to the camera).
In short: render each object in a solid color that encodes its index, then read back the pixel under the click and map it back to the object.
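A minimal sketch of the readback step in Java with OpenGL ES 2.0, assuming a prior picking pass that filled each object with the color (id & 0xFF, (id >> 8) & 0xFF, (id >> 16) & 0xFF) against a white clear color; the helper name and the encoding scheme are assumptions for illustration:

import android.opengl.GLES20;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

/**
 * Reads back the object index rendered at a screen pixel. Assumes the
 * picking pass encoded each object's index in its fill color as
 * described above and cleared the buffer to white. Returns -1 for the
 * background.
 */
int readPickedIndex(int xPx, int yPx, int viewportHeight) {
    ByteBuffer pixel = ByteBuffer.allocateDirect(4).order(ByteOrder.nativeOrder());
    // GL's y axis points up while Android touch coordinates point down.
    GLES20.glReadPixels(xPx, viewportHeight - yPx, 1, 1,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixel);
    int r = pixel.get(0) & 0xFF;
    int g = pixel.get(1) & 0xFF;
    int b = pixel.get(2) & 0xFF;
    int id = r | (g << 8) | (b << 16);
    return id == 0xFFFFFF ? -1 : id;
}

Because the picking pass is rendered with depth testing enabled, the index read back belongs to the front-most object under the tap, exactly as the quoted passage describes.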