I'm trying to figure out how to project a 3D point from a 2D point. I'd like to be able to give it a depth value to project to. Does anyone have an example for Maya?
Thanks!
This is the best I've been able to do:
import maya.OpenMaya as OpenMaya

def screenToWorld(point2D=None,
                  depth=None,
                  viewMatrix=None,
                  projectionMatrix=None,
                  width=None,
                  height=None):
    '''
    @param point2D - 2D point.
    @param depth - Depth value to project to (currently unused).
    @param viewMatrix - MMatrix of the modelViewMatrix (world inverse of the camera).
    @param projectionMatrix - MMatrix of the camera's projectionMatrix.
    @param width - Resolution width of the camera.
    @param height - Resolution height of the camera.
    Returns a world-space MPoint.
    '''
    point3D = OpenMaya.MPoint()
    # Convert the pixel coordinates to normalized device coordinates (-1..1).
    point3D.x = (2.0 * (point2D[0] / width)) - 1.0
    point3D.y = (2.0 * (point2D[1] / height)) - 1.0
    viewProjectionMatrix = (viewMatrix * projectionMatrix)
    point3D.z = viewProjectionMatrix(3, 2)
    point3D.w = viewProjectionMatrix(3, 3)
    point3D.x = point3D.x * point3D.w
    point3D.y = point3D.y * point3D.w
    # Transform back from clip space to world space.
    point3D = point3D * viewProjectionMatrix.inverse()
    return point3D
As you can see, it doesn't actually use the depth value. I'm not sure how to work it in using the projection matrix and the view matrix.
Any help is greatly appreciated! -Chris
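For reference, one way a depth value can be folded in is to convert the pixel to a view-space ray and then walk `depth` units along it. The following is a minimal pure-Python sketch of that idea (no Maya dependency); the function names, the `fov_y_deg` parameter, and the axis-aligned camera are assumptions for illustration, not part of the Maya API:

```python
import math

def screen_to_ray(px, py, width, height, fov_y_deg):
    """Hypothetical helper: convert a pixel to a normalized view-space
    ray direction, assuming a symmetric perspective camera looking down -Z."""
    # Pixel -> normalized device coordinates (-1..1).
    ndc_x = 2.0 * (px / width) - 1.0
    ndc_y = 2.0 * (py / height) - 1.0
    # Scale by the size of the view frustum at unit distance from the camera.
    aspect = width / height
    tan_half_fov = math.tan(math.radians(fov_y_deg) / 2.0)
    x = ndc_x * tan_half_fov * aspect
    y = ndc_y * tan_half_fov
    z = -1.0
    length = math.sqrt(x * x + y * y + z * z)
    return [x / length, y / length, z / length]

def screen_to_world(px, py, width, height, fov_y_deg, cam_pos, depth):
    """Place a world-space point `depth` units along the pixel's ray.
    For simplicity the camera is assumed to be axis-aligned (no rotation)."""
    ray = screen_to_ray(px, py, width, height, fov_y_deg)
    return [c + d * depth for c, d in zip(cam_pos, ray)]
```

With a rotated camera you would additionally transform the ray by the camera's world matrix; the center pixel here maps straight down the view axis, e.g. `screen_to_world(320, 240, 640, 480, 45.0, [0.0, 0.0, 0.0], 5.0)` gives `[0.0, 0.0, -5.0]`.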
Answer 0 (score: 0)
So I think I've come up with a solution:
import maya.OpenMaya as OpenMaya

def projectPoint(worldPnt, camPnt, depth):
    '''
    @param worldPnt - MPoint of the point to project (world space).
    @param camPnt - MPoint of the camera position (world space).
    @param depth - Float distance from the camera.
    Returns a list of 3 floats.
    '''
    # Get the vector from the camera to the point and normalize it.
    mVec_pointVec = worldPnt - camPnt
    mVec_pointVec.normalize()
    # Scale it by the depth, then offset it back to the camera position.
    mVec_pointVec *= depth
    mVec_pointVec += OpenMaya.MVector(camPnt.x, camPnt.y, camPnt.z)
    return [mVec_pointVec.x, mVec_pointVec.y, mVec_pointVec.z]
It turns out I didn't really need to convert to 2D and back to 3D at all. I just needed to extend the vector from the camera.
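The same extend-the-vector idea can be sketched without the Maya API at all, using plain lists of floats (`project_point` here is a hypothetical stand-in for the function above, not Maya code):

```python
import math

def project_point(world_pnt, cam_pnt, depth):
    """Return the point `depth` units from cam_pnt toward world_pnt."""
    # Vector from the camera to the point.
    v = [w - c for w, c in zip(world_pnt, cam_pnt)]
    # Normalize it.
    length = math.sqrt(sum(x * x for x in v))
    v = [x / length for x in v]
    # Scale by the depth and offset back to the camera position.
    return [c + x * depth for c, x in zip(cam_pnt, v)]

# A point 10 units down +Z, pushed to depth 5 from a camera at the origin:
print(project_point([0.0, 0.0, 10.0], [0.0, 0.0, 0.0], 5.0))  # [0.0, 0.0, 5.0]
```

Note that `depth` here is the straight-line distance from the camera, not the perpendicular distance to the camera's image plane; the two only coincide for points on the view axis.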