Can anyone help me with my problem? I need to back-project from image coordinates to world coordinates, i.e. relative to the map. What I have done so far is project the image point into the camera frame using the intrinsics, and then change coordinate systems using the extrinsic values: image → camera, then camera → ego-vehicle frame. This is an automotive setting, so the camera is mounted on the car and its pose is given relative to the vehicle, while the vehicle pose is given relative to the map (the NuScenes dataset setup). I cannot use the LiDAR data, and NuScenes does not provide GPS coordinates. I also cannot use OpenCV. Here is my code (I even tried to follow this paper, but the result is the same: http://www.ce.unipr.it/people/bertozzi/publications/cr/iv2004-tracking-clermont.pdf):
import numpy as np
import cameratransform as ct
import tf.transformations as tf

# Intrinsics stored as a flat 1x9 row: fx, 0, cx, 0, fy, cy, 0, 0, 1
cam_intrinsic = np.asmatrix(row[4])
center_x = cam_intrinsic[0, 2]
center_y = cam_intrinsic[0, 5]
focal_x = cam_intrinsic[0, 0]
focal_y = cam_intrinsic[0, 4]

# Estimate depth from the apparent height of the bounding box,
# assuming an average pedestrian height of 1.75 m
averageheight = 1.75
depth = (averageheight * focal_x) / pixelcoord[2]  # pixelcoord[2] is the bounding-box height in pixels

# Set z = depth and solve the pinhole equations for x and y
cameracoord = np.array([(pixelcoord[0] - center_x) * depth / focal_x,
                        (pixelcoord[1] - center_y) * depth / focal_y,
                        depth]).T

# Camera pose relative to the ego vehicle:
# translation in meters (x, y, z), orientation as a quaternion (w, x, y, z)
translation = np.asmatrix(row[2])
orientation = np.asmatrix(row[3])
w = orientation[0, 0]
x = orientation[0, 1]
y = orientation[0, 2]
z = orientation[0, 3]
quaternions = (x, y, z, w)

# Convert the quaternion to roll, pitch, yaw values
euler = tf.euler_from_quaternion(quaternions)
roll = euler[0]
pitch = euler[1]
yaw = euler[2]

# Camera -> ego vehicle
spatial_orientation = ct.SpatialOrientation(elevation_m=translation[0, 2], tilt_deg=pitch,
                                            roll_deg=roll, heading_deg=yaw,
                                            pos_x_m=translation[0, 0], pos_y_m=translation[0, 1])
spacecoord = spatial_orientation.spaceFromCamera([cameracoord])

# Ego-vehicle pose relative to the map, same layout as above
ego_translation = np.asmatrix(data[2])
ego_orientation = np.asmatrix(data[3])
ego_w = ego_orientation[0, 0]
ego_x = ego_orientation[0, 1]
ego_y = ego_orientation[0, 2]
ego_z = ego_orientation[0, 3]
ego_quaternions = (ego_x, ego_y, ego_z, ego_w)

# Convert the quaternion to roll, pitch, yaw values
ego_euler = tf.euler_from_quaternion(ego_quaternions)
ego_roll = ego_euler[0]
ego_pitch = ego_euler[1]
ego_yaw = ego_euler[2]

# Ego vehicle -> global/map
global_orientation = ct.SpatialOrientation(elevation_m=ego_translation[0, 2], tilt_deg=ego_pitch,
                                           roll_deg=ego_roll, heading_deg=ego_yaw,
                                           pos_x_m=ego_translation[0, 0],
                                           pos_y_m=ego_translation[0, 1])
globalcoord = np.asmatrix(global_orientation.spaceFromCamera([spacecoord]))
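For reference, the same camera → ego → map chain can also be written directly with quaternion rotation matrices instead of going through Euler angles. This is only a minimal NumPy sketch; `quat_to_rot`, `cam_to_global`, and the example poses are made up for illustration, not taken from the dataset:

```python
import numpy as np

def quat_to_rot(w, x, y, z):
    """Rotation matrix from a unit quaternion given in (w, x, y, z) order."""
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def cam_to_global(p_cam, cam_quat, cam_trans, ego_quat, ego_trans):
    """Chain camera -> ego vehicle -> map; each step is a rotation plus a translation."""
    p_ego = quat_to_rot(*cam_quat) @ np.asarray(p_cam) + np.asarray(cam_trans)
    return quat_to_rot(*ego_quat) @ p_ego + np.asarray(ego_trans)

# Made-up example poses: identity camera rotation, ego rotated 90 degrees about z
identity = (1.0, 0.0, 0.0, 0.0)
yaw90 = (np.sqrt(0.5), 0.0, 0.0, np.sqrt(0.5))  # quaternion for a 90-degree yaw
p = cam_to_global([1.0, 2.0, 3.0], identity, [0.0, 0.0, 1.5],
                  yaw90, [10.0, 20.0, 0.0])
```

This avoids any degree/radian or axis-convention ambiguity, since the quaternions from the dataset are applied as rotation matrices without conversion.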
So, what am I missing? With this setup the points, once projected onto the map, do not reflect the distribution around the car. Here is a plot:
Left: Groundtruth - Right: My projection
Thank you.
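Edit: to rule out the intrinsics step, here is a quick round-trip check of the pinhole model I am using. The focal lengths and principal point below are made up for illustration, not values from the dataset:

```python
import numpy as np

# Made-up intrinsics for illustration only
fx, fy, cx, cy = 1266.4, 1266.4, 816.3, 491.5

point_cam = np.array([2.0, -1.0, 10.0])  # a point in camera coordinates

# Project to pixel coordinates with the pinhole model
u = fx * point_cam[0] / point_cam[2] + cx
v = fy * point_cam[1] / point_cam[2] + cy

# Back-project with the known depth, as in my code above
depth = point_cam[2]
back = np.array([(u - cx) * depth / fx,
                 (v - cy) * depth / fy,
                 depth])

assert np.allclose(back, point_cam)  # the round trip recovers the original point
```

So the image → camera step by itself seems consistent; the problem appears somewhere in the camera → ego → map steps.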