I have a mesh model and, using VTK, I have rendered a view of it from a given camera position (x, y, z). I can save this as an RGB image (640x480), but I would also like to save a depth map in which each pixel stores its depth value from the camera.
I have tried following this example, using the Z-buffer values provided by the render window. The problem is that the Z-buffer only stores values in the range [0, 1]. Instead, I am trying to create a synthetic range image in which each pixel stores the depth/distance from the camera, similar to the images a Kinect generates, but produced from a specific viewpoint of the mesh model.
Edit: adding some code
My current code:
Load the mesh:
string mesh_filename = "mesh.ply";
vtkSmartPointer<vtkPLYReader> mesh_reader = read_mesh_ply(mesh_filename);
vtkSmartPointer<vtkPolyDataMapper> mapper = vtkSmartPointer<vtkPolyDataMapper>::New();
mapper->SetInputConnection(mesh_reader->GetOutputPort());
vtkSmartPointer<vtkActor> actor = vtkSmartPointer<vtkActor>::New();
actor->SetMapper(mapper);
vtkSmartPointer<vtkRenderer> renderer = vtkSmartPointer<vtkRenderer>::New();
vtkSmartPointer<vtkRenderWindow> renderWindow = vtkSmartPointer<vtkRenderWindow>::New();
renderWindow->AddRenderer(renderer);
renderWindow->SetSize(640, 480);
vtkSmartPointer<vtkRenderWindowInteractor> renderWindowInteractor = vtkSmartPointer<vtkRenderWindowInteractor>::New();
renderWindowInteractor->SetRenderWindow(renderWindow);
//Add the actors to the scene
renderer->AddActor(actor);
renderer->SetBackground(1, 1, 1);
Create a camera and place it somewhere:
vtkSmartPointer<vtkCamera> camera = vtkSmartPointer<vtkCamera>::New();
renderer->SetActiveCamera(camera);
camera->SetPosition(0,0,650);
//Render and interact
renderWindow->Render();
Get the result from the Z-buffer:
double b = renderer->GetZ(320, 240);
In this example this gives 0.999995. Since the values are in [0, 1], I do not know how to interpret it: as you can see, I placed the camera 650 units away along the z-axis, so I would expect the z-distance at this pixel (which falls on the object in the rendered RGB image) to be close to 650.
Answer (score: 0)
This Python snippet illustrates how to convert z-buffer values into actual distances. The non-linear mapping is defined as follows:
numerator = 2.0 * z_near * z_far
denominator = z_far + z_near - (2.0 * z_buffer_data_numpy - 1.0) * (z_far - z_near)
depth_buffer_data_numpy = numerator / denominator
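As a quick sanity check, the mapping can be evaluated for a single pixel. The clipping-plane values below are arbitrary, chosen only to illustrate that a z-buffer value very close to 1.0 can still correspond to a large metric distance:
# Hypothetical clipping planes, for illustration only
z_near, z_far = 0.1, 1000.0
z = 0.999995  # raw z-buffer value of one pixel
depth = (2.0 * z_near * z_far) / (z_far + z_near - (2.0 * z - 1.0) * (z_far - z_near))
print(depth)  # roughly 952 world units with these planes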
Here is the full example:
import vtk
import numpy as np
from vtk.util import numpy_support
import matplotlib.pyplot as plt
vtk_renderer = vtk.vtkRenderer()
vtk_render_window = vtk.vtkRenderWindow()
vtk_render_window.AddRenderer(vtk_renderer)
vtk_render_window_interactor = vtk.vtkRenderWindowInteractor()
vtk_render_window_interactor.SetRenderWindow(vtk_render_window)
vtk_render_window_interactor.Initialize()
source = vtk.vtkCubeSource()
mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(source.GetOutputPort())
actor = vtk.vtkActor()
actor.SetMapper(mapper)
actor.RotateX(60.0)
actor.RotateY(-35.0)
vtk_renderer.AddActor(actor)
vtk_render_window.Render()
active_vtk_camera = vtk_renderer.GetActiveCamera()
z_near, z_far = active_vtk_camera.GetClippingRange()
# Read the full [0, 1] z-buffer for the whole render window
z_buffer_data = vtk.vtkFloatArray()
width, height = vtk_render_window.GetSize()
vtk_render_window.GetZbufferData(
    0, 0, width - 1, height - 1, z_buffer_data)
z_buffer_data_numpy = numpy_support.vtk_to_numpy(z_buffer_data)
z_buffer_data_numpy = np.reshape(z_buffer_data_numpy, (-1, width))
z_buffer_data_numpy = np.flipud(z_buffer_data_numpy) # flipping along the first axis (y)
numerator = 2.0 * z_near * z_far
denominator = z_far + z_near - (2.0 * z_buffer_data_numpy - 1.0) * (z_far - z_near)
depth_buffer_data_numpy = numerator / denominator
non_depth_data_value = np.nan
# z-buffer values of exactly 1.0 correspond to background pixels (no geometry)
depth_buffer_data_numpy[z_buffer_data_numpy == 1.0] = non_depth_data_value
print(np.nanmin(depth_buffer_data_numpy))
print(np.nanmax(depth_buffer_data_numpy))
plt.imshow(np.asarray(depth_buffer_data_numpy))
plt.show()
Side note:
On my system, the imshow command occasionally did not display anything; re-running the script resolved the issue.
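If the goal is a Kinect-style range image rather than a matplotlib figure, the depth map can also be scaled to millimeters and written as a 16-bit image. The snippet below is only a sketch: it assumes the scene units are meters and that the imageio package is installed, so adjust the scale factor to your model's units.
# Kinect-style output: depth in millimeters stored as a 16-bit PNG
# (assumes scene units are meters and imageio is available)
import imageio
depth_mm = np.nan_to_num(depth_buffer_data_numpy, nan=0.0) * 1000.0
depth_mm = np.clip(depth_mm, 0, 65535).astype(np.uint16)
imageio.imwrite('depth.png', depth_mm)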