Check whether a vertex is visible in the camera view or occluded

Time: 2019-12-26 22:05:03

Tags: python neural-network blender raycasting bounding-box

I am working on a machine learning task and trying to use Blender to generate synthetic images as a training dataset for a neural network. For this, I have to find the bounding box of each object in the rendered image.

So far my code is largely based on the suggestion in this thread, but it does not account for whether a vertex is visible or occluded by another object. The desired result is exactly what is explained here. I have tried the suggestion given there, but it did not work. I cannot tell whether that is because I am feeding the wrong inputs to the ray_cast function (the bpy API really is poorly documented) or simply because the function performs badly, as I have read elsewhere. Right now my code is:

import bpy
import numpy as np

def boundingbox(scene, camera, obj, limit = 0.3):
    #  Get the inverse transformation matrix.
    matrix = camera.matrix_world.normalized().inverted()
    #  Create a new mesh data block, using the inverse transform matrix to undo any transformations.
    dg = bpy.context.evaluated_depsgraph_get()
    eval_obj = obj.evaluated_get(dg)
    mesh = eval_obj.to_mesh()
    mesh.transform(obj.matrix_world)
    mesh.transform(matrix)

    #  Get the world coordinates for the camera frame bounding box, before any transformations.
    frame = [-v for v in camera.data.view_frame(scene=scene)[:3]]
    origin = camera.location
    lx = []
    ly = []

    for v in mesh.vertices:
        co_local = v.co
        z = -co_local.z
        direction = (co_local - origin)

        # result is (hit, location, normal, face_index, object, matrix)
        result = scene.ray_cast(view_layer=bpy.context.window.view_layer,
                                origin=origin, direction=direction)
        intersection = result[0]
        met_obj = result[4]
        if intersection and met_obj.type == 'CAMERA':
            intersection = False

        if z <= 0.0 or (intersection and (result[1] - co_local).length > limit):
            #  Vertex is behind the camera or another object; ignore it.
            continue
        else:
            # Perspective division
            frame = [(v / (v.z / z)) for v in frame]

        min_x, max_x = frame[1].x, frame[2].x
        min_y, max_y = frame[0].y, frame[1].y

        x = (co_local.x - min_x) / (max_x - min_x)
        y = (co_local.y - min_y) / (max_y - min_y)

        lx.append(x)
        ly.append(y)

    eval_obj.to_mesh_clear()

    #  Image is not in view if all the mesh verts were ignored
    if not lx or not ly:
        return None

    min_x = np.clip(min(lx), 0.0, 1.0)
    min_y = np.clip(min(ly), 0.0, 1.0)
    max_x = np.clip(max(lx), 0.0, 1.0)
    max_y = np.clip(max(ly), 0.0, 1.0)

    #  Image is not in view if both bounding points exist on the same side
    if min_x == max_x or min_y == max_y:
        return None

    # Figure out the rendered image size
    render = scene.render
    fac = render.resolution_percentage * 0.01
    dim_x = render.resolution_x * fac
    dim_y = render.resolution_y * fac

    # return box in the form (top left x, top left y),(width, height)
    return (
        (round(min_x * dim_x),  # X
         round(dim_y - max_y * dim_y)),  # Y
        (round((max_x - min_x) * dim_x),  # Width
         round((max_y - min_y) * dim_y))  # Height
    )

I have also tried casting the ray from the vertex to the camera position (instead of the other way around) and using the small-cube workaround described here, but to no avail. Could someone please help me figure out how to do this properly, or suggest another strategy?
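For reference, scene.ray_cast expects its origin and direction in world space. Below is a minimal sketch of such a world-space occlusion test, separate from the code above; the function names are illustrative, and the Blender 2.8x signature scene.ray_cast(view_layer, origin, direction) is assumed (from 2.91 on, the first argument is a depsgraph instead):

```python
def is_occluded(hit, hit_point, vert_point, limit=0.3):
    """Pure distance check: a ray hit counts as an occluder only when it
    landed farther than `limit` away from the vertex we aimed at."""
    if not hit:
        return False
    dist = sum((h - v) ** 2 for h, v in zip(hit_point, vert_point)) ** 0.5
    return dist > limit

def vertex_occluded(scene, camera, vert_world, limit=0.3):
    """Cast a world-space ray from the camera toward a world-space vertex.
    Assumes Blender 2.8x, where scene.ray_cast takes a view layer."""
    import bpy  # only available inside Blender
    origin = camera.matrix_world.translation
    direction = (vert_world - origin).normalized()
    hit, location, *_ = scene.ray_cast(bpy.context.window.view_layer,
                                       origin, direction)
    return is_occluded(hit, tuple(location), tuple(vert_world), limit)
```

The distance check in is_occluded mirrors the `(result[1] - co_local).length > limit` test in the question, but with both points in the same (world) coordinate space.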

2 answers:

Answer 0 (score: 0)

I had to solve a very similar problem.

Here is the code I used:

import bpy
from bpy_extras.object_utils import world_to_camera_view

def BoundingBoxFinal(obj, cam):
    scene = bpy.context.scene
    # Needed to rescale the 2D coordinates to the render resolution.
    render = scene.render
    render_scale = render.resolution_percentage / 100
    res_x = render.resolution_x * render_scale
    res_y = render.resolution_y * render_scale

    # Transform every vertex to world space.
    verts = [obj.matrix_world @ vert.co for vert in obj.data.vertices]

    # Project the world-space vertices into normalized camera coordinates.
    coords_2d = [world_to_camera_view(scene, cam, coord) for coord in verts]

    # Convert to pixel coordinates, flipping Y so the origin is top-left.
    rnd = lambda i: round(i)
    verts_2d = [(rnd(res_x * x), rnd(res_y - res_y * y))
                for x, y, distance_to_lens in coords_2d]

    X_max = max(verts_2d, key=lambda i: i[0])[0]
    Y_max = max(verts_2d, key=lambda i: i[1])[1]
    X_min = min(verts_2d, key=lambda i: i[0])[0]
    Y_min = min(verts_2d, key=lambda i: i[1])[1]

    return (
        X_min,
        Y_min,
        X_max,
        Y_max,
        obj.data.name.split('.')[0]
    )
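The normalized-to-pixel conversion at the core of this answer can be sketched on its own in pure Python (names here are illustrative): world_to_camera_view returns coordinates in [0, 1] with the origin at the bottom-left, while image pixels count from the top-left, hence the Y flip.

```python
def to_pixels(x, y, res_x, res_y):
    """Map a normalized camera-view coordinate (origin bottom-left) to
    integer pixel coordinates (origin top-left)."""
    return round(res_x * x), round(res_y - res_y * y)

def bbox_2d(coords, res_x, res_y):
    """Pixel-space bounding box (x_min, y_min, x_max, y_max) of a list
    of normalized (x, y, depth) camera-view coordinates."""
    pixels = [to_pixels(x, y, res_x, res_y) for x, y, _depth in coords]
    xs = [p[0] for p in pixels]
    ys = [p[1] for p in pixels]
    return min(xs), min(ys), max(xs), max(ys)
```

For example, two corner points at (0.1, 0.2) and (0.9, 0.8) on a 100×100 render give the box (10, 20, 90, 80).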

Answer 1 (score: 0)

I was trying to find the occlusion level of objects in the rendered image of a scene. What I did was create a map (a simple 2D array) the same size as the render resolution. Then I did the following:

for each object in the scene:
    for each vertex of the object:
        (x', y', z') = convert the vertex from local (obj.data.vertices[i].co) to world coordinates
        (x, y, z) = project the world-space vertex (x', y', z') into the 2D camera view
        # x, y are the 2D coordinates and z is the distance of the point from the camera
        update the 2D array with the id of whichever object is closer to the camera

Finally, to check whether a vertex (belonging to an object obj) is visible, all you need to do is project that vertex into the final rendered image, say at (x, y). Then check whether the map/2D array holds the id of obj at index (x, y). If it does, the vertex of obj is visible at coordinate (x, y) in the rendered image; if not, the map holds some other object's id at (x, y), and you can conclude that the vertex of obj is covered by another object in the scene (there is another object between that particular vertex and the camera).
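The map described above can be sketched with plain Python lists; the ids and coordinates here are hypothetical, and in Blender the (x, y, z) tuples would come from world_to_camera_view as in the other answer:

```python
def build_id_map(res_x, res_y, objects):
    """objects maps an object id to a list of (x, y, z) tuples: pixel
    coordinates plus distance from the camera. Keeps, per pixel, the id
    of the vertex closest to the camera (a vertex-level z-buffer)."""
    depth = [[float('inf')] * res_x for _ in range(res_y)]
    ids = [[None] * res_x for _ in range(res_y)]
    for obj_id, verts in objects.items():
        for x, y, z in verts:
            if 0 <= x < res_x and 0 <= y < res_y and z < depth[y][x]:
                depth[y][x] = z
                ids[y][x] = obj_id
    return ids

def vertex_visible(ids, obj_id, x, y):
    """A vertex of obj_id projected to pixel (x, y) is visible when the
    map still holds that object's id there."""
    return ids[y][x] == obj_id
```

For instance, if objects "a" and "b" both project a vertex to pixel (1, 1) but "b" is closer, the map keeps "b" there and the vertex of "a" counts as occluded.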

It is just clever manipulation of numbers, and you will get what you want. If you need more explanation/code, let me know in the comments. Also, please let me know if any of you find a problem with this approach. Your comments will be appreciated.