I am writing an OBJ loader for DirectX 11. In the OBJ format, a square (two triangles) looks like this:
v 0 0 0
v 0 1 0
v 1 1 0
v 1 0 0
f 1 2 3
f 1 3 4
First the vertex data is given with v, then the faces with f. So I just read the vertices into a vertex buffer and the indices into an index buffer. But now I need to compute normals for the pixel shader. Can I somehow store the normal data of the FACES while still rendering with indices, or do I have to create a vertex buffer without indices? (Because then I could store normal data per vertex, since each vertex would only be used by one face.)
Answer 0 (score: 4)
The usual way is to store the same normal vector for all three vertices of the face, like this:
Vertex
{
Vector3 position;
Vector3 normal;
};
std::vector<Vertex> vertices;
std::vector<uint32_t> indices;
for(each face f)
{
Vector3 faceNormal = CalculateFaceNormalFromPositions(f); // Generate normal for given face number `f`;
for(each vertex v)
{
Vertex vertex;
vertex.position = LoadPosition(f, v); // Load position from OBJ based on face index (f) and vertex index (v);
vertex.normal = faceNormal;
vertices.push_back(vertex);
indices.push_back(vertices.size() - 1); // index of the vertex we just added (no reuse yet)
}
}
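In case it helps, here is a minimal sketch of what the CalculateFaceNormalFromPositions step could look like, written here to take the three corner positions directly. It assumes the simple Vector3 struct (x, y, z floats) from the C++ example further down; the resulting direction depends on the winding order of the face.
#include <cmath>
Vector3 Subtract(const Vector3& a, const Vector3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
Vector3 Cross(const Vector3& a, const Vector3& b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
Vector3 Normalize(const Vector3& v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}
// Face normal = normalized cross product of two triangle edges.
Vector3 CalculateFaceNormal(const Vector3& p0, const Vector3& p1, const Vector3& p2)
{
    return Normalize(Cross(Subtract(p1, p0), Subtract(p2, p0)));
}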
Note: usually you will want to use vertex normals rather than face normals, because vertex normals allow nicer-looking lighting algorithms (per-pixel lighting):
for(each face f)
{
for(each vertex v)
{
Vertex vertex;
vertex.position = LoadPosition(f, v);
vertex.normal = ...precalculated somewhere...
vertices.push_back(vertex);
}
}
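One common way to obtain such per-vertex normals when the asset does not provide them is to average the normals of all faces that share a position. A rough sketch, reusing the Vector3 helpers above and the Face struct from the C++ example below:
// Accumulate each face's unnormalized normal into the positions it references, then normalize.
std::vector<Vector3> ComputeVertexNormals(const std::vector<Vector3>& positions,
                                          const std::vector<Face>& faces)
{
    std::vector<Vector3> normals(positions.size(), Vector3{ 0.0f, 0.0f, 0.0f });
    for (const Face& face : faces)
    {
        const Vector3& p0 = positions[face.position_ids[0]];
        const Vector3& p1 = positions[face.position_ids[1]];
        const Vector3& p2 = positions[face.position_ids[2]];
        Vector3 n = Cross(Subtract(p1, p0), Subtract(p2, p0)); // unnormalized: larger faces contribute more
        for (uint32_t i = 0; i < 3; ++i)
        {
            Vector3& dst = normals[face.position_ids[i]];
            dst.x += n.x; dst.y += n.y; dst.z += n.z;
        }
    }
    for (Vector3& n : normals)
        n = Normalize(n);
    return normals;
}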
Note 2: usually you will want to read precalculated normals from the asset file rather than compute them at runtime:
for(each face f)
{
for(each vertex v)
{
Vertex vertex;
vertex.position = LoadPosition(f, v);
vertex.normal = LoadNormal(f, v);
vertices.push_back(vertex);
}
}
(The .obj format allows per-vertex normals to be stored.) An example found via Google:
# cube.obj
#
g cube
# positions
v 0.0 0.0 0.0
v 0.0 0.0 1.0
v 0.0 1.0 0.0
v 0.0 1.0 1.0
v 1.0 0.0 0.0
v 1.0 0.0 1.0
v 1.0 1.0 0.0
v 1.0 1.0 1.0
# normals
vn 0.0 0.0 1.0
vn 0.0 0.0 -1.0
vn 0.0 1.0 0.0
vn 0.0 -1.0 0.0
vn 1.0 0.0 0.0
vn -1.0 0.0 0.0
# faces: indices of position / texcoord(empty) / normal
f 1//2 7//2 5//2
f 1//2 3//2 7//2
f 1//6 4//6 3//6
f 1//6 2//6 4//6
f 3//3 8//3 7//3
f 3//3 4//3 8//3
f 5//5 7//5 8//5
f 5//5 8//5 6//5
f 1//4 5//4 6//4
f 1//4 6//4 2//4
f 2//1 6//1 8//1
f 2//1 8//1 4//1
Example code in C++ (untested):
struct Vector3{ float x, y, z; };
struct Face
{
uint32_t position_ids[3];
uint32_t normal_ids[3];
};
struct Vertex
{
Vector3 position;
Vector3 normal;
};
std::vector<Vertex> vertices; // Your future vertex buffer
std::vector<uint32_t> indices; // Your future index buffer
void ParseOBJ(std::vector<Vector3>& positions, std::vector<Vector3>& normals, std::vector<Face>& faces) { /* TODO */ }
void LoadOBJ(const std::wstring& filename, std::vector<Vertex>& vertices, std::vector<uint32_t>& indices)
{
// after parsing obj file
// you will have positions, normals
// and faces (which contains indices for positions and normals)
std::vector<Vector3> positions;
std::vector<Vector3> normals;
std::vector<Face> faces;
ParseOBJ(positions, normals, faces);
for (auto itFace = faces.begin(); itFace != faces.end(); ++itFace) // for each face
{
for (uint32_t i = 0; i < 3; ++i) // for each face vertex
{
uint32_t position_id = itFace->position_ids[i]; // just for short writing later
uint32_t normal_id = itFace->normal_ids[i];
Vertex vertex;
vertex.position = positions[position_id];
vertex.normal = normals[normal_id];
indices.push_back(static_cast<uint32_t>(vertices.size())); // index of the vertex pushed on the next line
vertices.push_back(vertex);
}
}
}
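ParseOBJ is left as a TODO above; just as an illustration, a minimal sketch for files written exactly in the triangulated pos//normal form of the cube example (no texcoords, no error handling; OBJ indices are 1-based, so 1 is subtracted) might look like this. The filename parameter is added here because the declaration above does not receive one, and the wide-string std::ifstream constructor is an MSVC extension.
#include <cstdio>
#include <fstream>
#include <sstream>
#include <string>
void ParseOBJ(const std::wstring& filename,
              std::vector<Vector3>& positions,
              std::vector<Vector3>& normals,
              std::vector<Face>& faces)
{
    std::ifstream file(filename); // MSVC accepts a wide filename; convert on other platforms
    std::string line;
    while (std::getline(file, line))
    {
        std::istringstream stream(line);
        std::string tag;
        stream >> tag;
        if (tag == "v")
        {
            Vector3 p;
            stream >> p.x >> p.y >> p.z;
            positions.push_back(p);
        }
        else if (tag == "vn")
        {
            Vector3 n;
            stream >> n.x >> n.y >> n.z;
            normals.push_back(n);
        }
        else if (tag == "f")
        {
            Face face;
            for (uint32_t i = 0; i < 3; ++i)
            {
                std::string corner; // e.g. "1//2" = position 1, normal 2
                stream >> corner;
                unsigned int pos_id = 0, normal_id = 0;
                std::sscanf(corner.c_str(), "%u//%u", &pos_id, &normal_id);
                face.position_ids[i] = pos_id - 1;   // OBJ indices start at 1
                face.normal_ids[i] = normal_id - 1;
            }
            faces.push_back(face);
        }
    }
}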
Note that once the normal data is merged into the vertices you no longer need the normal indices, so the normals themselves are not indexed (two equal normals may end up stored in different vertices, which wastes some space). As written, the loop emits one vertex per face corner, so the index buffer is just 0, 1, 2, ...; you can still benefit from indexed rendering by reusing a vertex whenever a face corner repeats a position/normal combination that was already emitted, as shown in the sketch below.
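A sketch of that deduplication (not part of the original answer): key a map on the (position index, normal index) pair and append a new vertex only the first time a combination is seen.
#include <map>
#include <utility>
void BuildIndexedBuffers(const std::vector<Vector3>& positions,
                         const std::vector<Vector3>& normals,
                         const std::vector<Face>& faces,
                         std::vector<Vertex>& vertices,
                         std::vector<uint32_t>& indices)
{
    std::map<std::pair<uint32_t, uint32_t>, uint32_t> cache; // (position_id, normal_id) -> vertex index
    for (const Face& face : faces)
    {
        for (uint32_t i = 0; i < 3; ++i)
        {
            std::pair<uint32_t, uint32_t> key(face.position_ids[i], face.normal_ids[i]);
            auto it = cache.find(key);
            if (it == cache.end())
            {
                // First time this position/normal combination appears: emit a new vertex.
                Vertex vertex;
                vertex.position = positions[key.first];
                vertex.normal = normals[key.second];
                uint32_t newIndex = static_cast<uint32_t>(vertices.size());
                vertices.push_back(vertex);
                it = cache.emplace(key, newIndex).first;
            }
            indices.push_back(it->second); // reuse the existing vertex if the pair was seen before
        }
    }
}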
I should add that, of course, the programmable pipeline of modern GPUs allows for trickier approaches than this.