Crash when writing to the index buffer

Asked: 2015-04-18 07:01:37

Tags: c++ opengl vbo

I am currently writing an engine in C++11/SDL2/OpenGL for Windows, Mac, and Linux.

It works fine on Mac and Linux, but I'm hitting a nasty crash on a Windows + Nvidia desktop (my only other Windows environment is a virtual machine, which doesn't support my OpenGL feature set).

I've had two friends test it on different Windows + AMD machines, so my problem appears to be specific to Nvidia's drivers at the version I currently have installed, which means an SSCCE probably won't help.
Creating vertex buffers works fine, and creating the index buffer shown below used to work at some unknown point in time. Perhaps before a driver update...

For reference, my Buffer class is as follows:

static GLenum GetGLBufferType( BufferType bufferType ) {
    switch ( bufferType ) {
    case BufferType::Vertex: {
        return GL_ARRAY_BUFFER;
    } break;

    case BufferType::Index: {
        return GL_ELEMENT_ARRAY_BUFFER;
    } break;

    case BufferType::Uniform: {
        return GL_UNIFORM_BUFFER;
    } break;

    default: {
        return GL_NONE;
    } break;
    }
}

GLuint Buffer::GetID( void ) const {
    return id;
}

Buffer::Buffer( BufferType bufferType, const void *data, size_t size )
: type( GetGLBufferType( bufferType ) ), offset( 0 ), size( size )
{
    glGenBuffers( 1, &id );
    glBindBuffer( type, id );
    glBufferData( type, size, data, GL_STREAM_DRAW );

    if ( bufferType == BufferType::Uniform ) {
        // Read into a GLint first; writing through a reinterpret_cast'd pointer
        // to a wider member would leave its upper bytes uninitialized.
        GLint uboAlignment = 0;
        glGetIntegerv( GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT, &uboAlignment );
        alignment = static_cast<size_t>( uboAlignment );
    }
    else {
        alignment = 16;
    }
}

Buffer::~Buffer() {
    glDeleteBuffers( 1, &id );
}

void *Buffer::Map( void ) {
    Bind();
    return glMapBufferRange( type, 0, size, GL_MAP_WRITE_BIT );
}

BufferMemory Buffer::MapDiscard( size_t allocSize ) {
    Bind();

    allocSize = (allocSize + alignment - 1) & ~(alignment - 1);
    if ( (offset + allocSize) > size ) {
        // We've run out of memory. Orphan the buffer and allocate some more memory
        glBufferData( type, size, nullptr, GL_STREAM_DRAW );
        offset = 0;
    }

    BufferMemory result;
    result.devicePtr = glMapBufferRange(
        type,
        offset,
        allocSize,
        GL_MAP_WRITE_BIT | GL_MAP_UNSYNCHRONIZED_BIT | GL_MAP_INVALIDATE_RANGE_BIT
    );
    result.offset = offset;
    result.size = allocSize;
    offset += allocSize;
    return result;
}

void Buffer::Unmap( void ) {
    glUnmapBuffer( type );
}

void Buffer::BindRange( int index, size_t rangeOffset, size_t rangeSize ) const {
    if ( !rangeSize ) {
        rangeSize = size - rangeOffset;
    }

    glBindBufferRange( type, index, id, rangeOffset, rangeSize );
}

void Buffer::Bind( void ) const {
    glBindBuffer( type, id );
}

The code that creates the index buffer looks like this:

static const uint16_t quadIndices[6] = { 0, 2, 1, 1, 2, 3 };
quadsIndexBuffer = new Buffer( BufferType::Index, quadIndices, sizeof(quadIndices) );

The crash occurs at glBufferData( type, size, data, GL_STREAM_DRAW ); with:
id = 4
type = 34963, aka GL_ELEMENT_ARRAY_BUFFER
size = 12
data = quadIndices

If I instead create the index buffer without filling it, then map it and write to it like this:

quadsIndexBuffer = new Buffer( BufferType::Index, nullptr, sizeof(quadIndices) );
BufferMemory bufferMem = quadsIndexBuffer->MapDiscard( 6 * sizeof(uint16_t) );
uint16_t *indexBuffer = static_cast<uint16_t *>( bufferMem.devicePtr );
for ( size_t i = 0u; i < 6; i++ ) {
    *indexBuffer++ = quadIndices[i];
}
quadsIndexBuffer->Unmap();

then the crash occurs at glMapBufferRange, inside Buffer::MapDiscard.

The rationale behind the mapping approach is that attempting to map a buffer that is still in use causes a busy-wait.

// Usage strategy is map-discard. In other words, keep appending to the buffer
// until we run out of memory. At this point, orphan the buffer by re-allocating
// a buffer of the same size and access bits.

I've tried searching for an answer, but the only solutions I found were about passing the wrong size, or the arguments to glBufferData in the wrong order. No luck.

1 Answer:

Answer 0 (score: 1)

It seems that with GL_DEBUG_OUTPUT_SYNCHRONOUS_ARB disabled, the crash no longer manifests and my program behaves correctly.

I think I was right that this is a driver bug. I'll try to forward it to the development team.

For reference, this is OpenGL 3.1 on an Nvidia GTX 680 with driver version 350.12, with glewExperimental enabled and the following OpenGL context flags set: core profile, forward-compatible, debug.