I want to use OpenCV Mat data as an OpenGL texture. I'm working on a Qt 4.8 application extending QGLWidget (going through QImage is something I'd really rather avoid). But something is going wrong...
First the problem in screenshots, then the code I'm using.
If I don't resize the cv::Mat (grabbed from a video), everything works fine. If I halve its dimensions (scaleFactor = 2), everything is fine. If the scale factor is 2.8 or 2.9, still fine. But at some point it goes wrong.
Here are some screenshots with a nice red background so the OpenGL quad dimensions are easy to see:
scaleFactor = 2
scaleFactor = 2.8
scaleFactor = 3
scaleFactor = 3.2
Now the code of the paint method. I found the code for copying cv::Mat data into a GL texture in this nice blog post.
void VideoViewer::paintGL()
{
    glClear(GL_COLOR_BUFFER_BIT);
    glClearColor(1.0, 0.0, 0.0, 1.0);
    glEnable(GL_BLEND);
    // Use a simple blendfunc for drawing the background
    glBlendFunc(GL_ONE, GL_ZERO);
    if (!cvFrame.empty()) {
        glEnable(GL_TEXTURE_2D);
        GLuint tex = matToTexture(cvFrame);
        glBindTexture(GL_TEXTURE_2D, tex);
        glBegin(GL_QUADS);
        glTexCoord2f(1, 1); glVertex2f(0, cvFrame.size().height);
        glTexCoord2f(1, 0); glVertex2f(0, 0);
        glTexCoord2f(0, 0); glVertex2f(cvFrame.size().width, 0);
        glTexCoord2f(0, 1); glVertex2f(cvFrame.size().width, cvFrame.size().height);
        glEnd();
        glDeleteTextures(1, &tex);
        glDisable(GL_TEXTURE_2D);
        glFlush();
    }
}
GLuint VideoViewer::matToTexture(cv::Mat &mat, GLenum minFilter, GLenum magFilter, GLenum wrapFilter)
{
    // http://r3dux.org/2012/01/how-to-convert-an-opencv-cvmat-to-an-opengl-texture/
    // Generate a number for our textureID's unique handle
    GLuint textureID;
    glGenTextures(1, &textureID);

    // Bind to our texture handle
    glBindTexture(GL_TEXTURE_2D, textureID);

    // Catch silly-mistake texture interpolation method for magnification
    if (magFilter == GL_LINEAR_MIPMAP_LINEAR  ||
        magFilter == GL_LINEAR_MIPMAP_NEAREST ||
        magFilter == GL_NEAREST_MIPMAP_LINEAR ||
        magFilter == GL_NEAREST_MIPMAP_NEAREST)
    {
        std::cout << "VideoViewer::matToTexture > "
                  << "You can't use MIPMAPs for magnification - setting filter to GL_LINEAR"
                  << std::endl;
        magFilter = GL_LINEAR;
    }

    // Set texture interpolation methods for minification and magnification
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, minFilter);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, magFilter);

    // Set texture clamping method
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, wrapFilter);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, wrapFilter);

    // Set incoming texture format to:
    // GL_BGR for CV_CAP_OPENNI_BGR_IMAGE,
    // GL_LUMINANCE for CV_CAP_OPENNI_DISPARITY_MAP,
    // Work out other mappings as required (there's a list in comments in main())
    GLenum inputColourFormat = GL_BGR;
    if (mat.channels() == 1)
    {
        inputColourFormat = GL_LUMINANCE;
    }

    // Create the texture
    glTexImage2D(GL_TEXTURE_2D,     // Type of texture
                 0,                 // Pyramid level (for mip-mapping) - 0 is the top level
                 GL_RGB,            // Internal colour format to convert to
                 mat.cols,          // Image width i.e. 640 for Kinect in standard mode
                 mat.rows,          // Image height i.e. 480 for Kinect in standard mode
                 0,                 // Border width in pixels (can either be 1 or 0)
                 inputColourFormat, // Input image format (i.e. GL_RGB, GL_RGBA, GL_BGR etc.)
                 GL_UNSIGNED_BYTE,  // Image data type
                 mat.ptr());        // The actual image data itself

    return textureID;
}
And here is how the cv::Mat is loaded and scaled:
void VideoViewer::retriveScaledFrame()
{
    video >> cvFrame;
    cv::Size s = cv::Size(cvFrame.size().width/scaleFactor, cvFrame.size().height/scaleFactor);
    cv::resize(cvFrame, cvFrame, s);
}
Sometimes the image is rendered correctly, sometimes not. Why? Surely there is some mismatch between how OpenCV and OpenGL store pixels in memory. But how can I solve it? And why does it work for some sizes and not for others?
Answer 0 (score: 0)
Yes, it was a problem with how pixels are stored in memory. OpenCV and OpenGL can lay out pixel rows differently, and I had to understand better how that works.
In OpenGL you can specify these parameters with glPixelStorei, using GL_UNPACK_ALIGNMENT and GL_UNPACK_ROW_LENGTH.
A good explanation can be found here.