How to render Android's YUV-YV12 camera image on the background in libgdx with OpenGLES 2.0 in real-time?

Time: 2017-05-17 17:13:57

Tags: android opengl-es shader

This question relates to this one: How to render Android's YUV-NV21 camera image on the background in libgdx with OpenGLES 2.0 in real-time?

There is a very good explanation in the accepted answer there, but my problem is slightly different: I have YV12 instead of NV21. (Here are some specs: https://wiki.videolan.org/YUV and https://www.fourcc.org/yuv.php)

What about YUV-YV12? The Y buffer is the same, but the U and V are not interleaved, so it looks like I have two separate buffers for V and U. But then, how do I hand them to the shader? Using a Pixmap.Format.Intensity texture, I think, which sets GL_LUMINANCE?

What I don't fully understand about the NV21 case: Y is uploaded with GL_LUMINANCE, and for the interleaved "VUVU" buffer the Pixmap format maps to GL_LUMINANCE_ALPHA, so the buffer gets expanded to RGBA with RGB = V and A = U?

YV12 uses a "VV...UU" layout (a full V plane followed by a full U plane), so it is easy to split it into separate V and U buffers, but then how do I bind them and fetch u and v in the shader?

Thanks for your help, that example is great! But I need something slightly different, and for that I need to understand the shader binding behavior in more depth.

Thanks!

1 Answer:

Answer 0: (score: 1)

OK, I got it: YUV-YV12 is 12 bits per pixel: an 8-bit Y plane followed by 8-bit 2x2-subsampled V and U planes.
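As a quick sanity check on that layout, here is the arithmetic for a 640x480 frame (the variable names are just illustrative):

    int frameWidth = 640, frameHeight = 480;
    int ySize = frameWidth * frameHeight;  //307200 bytes, 1 byte per pixel
    int chromaSize = ySize / 4;            //76800 bytes, (width/2) * (height/2)
    int vOffset = ySize;                   //the V plane starts right after Y
    int uOffset = ySize + chromaSize;      //the U plane starts after V
    //Total: 307200 + 2 * 76800 = 460800 bytes = 1.5 bytes per pixel = 12 bits per pixel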

Starting from this answer (which details the whole YUV-NV21-to-RGB shader display): https://stackoverflow.com/a/22456885/4311503, let's make a few changes.

So, we can split the buffer into 3 parts:

    yBuffer = ByteBuffer.allocateDirect(640 * 480);
    uBuffer = ByteBuffer.allocateDirect(640 * 480 / 4); //We have (width/2) * (height/2) pixels, one byte each
    vBuffer = ByteBuffer.allocateDirect(640 * 480 / 4); //We have (width/2) * (height/2) pixels, one byte each

Then fetch the data:

int size = 640 * 480; //byte count of the Y plane (width * height)

yBuffer.put(frame.getData(), 0, size);
yBuffer.position(0);
//YV12: the Y plane (size bytes) is followed by the V plane (size/4 bytes), then the U plane (size/4 bytes)
vBuffer.put(frame.getData(), size, size / 4);
vBuffer.position(0);
uBuffer.put(frame.getData(), size * 5 / 4, size / 4);
uBuffer.position(0);
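For context, frame.getData() above is assumed to be the raw YV12 preview buffer. A minimal sketch of requesting YV12 frames with the deprecated android.hardware.Camera API (the same API family the referenced NV21 answer uses; the exact setup is an assumption, not part of the original):

    Camera camera = Camera.open();
    Camera.Parameters params = camera.getParameters();
    params.setPreviewSize(640, 480);
    params.setPreviewFormat(ImageFormat.YV12); //request YV12 instead of the NV21 default
    camera.setParameters(params);
    //Note: for YV12 previews, Android aligns each row stride to 16 bytes;
    //with width 640 (chroma rows of 320 bytes) the planes happen to be tightly packed.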

Now, prepare the textures:

yTexture = new Texture(640, 480, Pixmap.Format.Intensity); //An 8-bit-per-pixel format
uTexture = new Texture(640 / 2, 480 / 2, Pixmap.Format.Intensity); //An 8-bit-per-pixel format
vTexture = new Texture(640 / 2, 480 / 2, Pixmap.Format.Intensity); //An 8-bit-per-pixel format

Then change the bindings, since we now use 3 textures instead of 2:

//Set texture slot 0 as active and bind our texture object to it
Gdx.gl.glActiveTexture(GL20.GL_TEXTURE0);
yTexture.bind();

//Y texture is (width*height) in size and each pixel is one byte;
//by setting GL_LUMINANCE, OpenGL puts this byte into R,G and B
//components of the texture
Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_LUMINANCE,
        640, 480, 0, GL20.GL_LUMINANCE, GL20.GL_UNSIGNED_BYTE, yBuffer);

//Use linear interpolation when magnifying/minifying the texture to
//areas larger/smaller than the texture size
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
        GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
        GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
        GL20.GL_TEXTURE_WRAP_S, GL20.GL_CLAMP_TO_EDGE);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
        GL20.GL_TEXTURE_WRAP_T, GL20.GL_CLAMP_TO_EDGE);

/*
 * Prepare the U channel texture
 */

//Set texture slot 1 as active and bind our texture object to it
Gdx.gl.glActiveTexture(GL20.GL_TEXTURE1);
uTexture.bind();

Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_LUMINANCE,
        640 / 2, 480 / 2, 0, GL20.GL_LUMINANCE, GL20.GL_UNSIGNED_BYTE,
        uBuffer);

//Use linear interpolation when magnifying/minifying the texture to
//areas larger/smaller than the texture size
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
        GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
        GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
        GL20.GL_TEXTURE_WRAP_S, GL20.GL_CLAMP_TO_EDGE);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
        GL20.GL_TEXTURE_WRAP_T, GL20.GL_CLAMP_TO_EDGE);

//Set texture slot 2 as active and bind our texture object to it
Gdx.gl.glActiveTexture(GL20.GL_TEXTURE2);
vTexture.bind();

//The V texture is (width/2) * (height/2) in size; with GL_LUMINANCE, each byte of the buffer becomes one texel
Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_LUMINANCE,
        640 / 2, 480 / 2, 0, GL20.GL_LUMINANCE, GL20.GL_UNSIGNED_BYTE,
        vBuffer);

//Use linear interpolation when magnifying/minifying the texture to
//areas larger/smaller than the texture size
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
        GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
        GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
        GL20.GL_TEXTURE_WRAP_S, GL20.GL_CLAMP_TO_EDGE);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
        GL20.GL_TEXTURE_WRAP_T, GL20.GL_CLAMP_TO_EDGE);


shader.begin();

//Set the uniform y_texture object to the texture at slot 0
shader.setUniformi("y_texture", 0);

//Set the uniform u_texture and v_texture objects to the textures at slots 1 and 2
shader.setUniformi("u_texture", 1);
shader.setUniformi("v_texture", 2);

mesh.render(shader, GL20.GL_TRIANGLES);

shader.end();
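The mesh used above is the usual full-screen quad from the referenced NV21 answer; a minimal sketch of building it in libgdx (the exact vertex and texture-coordinate values are assumptions):

    Mesh mesh = new Mesh(true, 4, 6,
            new VertexAttribute(VertexAttributes.Usage.Position, 2, "a_position"),
            new VertexAttribute(VertexAttributes.Usage.TextureCoordinates, 2, "a_texCoord"));
    //A quad covering the whole screen in clip space, with texture coordinates flipped vertically
    mesh.setVertices(new float[]{
            -1f, -1f, 0f, 1f,   //bottom-left
             1f, -1f, 1f, 1f,   //bottom-right
             1f,  1f, 1f, 0f,   //top-right
            -1f,  1f, 0f, 0f}); //top-left
    mesh.setIndices(new short[]{0, 1, 2, 2, 3, 0});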

And finally, use the following shader (only the fragment shader's u and v texture parts changed):

    //Our vertex shader code; nothing special
    String vertexShader =
            "attribute vec4 a_position;                         \n" +
                    "attribute vec2 a_texCoord;                         \n" +
                    "varying vec2 v_texCoord;                           \n" +

                    "void main(){                                       \n" +
                    "   gl_Position = a_position;                       \n" +
                    "   v_texCoord = a_texCoord;                        \n" +
                    "}                                                  \n";

    //Our fragment shader code; takes the Y, U and V values of each pixel and calculates the R,G,B colors,
    //effectively performing the YUV-to-RGB conversion
    String fragmentShader =
            "#ifdef GL_ES                                       \n" +
                    "precision highp float;                             \n" +
                    "#endif                                             \n" +

                    "varying vec2 v_texCoord;                           \n" +
                    "uniform sampler2D y_texture;                       \n" +
                    "uniform sampler2D u_texture;                       \n" +
                    "uniform sampler2D v_texture;                       \n" +

                    "void main (void){                                  \n" +
                    "   float r, g, b, y, u, v;                         \n" +

                    //We had put the Y values of each pixel to the R,G,B components by GL_LUMINANCE,
                    //that's why we're pulling it from the R component, we could also use G or B
                    //see https://stackoverflow.com/questions/12130790/yuv-to-rgb-conversion-by-fragment-shader/17615696#17615696
                    //and https://stackoverflow.com/questions/22456884/how-to-render-androids-yuv-nv21-camera-image-on-the-background-in-libgdx-with-o
                    "   y = texture2D(y_texture, v_texCoord).r;         \n" +

                    //Since we use GL_LUMINANCE, each chroma component lives in its own texture; we read it from the R channel
                    "   u = texture2D(u_texture, v_texCoord).r - 0.5;  \n" +
                    "   v = texture2D(v_texture, v_texCoord).r - 0.5;  \n" +


                    //The numbers are just YUV to RGB conversion constants
                    "   r = y + 1.13983*v;                              \n" +
                    "   g = y - 0.39465*u - 0.58060*v;                  \n" +
                    "   b = y + 2.03211*u;                              \n" +

                    //We finally set the RGB color of our pixel
                    "   gl_FragColor = vec4(r, g, b, 1.0);              \n" +
                    "}                                                  \n";

That's it!