Options for efficiently drawing a stream of byte arrays for display in Android

Asked: 2017-01-14 15:02:42

Tags: android opengl-es surfaceview textureview

Put simply, what I need to do is display a live stream of video frames in Android (each frame is in YUV420 format). I have a callback function where I receive individual frames as a byte array. It looks something like this:

public void onFrameReceived(byte[] frame, int height, int width, int format) {
    // display this frame to surfaceview/textureview.
}

One option that works, but is too slow, is to convert the byte array to a Bitmap and draw it to the canvas of a SurfaceView. In the future I would like to be able to alter the brightness, contrast, etc. of each frame, and I am hoping I can use OpenGL ES for that. What other options do I have to do this efficiently?
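For reference, the slow CPU path mentioned above looks roughly like this in plain Java: a minimal sketch, assuming a planar I420 layout (full Y plane, then U, then V) with even dimensions, producing the ARGB ints you would hand to Bitmap.createBitmap. The class and method names are illustrative only.

```java
// Sketch of the CPU-side YUV420 (I420) -> ARGB conversion; assumes even
// width/height and a frame laid out as Y plane, U plane, V plane.
class YuvToArgb {
    static int[] toArgb(byte[] frame, int width, int height) {
        int[] out = new int[width * height];
        int ySize = width * height;     // Y plane is full resolution
        int uvW = width / 2;            // U/V planes are half width...
        int uSize = ySize / 4;          // ...and half height
        for (int j = 0; j < height; j++) {
            for (int i = 0; i < width; i++) {
                float y = (frame[j * width + i] & 0xFF) / 255f;
                int uvIdx = (j / 2) * uvW + (i / 2);
                float u = (frame[ySize + uvIdx] & 0xFF) / 255f - 0.5f;
                float v = (frame[ySize + uSize + uvIdx] & 0xFF) / 255f - 0.5f;
                int r = clamp(y + 1.403f * v);
                int g = clamp(y - 0.344f * u - 0.714f * v);
                int b = clamp(y + 1.770f * u);
                out[j * width + i] = 0xFF000000 | (r << 16) | (g << 8) | b;
            }
        }
        return out;
    }

    static int clamp(float c) {
        return Math.max(0, Math.min(255, Math.round(c * 255f)));
    }
}
```

Doing this per frame on the CPU (plus the Bitmap allocation and canvas draw) is exactly the cost the question is trying to avoid.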

Bear in mind that, unlike in implementations of the Camera or MediaPlayer classes, I can't direct the output to a SurfaceView/TextureView using camera.setPreviewTexture(surfaceTexture), since I am receiving individual frames via GStreamer in C.

2 answers:

Answer 0 (score: 2)

I'm using ffmpeg for my project, but the principle for rendering YUV frames should be the same as in your case.

If a frame, for example, is 756 x 576, then the Y plane will be that size. The U and V planes are half the width and half the height of the Y plane, so you have to make sure you account for the size differences.
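As a sketch of that size bookkeeping in plain Java (planar YUV420/I420 layout assumed, even dimensions):

```java
// Plane sizes for a planar YUV420 (I420) frame: U and V are each
// half the width and half the height of the Y plane.
class Yuv420Planes {
    // returns {yBytes, uBytes, vBytes}
    static int[] planeSizes(int width, int height) {
        int y = width * height;
        int uv = (width / 2) * (height / 2);
        return new int[] { y, uv, uv };
    }
}
```

For the 756 x 576 example this gives a 435456-byte Y plane and two 108864-byte chroma planes, so a full frame is width * height * 3 / 2 bytes.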

I don't know about the camera APIs, but the frames I get from a DVB source have both a width and a stride (linesize) for each line: extra pixels are padded onto the end of each line in the frame. If yours are the same, take that into account when calculating your texture coordinates.

Adjusting the texture coordinates to account for the width and stride (linesize):

float u = 1.0f / buffer->y_linesize * buffer->wid; // adjust texture coord for edge
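The same adjustment in plain Java (hypothetical names; `linesize` is the padded line length in pixels): the visible right edge of the texture sits at width/linesize rather than at 1.0.

```java
// Rightmost horizontal texture coordinate for a padded (strided) plane:
// only width of the linesize pixels per row are real image data.
class StrideCoord {
    static float rightEdgeU(int linesize, int width) {
        return 1.0f / linesize * width;
    }
}
```

Sampling past this coordinate would read the padding bytes at the end of each row, which typically shows up as a garbage strip on the right edge of the video.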

The vertex shader I use takes screen coordinates from 0.0 to 1.0, but you can change these to suit. It also takes in texture coordinates and a colour input. I've used the colour input so that I can add fades and so on.

Vertex shader:

#ifdef GL_ES
precision mediump float;
const float c1 = 1.0;
const float c2 = 2.0;
#else
const float c1 = 1.0f;
const float c2 = 2.0f;
#endif

attribute vec4 a_vertex;
attribute vec2 a_texcoord;
attribute vec4 a_colorin;
varying vec2 v_texcoord;
varying vec4 v_colorout;



void main(void)
{
    v_texcoord = a_texcoord;
    v_colorout = a_colorin;

    float x = a_vertex.x * c2 - c1;
    float y = -(a_vertex.y * c2 - c1);

    gl_Position = vec4(x, y, a_vertex.z, c1);
}

The fragment shader takes three texture uniforms, one each for Y, U and V, and converts them to RGB. It also multiplies by the colour passed in from the vertex shader:

#ifdef GL_ES
precision mediump float;
#endif

uniform sampler2D u_texturey;
uniform sampler2D u_textureu;
uniform sampler2D u_texturev;
varying vec2 v_texcoord;
varying vec4 v_colorout;

void main(void)
{
    float y = texture2D(u_texturey, v_texcoord).r;
    float u = texture2D(u_textureu, v_texcoord).r - 0.5;
    float v = texture2D(u_texturev, v_texcoord).r - 0.5;
    vec4 rgb = vec4(y + 1.403 * v,
                    y - 0.344 * u - 0.714 * v,
                    y + 1.770 * u,
                    1.0);
    gl_FragColor = rgb * v_colorout;
}
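As a sanity check, the constants in this shader approximately invert the usual BT.601 forward RGB-to-YUV transform. A plain-Java round trip (illustrative only, not part of the renderer; the forward constants 0.299/0.587/0.114 and the 0.565/0.713 chroma scales are the standard BT.601 values):

```java
// Round-trip check of the shader's YUV->RGB constants against the
// standard BT.601 RGB->YUV forward transform.
class YuvRoundTrip {
    // Same arithmetic as the fragment shader above
    static float[] toRgb(float y, float u, float v) {
        return new float[] {
            y + 1.403f * v,
            y - 0.344f * u - 0.714f * v,
            y + 1.770f * u
        };
    }

    // BT.601 forward transform (full-range approximation)
    static float[] toYuv(float r, float g, float b) {
        float y = 0.299f * r + 0.587f * g + 0.114f * b;
        return new float[] { y, (b - y) * 0.565f, (r - y) * 0.713f };
    }
}
```

Converting a colour to YUV and back reproduces it to within rounding error, which is why the shader's fixed constants are safe for ordinary video sources.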

The vertices used are:

float   x, y, z;    // coords
float   s, t;       // texture coords
uint8_t r, g, b, a; // colour and alpha

Hope this helps!

EDIT:

For the NV12 format you can still use this fragment shader, although I haven't tried it myself. It takes in the interleaved UV as a luminance-alpha channel or similar.
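To illustrate what "interleaved UV as luminance-alpha" means: in NV12 the chroma plane stores U and V byte pairs back to back, so uploading it as GL_LUMINANCE_ALPHA makes `.r` (luminance) the first byte of each pair and `.a` the second. A plain-Java equivalent of that lookup (NV12 byte order assumed; NV21 swaps the two):

```java
// CPU-side equivalent of sampling an NV12 chroma plane uploaded as
// GL_LUMINANCE_ALPHA: uvPlane holds U0,V0,U1,V1,...
class InterleavedUv {
    static int u(byte[] uvPlane, int index) { return uvPlane[2 * index] & 0xFF; }     // -> .r
    static int v(byte[] uvPlane, int index) { return uvPlane[2 * index + 1] & 0xFF; } // -> .a
}
```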

See here for how one person answered that question: https://stackoverflow.com/a/22456885/2979092

Answer 1 (score: 1)

I combined several answers from SO and various articles, plus @WLGfx's answer above, to come up with this:

I create two byte buffers, one for the Y and one for the UV part of the texture, and then convert the byte buffers to textures using:
public static int createImageTexture(ByteBuffer data, int width, int height, int format, int textureHandle) {
    if (GLES20.glIsTexture(textureHandle)) {
        return updateImageTexture(data, width, height, format, textureHandle);
    }
    int[] textureHandles = new int[1];

    GLES20.glGenTextures(1, textureHandles, 0);
    textureHandle = textureHandles[0];
    GlUtil.checkGlError("glGenTextures");

    // Bind the texture handle to the 2D texture target.
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle);

    // Configure min/mag filtering, i.e. what scaling method do we use if what we're rendering
    // is smaller or larger than the source image.
    GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
    GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
    GlUtil.checkGlError("loadImageTexture");

    // Load the data from the buffer into the texture handle.
    GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, format, width, height,
            0, format, GLES20.GL_UNSIGNED_BYTE, data);
    GlUtil.checkGlError("loadImageTexture");

    return textureHandle;
}
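A sketch of how those two buffers could be sliced out of a single semi-planar YUV420 frame (a full-size Y plane followed by an interleaved UV plane half its size; the class name and layout are assumptions, not the answer's actual code). Direct buffers are used because GLES20.glTexImage2D reads from native memory:

```java
import java.nio.ByteBuffer;

// Split one YUV420 semi-planar frame into the Y buffer and the UV buffer
// that would be uploaded as GL_LUMINANCE and GL_LUMINANCE_ALPHA textures.
class FrameBuffers {
    static ByteBuffer[] split(byte[] frame, int width, int height) {
        int ySize = width * height;
        ByteBuffer y = ByteBuffer.allocateDirect(ySize);
        y.put(frame, 0, ySize).position(0);
        ByteBuffer uv = ByteBuffer.allocateDirect(ySize / 2);
        uv.put(frame, ySize, ySize / 2).position(0);
        return new ByteBuffer[] { y, uv };
    }
}
```

The Y texture is uploaded at width x height and the UV texture at (width/2) x (height/2), matching the size bookkeeping from the first answer.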

These two textures are then sent to the GLSL shader as regular 2D textures:

precision highp float;
varying vec2 vTextureCoord;
uniform sampler2D sTextureY;
uniform sampler2D sTextureUV;
uniform float sBrightnessValue;
uniform float sContrastValue;
void main (void) {
    float r, g, b, y, u, v;
    // We had put the Y values of each pixel to the R,G,B components by GL_LUMINANCE,
    // that's why we're pulling it from the R component, we could also use G or B
    y = texture2D(sTextureY, vTextureCoord).r;
    // We had put the U and V values of each pixel to the A and R,G,B components of the
    // texture respectively using GL_LUMINANCE_ALPHA. Since U,V bytes are interspread
    // in the texture, this is probably the fastest way to use them in the shader
    u = texture2D(sTextureUV, vTextureCoord).r - 0.5;
    v = texture2D(sTextureUV, vTextureCoord).a - 0.5;
    // The numbers are just YUV to RGB conversion constants
    r = y + 1.13983*v;
    g = y - 0.39465*u - 0.58060*v;
    b = y + 2.03211*u;
    // setting brightness/contrast
    r = r * sContrastValue + sBrightnessValue;
    g = g * sContrastValue + sBrightnessValue;
    b = b * sContrastValue + sBrightnessValue;
    // We finally set the RGB color of our pixel
    gl_FragColor = vec4(r, g, b, 1.0);
}
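The brightness/contrast step at the end of this shader is just a per-channel linear transform; a plain-Java equivalent with explicit clamping (GL clamps gl_FragColor to [0, 1] for you, so the shader can omit it):

```java
// CPU-side equivalent of the shader's brightness/contrast step:
// channel * contrast + brightness, clamped to [0, 1].
class BrightnessContrast {
    static float apply(float channel, float contrast, float brightness) {
        float c = channel * contrast + brightness;
        return Math.max(0f, Math.min(1f, c));
    }
}
```

With contrast = 1.0 and brightness = 0.0 the colour passes through unchanged, which is the natural default for the two uniforms.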