Rendering fonts with SDL2, OpenGL ES 2.0 (GLSL 1.0) and FreeType

Posted: 2018-06-20 19:19:01

Tags: c fonts rendering sdl opengl-es-2.0

This is my first post here. I am developing an application intended to run on a Raspberry Pi 3 board. My goal is to draw the graphics on the GPU rather than the CPU, so that enough CPU headroom is left for the processing the rest of the application demands: math calculations, I/O comms, I2C comms, virtual serial port comms, and so on.

So far, after following the paid tutorial at https://keasigmadelta.com/store/gles3sdl2-tutorial/, I have reached the point of drawing lines with glDrawArrays() using SDL2 and OpenGL ES 2.0. I successfully got OpenGL ES 2.0 and GLSL 1.0 working on a Raspberry Pi 3 with the GL driver enabled.
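
To give an idea of that working path, the line drawing boils down to something like this (a sketch of my own, not the exact tutorial code; it reuses the vec4 "coord" attribute of the vertex shader shown further below, and assumes vbo and window are set up as in the full listing):

// Two endpoints directly in clip space; attribute 0 is the vec4 "coord"
// of the vertex shader below (the zw texture coords are unused for lines)
GLfloat line[2][4] = {
    {-0.5f, -0.5f, 0.0f, 0.0f},
    { 0.5f,  0.5f, 0.0f, 0.0f},
};
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof line, line, GL_DYNAMIC_DRAW);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(0);
glDrawArrays(GL_LINES, 0, 2);   // one line; the 1000-line test just loops this
SDL_GL_SwapWindow(window);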

I am also trying to render fonts with SDL2 (via SDL2_ttf), which offers nice functions that let you define the color, size, and other parameters of the final rendering. Rendering fonts with the SDL2 function TTF_RenderText_Blended(...) gives me perfect results in reasonable time with acceptable CPU overhead on my Intel Core 2 Quad Q9550 @ 2.8 GHz, but I cannot say the same for the Raspberry Pi 3. Using the GPU via glDrawArrays() on the Pi 3 gives impressive results: roughly 5-10% CPU load while drawing 1000 lines between randomly chosen vertices more than 50 times per second. But I need to render fonts with the GPU rather than the CPU, because pure SDL2 font rendering on the Pi 3 causes 50-60% CPU load, which leaves me no room for the other math calculations etc.
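
For reference, the CPU-bound SDL2_ttf path that performs well on the desktop looks roughly like this (a minimal sketch; the font path, point size and the SDL_Renderer setup are my assumptions here, since the real code draws through OpenGL instead):

// Rasterizes one string on the CPU with SDL2_ttf and blits it with an SDL_Renderer
#include <SDL2/SDL.h>
#include <SDL2/SDL_ttf.h>

void draw_text_cpu(SDL_Renderer *renderer, const char *msg, int x, int y) {
  TTF_Font *font = TTF_OpenFont("LiberationMono-Bold.ttf", 48);
  if (!font) return;
  SDL_Color white = {255, 255, 255, 255};
  SDL_Surface *surf = TTF_RenderText_Blended(font, msg, white); // CPU rasterization
  if (surf) {
    SDL_Texture *tex = SDL_CreateTextureFromSurface(renderer, surf);
    SDL_Rect dst = {x, y, surf->w, surf->h};
    SDL_RenderCopy(renderer, tex, NULL, &dst);
    SDL_DestroyTexture(tex);
    SDL_FreeSurface(surf);
  }
  TTF_CloseFont(font);
}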

After searching the internet for 2-3 days, I decided to follow this guide: https://en.wikibooks.org/wiki/OpenGL_Programming/Modern_OpenGL_Tutorial_Text_Rendering_01, which however does not cover SDL2. I was able to compile the code I give below without errors using: g++ -Wall main.cpp -I/usr/include/freetype2 -lSDL2 -lSDL2_ttf -lGL -lGLEW -lfreetype -o main. You can disregard some of the options, as they refer to other libraries I am using, such as SDL2_ttf.

Keep in mind that, although not shown here, this code already works fine with glDrawArrays().

Some of you will find some of my statements laughable, I know.

All I get from the code below is a screen in the color set by glClearColor(0.5, 0.5, 0.5, 1), i.e. gray, and that is exactly what I get. Nothing else happens. For debugging, I placed an SDL_Log() call inside render_text(). As you may have noticed, the display() function contains two calls to render_text(), so the SDL_Log() fires twice and prints the following in the console:

INFO: Debug info: glyph w: 0, glyph rows: 0
INFO: Debug info: glyph w: 25, glyph rows: 36

I don't have any more information to give you. Could you help me get the font rendering working?

One thing is certain: I have to sit down and properly learn some OpenGL ES and GLSL.

Vertex shader contents:

#version 100

attribute vec4 coord;
varying vec2 texcoord;

void main(void) {
  gl_Position = vec4(coord.xy, 0, 1);
  texcoord = coord.zw;
}

Fragment shader contents:

#version 100

#ifdef GL_ES
  precision highp float;
#endif

varying vec2 texcoord;
uniform sampler2D tex;
uniform vec4 color;

void main(void) {
  gl_FragColor = vec4(1, 1, 1, texture2D(tex, texcoord).r) * color;
}

Compiled with:

g++ -Wall main.cpp -I/usr/include/freetype2 -lSDL2 -lGL -lGLEW -lfreetype -o main

The source code was tested on Linux Mint 18.3 (Ubuntu 16.04) and is shown below:

// Standard libs
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <time.h>

// Add -lSDL2
// Found at /usr/local/include/SDL2
#include <SDL2/SDL.h>

// Add -lGL and -lGLEW to compiler
// Found at /usr/include/GL
#include <GL/glew.h>
#include <GL/glu.h>

// Add -lfreetype and -I/usr/include/freetype2 to compiler
// Found at /usr/include/freetype2/freetype/config/
#include <ft2build.h>
#include FT_FREETYPE_H

SDL_Window *window=NULL;
SDL_GLContext openGLContext;
FT_Library ft=NULL;
FT_Face face;
GLuint shaderProg;

typedef struct {
  float position[2];
} Vertex;

// The function render_text() takes 5 arguments: the string to render, the x and y start
// coordinates, and the x and y scale parameters. The last two should be chosen such that
// one glyph pixel corresponds to one screen pixel (see the display() function below,
// which draws the whole screen).
void render_text(const char *text, float x, float y, float sx, float sy) {
  const char *p;

  FT_GlyphSlot g = face->glyph;

  SDL_Log("Debug info: glyph w: %d, glyph rows: %d", g->bitmap.width, g->bitmap.rows);

  for(p = text; *p; p++) {

    // If FT_Load_Char() returns a non-zero value then the glyph in *p could not be loaded
    if(FT_Load_Char(face, *p, FT_LOAD_RENDER))
        continue;

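    // Upload the freshly rendered glyph bitmap as a single-channel texture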
    glTexImage2D(
      GL_TEXTURE_2D,
      0,
      GL_RED,
      g->bitmap.width,
      g->bitmap.rows,
      0,
      GL_RED,
      GL_UNSIGNED_BYTE,
      g->bitmap.buffer
    );

    float x2 = x + g->bitmap_left * sx;
    float y2 = -y - g->bitmap_top * sy;
    float w = g->bitmap.width * sx;
    float h = g->bitmap.rows * sy;

    GLfloat box[4][4] = {
        {x2,     -y2    , 0, 0},
        {x2 + w, -y2    , 1, 0},
        {x2,     -y2 - h, 0, 1},
        {x2 + w, -y2 - h, 1, 1},
    };

    glBufferData(GL_ARRAY_BUFFER, sizeof box, box, GL_DYNAMIC_DRAW);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

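    // advance is in 1/64 pixel units (FreeType 26.6 fixed point), hence the /64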
    x += (g->advance.x/64) * sx;
    y += (g->advance.y/64) * sy;
  }
}


void display(void) {

  // I had to add the next three lines of code because the 1st param to glUniform4fv() was unreferenced in the Wiki tutorial.
  // After looking at: https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glGetUniformLocation.xhtml
  // I concluded that I had to use glGetUniformLocation() to get the location, but I had no idea what to pass as the 2nd param.
  // The docs say about the 2nd param: "Points to a null terminated string containing the name of the uniform variable whose location is to be queried."
  // There are uniform variables in the fragment shader, so I had to pick a name from there. I know I sound ridiculous...
  int w=0, h=0;
  SDL_GetWindowSize(window, &w, &h);
  GLint uLocation = glGetUniformLocation(shaderProg, "sample2D");

  // Clear the invisible buffer with the color specified
  glClearColor(0.5, 0.5, 0.5, 1);
  glClear(GL_COLOR_BUFFER_BIT);

  GLfloat black[4] = {0, 0, 0, 1};
  glUniform4fv(uLocation, 1, black);

  float sx = 2.0 / (float)w;
  float sy = 2.0 / (float)h;
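  // e.g. for an 800x600 window, sx = 2.0/800 = 0.0025 and sy = 2.0/600, so one glyph
  // pixel maps to one screen pixel and "-1 + 8 * sx" below is 8 pixels from the left edge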

  render_text("The Quick Brown Fox Jumps Over The Lazy Dog", -1 + 8 * sx,   1 - 50 * sy, sx, sy);
  render_text("The Misaligned Fox Jumps Over The Lazy Dog", -1 + 8.5 * sx, 1 - 100.5 * sy, sx, sy);

  // I replaced glutSwapBuffers(); with the following
  SDL_GL_SwapWindow(window);
}

void shaderProgDestroy(GLuint shaderProg) {
  glDeleteProgram(shaderProg);
}

/** Destroys a shader.
*/
static void shaderDestroy(GLuint shaderID) {
  glDeleteShader(shaderID);
}

/** Gets the file's length.
*
* @param file the file
*
* @return size_t the file's length in bytes
*/
static size_t fileGetLength(FILE *file) {
  size_t length;
  size_t currPos = ftell(file);
  fseek(file, 0, SEEK_END);
  length = ftell(file);
  // Return the file to its previous position
  fseek(file, currPos, SEEK_SET);
  return length;
}


/** Loads and compiles a shader from a file.
*
* This will print any errors to the console.
*
* @param filename the shader's filename
* @param shaderType the shader type (e.g., GL_VERTEX_SHADER)
*
* @return GLuint the shader's ID, or 0 if failed
*/
static GLuint shaderLoad(const char *filename, GLenum shaderType) {
  FILE *file = fopen(filename, "r");
  if (!file) {
    SDL_Log("Can't open file: %s\n", filename);
    return 0;
  }
  size_t length = fileGetLength(file);
  // Alloc space for the file (plus '\0' termination)
  GLchar *shaderSrc = (GLchar*)calloc(length + 1, 1);
  if (!shaderSrc) {
    SDL_Log("Out of memory when reading file: %s\n", filename);
    fclose(file);
    file = NULL;
    return 0;
  }
  fread(shaderSrc, 1, length, file);
  // Done with the file
  fclose(file);
  file = NULL;

  // Create the shader
  GLuint shader = glCreateShader(shaderType);

  glShaderSource(shader, 1, (const GLchar**)&shaderSrc, NULL);
  free(shaderSrc);
  shaderSrc = NULL;
  // Compile it
  glCompileShader(shader);
  GLint compileSucceeded = GL_FALSE;
  glGetShaderiv(shader, GL_COMPILE_STATUS, &compileSucceeded);
  if (!compileSucceeded) {
    // Compilation failed. Print error info
    SDL_Log("Compilation of shader %s failed:\n", filename);
    GLint logLength = 0;
    glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &logLength);
    GLchar *errLog = (GLchar*)malloc(logLength);
    if (errLog) {
      glGetShaderInfoLog(shader, logLength, &logLength, errLog);
      SDL_Log("%s\n", errLog);
      free(errLog);
    }
    else {
      SDL_Log("Couldn't get shader log; out of memory\n");
    }
    glDeleteShader(shader);
    shader = 0;
  }
  return shader;
}

GLuint shaderProgLoad(const char *vertFilename, const char *fragFilename) {

  // Load vertex shader file from disk
  GLuint vertShader = shaderLoad(vertFilename, GL_VERTEX_SHADER);
  if (!vertShader) {
    SDL_Log("Couldn't load vertex shader: %s\n", vertFilename);
    return 0;
  }

  // Load fragment shader file from disk
  GLuint fragShader = shaderLoad(fragFilename, GL_FRAGMENT_SHADER);
  if (!fragShader) {
    SDL_Log("Couldn't load fragment shader: %s\n", fragFilename);
    shaderDestroy(vertShader);
    vertShader = 0;
    return 0;
  }

  // Create a shader program out of the two (or more) shaders loaded
  GLuint shaderProg = glCreateProgram();
  if (shaderProg) {
    // Attach the two shaders to the program
    glAttachShader(shaderProg, vertShader);
    glAttachShader(shaderProg, fragShader);
    // Link the two shaders together
    glLinkProgram(shaderProg);

    GLint linkingSucceeded = GL_FALSE;

    // Get a status (true or false) of the linking process
    glGetProgramiv(shaderProg, GL_LINK_STATUS, &linkingSucceeded);

    // Handle the error if linking the two shaders went wrong
    if (!linkingSucceeded) {
      SDL_Log("Linking shader failed (vert. shader: %s, frag. shader: %s\n", vertFilename, fragFilename);
      GLint logLength = 0;
      glGetProgramiv(shaderProg, GL_INFO_LOG_LENGTH, &logLength);
      GLchar *errLog = (GLchar*)malloc(logLength);
      if (errLog) {
        glGetProgramInfoLog(shaderProg, logLength, &logLength, errLog);
        SDL_Log("%s\n", errLog);
        free(errLog);
      }
      else {
        SDL_Log("Couldn't get shader link log; out of memory\n");
      }
      glDeleteProgram(shaderProg);
      shaderProg = 0;
    }
  }
  else {
    SDL_Log("Couldn't create shader program\n");
  }

  // Free resources
  shaderDestroy(vertShader);
  shaderDestroy(fragShader);

  // Return the resulting shader program
  return shaderProg;
}

/** Creates the Vertex Buffer Object (VBO) containing
* the given vertices.
*
* @param vertices pointer to the array of vertices
* @param numVertices the number of vertices in the array
*/
GLuint vboCreate(Vertex *vertices, GLuint numVertices) {
  // Create the Vertex Buffer Object
  GLuint vbo;
  int nBuffers = 1;
  // Create a buffer
  glGenBuffers(nBuffers, &vbo);
  // Make the buffer a VBO buffer
  glBindBuffer(GL_ARRAY_BUFFER, vbo);
  // Copy the vertices data in the buffer, and deactivate with glBindBuffer(GL_ARRAY_BUFFER, 0);
  glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex) * numVertices, vertices, GL_STATIC_DRAW);
  glBindBuffer(GL_ARRAY_BUFFER, 0);
  // Check for problems
  GLenum err = glGetError();
  if (err != GL_NO_ERROR) {
    // Failed
    glDeleteBuffers(nBuffers, &vbo);
    SDL_Log("Creating VBO failed, code %u\n", err);
    vbo = 0;
  }
  return vbo;
}


/** Frees the VBO.
*
* @param vbo the VBO's name.
*/
void vboFree(GLuint vbo) {
  glDeleteBuffers(1, &vbo);
}
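
/* Example usage of vboCreate()/vboFree(), a sketch of mine for completeness
 * (note that main() below currently calls glGenBuffers() directly instead):
 *
 *   Vertex verts[2] = { {{-0.5f, -0.5f}}, {{ 0.5f,  0.5f}} };
 *   GLuint lineVbo = vboCreate(verts, 2);
 *   if (lineVbo) {
 *     glBindBuffer(GL_ARRAY_BUFFER, lineVbo);
 *     // ... set attribute pointers and draw ...
 *     vboFree(lineVbo);
 *   }
 */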

void freeResources(void){
  // Destroy the GL program while its context is still alive, then tear down
  shaderProgDestroy(shaderProg);
  SDL_GL_DeleteContext(openGLContext);
  SDL_DestroyWindow(window);
  SDL_Quit();
}


int main(int argc, char* args[]){
    // SDL2 video init
    SDL_Init( SDL_INIT_VIDEO | SDL_INIT_TIMER );

    // Setting openGL attributes
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 2);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 0);
    // Enable double buffering
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
    // Enable hardware accelaration if available
    SDL_GL_SetAttribute(SDL_GL_ACCELERATED_VISUAL, 1);
    glewExperimental = GL_TRUE;

    // Get window
    window = SDL_CreateWindow( "Test", SDL_WINDOWPOS_UNDEFINED, 
    SDL_WINDOWPOS_UNDEFINED, 800, 600, SDL_WINDOW_SHOWN | SDL_WINDOW_OPENGL);
    // Get openGL context
    openGLContext = SDL_GL_CreateContext(window);
    // Init glew
    glewInit();

    // ft globally defined with FT_Library ft
    FT_Init_FreeType(&ft);

    // face globally defined with FT_Face face;
    FT_New_Face(ft, "LiberationMono-Bold.ttf", 0, &face);


    // All of the init functions above (from the start of main) return normally; no errors are reported.
    // I have skipped the error-checking if conditions for the sake of simplicity.


    // Load vertex & fragment shaders, compile them, link them together, make a program and return it
    shaderProg = shaderProgLoad("shaderV1.vert", "shaderV1.frag");
    // Activate the program
    glUseProgram(shaderProg);

    // The code up to this point works fine


    // This is where the code from wikipedia starts
    FT_Set_Pixel_Sizes(face, 0, 48);

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

    GLuint vbo;

    // I set the var attribute_coord to 0 myself. Is this right? The code from the Wiki did not initialize this variable anywhere.
    GLuint attribute_coord=0;

    glGenBuffers(1, &vbo);
    glEnableVertexAttribArray(attribute_coord);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glVertexAttribPointer(attribute_coord, 4, GL_FLOAT, GL_FALSE, 0, 0);

    display();
    // This is where the code from wikipedia ends

    while (1){

      // Wait
      SDL_Delay(10);
    }


    // A function that frees resources
    freeResources();

    return 0;
}

1 Answer:

Answer 0 (score: 0)

The issue is simply that you forgot to set the uniform variables tex and color (in your case tex is not strictly necessary, because it is set to 0 by default).

The uniform locations of the active program resources tex and color can be determined with glGetUniformLocation after the program has been linked (glLinkProgram). The uniforms can then be set with glUniform1i and glUniform4fv respectively, once the shader program has been made the current program (glUseProgram):

shaderProg = shaderProgLoad("shaderV1.vert", "shaderV1.frag");

GLint tex_loc   = glGetUniformLocation( shaderProg, "tex" );
GLint color_loc = glGetUniformLocation( shaderProg, "color" );

// Activate the program
glUseProgram(shaderProg);

glUniform1i( tex_loc, 0 ); // 0, because the texture is bound to texture unit 0
float col[4] = { 1.0f, 0.0f, 0.0f, 1.0f }; // red and opaque
glUniform4fv( color_loc, 1, col);
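
As a side note (an extra check on top of the fix above): glGetUniformLocation returns -1 when the given name is not an active uniform, e.g. if it is misspelled or optimized out, so it is worth verifying both locations before using them:

if ( tex_loc == -1 || color_loc == -1 )
    SDL_Log( "Could not find uniform 'tex' and/or 'color' in the shader program\n" );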