Zooming a map with OpenGL ES 2 on Android

Date: 2013-07-20 06:21:08

Tags: java android opengl-es rendering

I created a pinch zoom using a scale gesture detector, which then calls into the renderer below. It zooms through the projection matrix, and when panning the eye shift is scaled to match the current zoom level.

public class vboCustomGLRenderer  implements GLSurfaceView.Renderer {

    // Store the model matrix. This matrix is used to move models from object space (where each model can be thought
    // of being located at the center of the universe) to world space.

    private float[] mModelMatrix = new float[16];


    // Store the view matrix. This can be thought of as our camera. This matrix transforms world space to eye space;
    // it positions things relative to our eye.

    private float[] mViewMatrix = new float[16];

    // Store the projection matrix. This is used to project the scene onto a 2D viewport.
    private float[] mProjectionMatrix = new float[16];

    // Allocate storage for the final combined matrix. This will be passed into the shader program.
    private float[] mMVPMatrix = new float[16];

    // This will be used to pass in the transformation matrix.
    private int mMVPMatrixHandle;

    // This will be used to pass in model position information.
    private int mPositionHandle;

    // This will be used to pass in model color information.
    private int mColorUniformLocation;

    // How many bytes per float.
    private final int mBytesPerFloat = 4;   

    // Offset of the position data.
    private final int mPositionOffset = 0;

    // Size of the position data in elements.
    private final int mPositionDataSize = 3;

    // Stride of the position data, in bytes (floats per vertex * bytes per float).
    private final int mPositionFloatStrideBytes = mPositionDataSize * mBytesPerFloat;


    // Position the eye at the center of the map extents (MBR).
    public double eyeX = default_settings.mbrMinX + ((default_settings.mbrMaxX - default_settings.mbrMinX)/2);
    public double eyeY = default_settings.mbrMinY + ((default_settings.mbrMaxY - default_settings.mbrMinY)/2);

    // Position the eye behind the origin.
    //final float eyeZ = 1.5f;
    public float eyeZ = 1.5f;

    // We are looking straight down at the map: same x/y as the eye, toward z = 0.
    public double lookX = eyeX;
    public double lookY = eyeY;
    public float lookZ = 0.0f;

    // Set our up vector. This is where our head would be pointing were we holding the camera.
    public float upX = 0.0f;
    public float upY = 1.0f;
    public float upZ = 0.0f;

    public double mScaleFactor = 1;
    public double mScrnVsMapScaleFactor = 0;

    public vboCustomGLRenderer() {}

    public void setEye(double x, double y){

        eyeX -= (x / screen_vs_map_horz_ratio);
        lookX = eyeX;
        eyeY += (y / screen_vs_map_vert_ratio);
        lookY = eyeY;

        // Set the camera position (View matrix)
        Matrix.setLookAtM(mViewMatrix, 0, (float)eyeX, (float)eyeY, eyeZ, (float)lookX, (float)lookY, lookZ, upX, upY, upZ);

    }

    public void setScaleFactor(float scaleFactor, float gdx, float gdy){

        mScaleFactor *= scaleFactor;

        mRight = mRight / scaleFactor;
        mLeft = -mRight;
        mTop = mTop / scaleFactor;
        mBottom = -mTop;

        //Need to calculate the shift in the eye when zooming on a particular spot.
        //So get the distance between the zoom point and eye point, figure out the
        //new eye point by getting the factor of this distance.
        double eyeXShift = (((mWidth  / 2) - gdx) - (((mWidth  / 2) - gdx) / scaleFactor));
        double eyeYShift = (((mHeight / 2) - gdy) - (((mHeight / 2) - gdy) / scaleFactor));

        screen_vs_map_horz_ratio = (mWidth/(mRight-mLeft));
        screen_vs_map_vert_ratio = (mHeight/(mTop-mBottom));

        eyeX -= (eyeXShift / screen_vs_map_horz_ratio);
        lookX = eyeX;
        eyeY += (eyeYShift / screen_vs_map_vert_ratio);
        lookY = eyeY;

        // Set the scale (Projection matrix)
        Matrix.frustumM(mProjectionMatrix, 0, (float)mLeft, (float)mRight, (float)mBottom, (float)mTop, near, far);
    }

    @Override
    public void onSurfaceCreated(GL10 unused, EGLConfig config) {


        // Set the background frame color
        //White
        GLES20.glClearColor(1.0f, 1.0f, 1.0f, 1.0f);

        // Set the view matrix. This matrix can be said to represent the camera position.
        // NOTE: In OpenGL 1, a ModelView matrix is used, which is a combination of a model and
        // view matrix. In OpenGL 2, we can keep track of these matrices separately if we choose.
        Matrix.setLookAtM(mViewMatrix, 0, (float)eyeX, (float)eyeY, eyeZ, (float)lookX, (float)lookY, lookZ, upX, upY, upZ);

        final String vertexShader =
            "uniform mat4 u_MVPMatrix;      \n"     // A constant representing the combined model/view/projection matrix.

          + "attribute vec4 a_Position;     \n"     // Per-vertex position information we will pass in.
          + "attribute vec4 a_Color;        \n"     // Per-vertex color information we will pass in.              

          + "varying vec4 v_Color;          \n"     // This will be passed into the fragment shader.

          + "void main()                    \n"     // The entry point for our vertex shader.
          + "{                              \n"
          + "   v_Color = a_Color;          \n"     // Pass the color through to the fragment shader. 
                                                    // It will be interpolated across the triangle.
          + "   gl_Position = u_MVPMatrix   \n"     // gl_Position is a special variable used to store the final position.
          + "               * a_Position;   \n"     // Multiply the vertex by the matrix to get the final point in                                                                   
          + "}                              \n";    // normalized screen coordinates.

        final String fragmentShader =
                "precision mediump float;       \n"     // Set the default precision to medium. We don't need as high of a 
                                                        // precision in the fragment shader.                
              + "uniform vec4 u_Color;          \n"     // This is the color from the vertex shader interpolated across the 
                                                        // triangle per fragment.             
              + "void main()                    \n"     // The entry point for our fragment shader.
              + "{                              \n"
              + "   gl_FragColor = u_Color;     \n"     // Pass the color directly through the pipeline.          
              + "}                              \n";                                                

        // Load in the vertex shader.
        int vertexShaderHandle = GLES20.glCreateShader(GLES20.GL_VERTEX_SHADER);

        if (vertexShaderHandle != 0) 
        {
            // Pass in the shader source.
            GLES20.glShaderSource(vertexShaderHandle, vertexShader);

            // Compile the shader.
            GLES20.glCompileShader(vertexShaderHandle);

            // Get the compilation status.
            final int[] compileStatus = new int[1];
            GLES20.glGetShaderiv(vertexShaderHandle, GLES20.GL_COMPILE_STATUS, compileStatus, 0);

            // If the compilation failed, delete the shader.
            if (compileStatus[0] == 0) 
            {               
                GLES20.glDeleteShader(vertexShaderHandle);
                vertexShaderHandle = 0;
            }
        }

        if (vertexShaderHandle == 0)
        {
            throw new RuntimeException("Error creating vertex shader.");
        }

        // Load in the fragment shader.
        int fragmentShaderHandle = GLES20.glCreateShader(GLES20.GL_FRAGMENT_SHADER);

        if (fragmentShaderHandle != 0) 
        {
            // Pass in the shader source.
            GLES20.glShaderSource(fragmentShaderHandle, fragmentShader);

            // Compile the shader.
            GLES20.glCompileShader(fragmentShaderHandle);

            // Get the compilation status.
            final int[] compileStatus = new int[1];
            GLES20.glGetShaderiv(fragmentShaderHandle, GLES20.GL_COMPILE_STATUS, compileStatus, 0);

            // If the compilation failed, delete the shader.
            if (compileStatus[0] == 0) 
            {               
                GLES20.glDeleteShader(fragmentShaderHandle);
                fragmentShaderHandle = 0;
            }
        }

        if (fragmentShaderHandle == 0)
        {
            throw new RuntimeException("Error creating fragment shader.");
        }

        // Create a program object and store the handle to it.
        int programHandle = GLES20.glCreateProgram();

        if (programHandle != 0) 
        {
            // Bind the vertex shader to the program.
            GLES20.glAttachShader(programHandle, vertexShaderHandle);           

            // Bind the fragment shader to the program.
            GLES20.glAttachShader(programHandle, fragmentShaderHandle);

            // Bind attributes
            GLES20.glBindAttribLocation(programHandle, 0, "a_Position");
            GLES20.glBindAttribLocation(programHandle, 1, "a_Color");

            // Link the two shaders together into a program.
            GLES20.glLinkProgram(programHandle);

            // Get the link status.
            final int[] linkStatus = new int[1];
            GLES20.glGetProgramiv(programHandle, GLES20.GL_LINK_STATUS, linkStatus, 0);

            // If the link failed, delete the program.
            if (linkStatus[0] == 0) 
            {               
                GLES20.glDeleteProgram(programHandle);
                programHandle = 0;
            }
        }

        if (programHandle == 0)
        {
            throw new RuntimeException("Error creating program.");
        }

        // Set program handles. These will later be used to pass in values to the program.
        mMVPMatrixHandle = GLES20.glGetUniformLocation(programHandle, "u_MVPMatrix");
        mPositionHandle = GLES20.glGetAttribLocation(programHandle, "a_Position");
        mColorUniformLocation = GLES20.glGetUniformLocation(programHandle, "u_Color");

        // Tell OpenGL to use this program when rendering.
        GLES20.glUseProgram(programHandle);
    }

    static double mWidth = 0;
    static double mHeight = 0;
    static double mLeft = 0;
    static double mRight = 0;
    static double mTop = 0;
    static double mBottom = 0;
    static double mRatio = 0;
    double screen_width_height_ratio;
    double screen_height_width_ratio;
    final float near = 1.5f;
    final float far = 10.0f;

    double screen_vs_map_horz_ratio = 0;
    double screen_vs_map_vert_ratio = 0;

    @Override
    public void onSurfaceChanged(GL10 unused, int width, int height) {

        // Adjust the viewport based on geometry changes,
        // such as screen rotation
        // Set the OpenGL viewport to the same size as the surface.
        GLES20.glViewport(0, 0, width, height);

        screen_width_height_ratio = (double) width / height;
        screen_height_width_ratio = (double) height / width;

        //Initialize
        if (mRatio == 0){
            mWidth = (double) width;
            mHeight = (double) height;

            //map height to width ratio
            double map_extents_width = default_settings.mbrMaxX - default_settings.mbrMinX;
            double map_extents_height = default_settings.mbrMaxY - default_settings.mbrMinY;
            double map_width_height_ratio = map_extents_width/map_extents_height;
            if (screen_width_height_ratio > map_width_height_ratio){
                mRight = (screen_width_height_ratio * map_extents_height)/2;
                mLeft = -mRight;
                mTop = map_extents_height/2;
                mBottom = -mTop;
            }
            else{
                mRight = map_extents_width/2;
                mLeft = -mRight;
                mTop = (screen_height_width_ratio * map_extents_width)/2;
                mBottom = -mTop;
            }

            mRatio = screen_width_height_ratio;
        }

        if (screen_width_height_ratio != mRatio){
            final double wRatio = width/mWidth;
            final double oldWidth = mRight - mLeft;
            final double newWidth = wRatio * oldWidth;
            final double widthDiff = (newWidth - oldWidth)/2;
            mLeft = mLeft - widthDiff;
            mRight = mRight + widthDiff;

            final double hRatio = height/mHeight;
            final double oldHeight = mTop - mBottom;
            final double newHeight = hRatio * oldHeight;
            final double heightDiff = (newHeight - oldHeight)/2;
            mBottom = mBottom - heightDiff;
            mTop = mTop + heightDiff;

            mWidth = (double) width;
            mHeight = (double) height;

            mRatio = screen_width_height_ratio;
        }

        screen_vs_map_horz_ratio = (mWidth/(mRight-mLeft));
        screen_vs_map_vert_ratio = (mHeight/(mTop-mBottom));

        Matrix.frustumM(mProjectionMatrix, 0, (float)mLeft, (float)mRight, (float)mBottom, (float)mTop, near, far);

    }


    ListIterator<mapLayer> orgNonAssetCatLayersList_it;
    ListIterator<FloatBuffer> mapLayerObjectList_it;
    ListIterator<Byte> mapLayerObjectTypeList_it;
    mapLayer MapLayer;

    @Override
    public void onDrawFrame(GL10 unused) {

        GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);

        drawPreset();

        orgNonAssetCatLayersList_it = default_settings.orgNonAssetCatMappableLayers.listIterator();
        while (orgNonAssetCatLayersList_it.hasNext()) {
            MapLayer = orgNonAssetCatLayersList_it.next();
            if (MapLayer.BatchedPointVBO != null){
            }
            if (MapLayer.BatchedLineVBO != null){
                drawLineString(MapLayer.BatchedLineVBO, MapLayer.lineStringObjColor);
            }
            if (MapLayer.BatchedPolygonVBO != null){
                drawPolygon(MapLayer.BatchedPolygonVBO, MapLayer.polygonObjColor);
            }
        }
    }

    private void drawPreset()
    {
        Matrix.setIdentityM(mModelMatrix, 0);

        // This multiplies the view matrix by the model matrix, and stores the result in the MVP matrix
        // (which currently contains model * view).
        Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);

        // This multiplies the modelview matrix by the projection matrix, and stores the result in the MVP matrix
        // (which now contains model * view * projection).
        Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);

        GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);
    }

    private void drawLineString(final FloatBuffer geometryBuffer, final float[] colorArray)
    {
        // Pass in the position information
        geometryBuffer.position(mPositionOffset);
        GLES20.glVertexAttribPointer(mPositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false, mPositionFloatStrideBytes, geometryBuffer);

        GLES20.glEnableVertexAttribArray(mPositionHandle);

        GLES20.glUniform4f(mColorUniformLocation, colorArray[0], colorArray[1], colorArray[2], 1f);

        GLES20.glLineWidth(2.0f);
        GLES20.glDrawArrays(GLES20.GL_LINES, 0, geometryBuffer.capacity()/mPositionDataSize);
    }

    private void drawPolygon(final FloatBuffer geometryBuffer, final float[] colorArray)
    {
        // Pass in the position information
        geometryBuffer.position(mPositionOffset);
        GLES20.glVertexAttribPointer(mPositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false, mPositionFloatStrideBytes, geometryBuffer);

        GLES20.glEnableVertexAttribArray(mPositionHandle);

        GLES20.glUniform4f(mColorUniformLocation, colorArray[0], colorArray[1], colorArray[2], 1f);

        GLES20.glLineWidth(1.0f);
        GLES20.glDrawArrays(GLES20.GL_LINES, 0, geometryBuffer.capacity()/mPositionDataSize);
    }
}
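
For context, the pinch-zoom wiring mentioned at the top looks roughly like this. Only setScaleFactor() and setEye() come from the renderer above; the class name, field names and the pan handling in this sketch are illustrative assumptions.

import android.content.Context;
import android.opengl.GLSurfaceView;
import android.view.MotionEvent;
import android.view.ScaleGestureDetector;

public class MapSurfaceView extends GLSurfaceView {

    private final vboCustomGLRenderer mRenderer;
    private final ScaleGestureDetector mScaleDetector;
    private float mLastX, mLastY;

    public MapSurfaceView(Context context) {
        super(context);
        setEGLContextClientVersion(2);
        mRenderer = new vboCustomGLRenderer();
        setRenderer(mRenderer);
        setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);

        mScaleDetector = new ScaleGestureDetector(context,
                new ScaleGestureDetector.SimpleOnScaleGestureListener() {
            @Override
            public boolean onScale(ScaleGestureDetector detector) {
                // Zoom about the gesture focal point.
                mRenderer.setScaleFactor(detector.getScaleFactor(),
                        detector.getFocusX(), detector.getFocusY());
                requestRender();
                return true;
            }
        });
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        mScaleDetector.onTouchEvent(event);
        if (!mScaleDetector.isInProgress()
                && event.getActionMasked() == MotionEvent.ACTION_MOVE) {
            // Pan: pass the screen-space delta; setEye() converts it to map units.
            mRenderer.setEye(event.getX() - mLastX, event.getY() - mLastY);
            requestRender();
        }
        mLastX = event.getX();
        mLastY = event.getY();
        return true;
    }
}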

This works well until a certain zoom level is reached, after which panning starts to jump. Testing showed that this is because the float values used for the eye position cannot cope with such small offsets. I keep the eye x and y values as doubles so the shift calculations stay accurate, and only cast them to floats when calling setLookAtM().
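
To put a number on it, a quick standalone check (assuming map coordinates around 152.65, the x range mentioned later in this post) shows that at that magnitude adjacent float values are about 1.5e-5 apart, so any eye shift smaller than that simply disappears in the cast. A float only carries roughly 7 significant decimal digits.

public class FloatPrecisionDemo {
    public static void main(String[] args) {
        double eyeX = 152.65;
        double tinyPan = 1e-7;  // a pan step far smaller than one float ulp at this magnitude

        // Spacing between adjacent float values near 152.65 is about 1.5e-5,
        // so shifts below that are quantized away once the double is cast to float.
        System.out.println(Math.ulp((float) eyeX));                    // ~1.5e-5
        System.out.println((float) eyeX == (float) (eyeX + tinyPan));  // true: the pan is lost
    }
}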

So I need to change how zooming works. Instead of zooming through the projection, I am thinking of scaling the model larger or smaller. The setScaleFactor() function in my code would change by dropping the projection and eye-shift work. There is a Matrix.scaleM(m, offset, x, y, z) function, but I am not sure how or where to apply it. I could use some advice on how to achieve this.

[Edit] 24 July 2013: I tried changing setScaleFactor() as follows:

public void setScaleFactor(float scaleFactor, float gdx, float gdy){
    mScaleFactor *= scaleFactor;
}

and drawPreset():

private void drawPreset()
{
    Matrix.setIdentityM(mModelMatrix, 0);

        //*****Added scaleM
    Matrix.scaleM(mModelMatrix, 0, (float)mScaleFactor, (float)mScaleFactor, 1.0f);

    // This multiplies the view matrix by the model matrix, and stores the result in the MVP matrix
    // (which currently contains model * view).
    Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);

    // This multiplies the modelview matrix by the projection matrix, and stores the result in the MVP matrix
    // (which now contains model * view * projection).
    Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);

    GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);
}

Now, as soon as I zoom, the image slides off the screen; in fact I found it ends up just off to the right. I can still pan over to it.

I am still not sure what I should be scaling to zoom: the model, the view, or the model-view?
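
Presumably the image slides away because Matrix.scaleM() scales about the world origin, and with data sitting around x ≈ 152.6–152.7 the whole model gets pushed sideways as it grows. A minimal sketch of scaling the model about a pivot instead, where pivotX/pivotY are hypothetical world coordinates of the zoom focus (not something from the code above):

private void setModelZoom(double pivotX, double pivotY) {
    // Matrix.* calls post-multiply, so this builds M = T(pivot) * S * T(-pivot):
    // the pivot stays fixed while everything else scales around it.
    Matrix.setIdentityM(mModelMatrix, 0);
    Matrix.translateM(mModelMatrix, 0, (float) pivotX, (float) pivotY, 0.0f);
    Matrix.scaleM(mModelMatrix, 0, (float) mScaleFactor, (float) mScaleFactor, 1.0f);
    Matrix.translateM(mModelMatrix, 0, (float) -pivotX, (float) -pivotY, 0.0f);
}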

1 Answer (score: 1):

I found that if you bring the center of the model back to the origin (0, 0), it lets you extend how far you can zoom. My x-coordinate data lies between 152.6 and 152.7.

Bringing it back to the origin with an offset of 152.65 has to be applied to the data before it is loaded into the float buffers.

The data then spans only 0.1 overall, or 0.05 either side of zero, which gives much better precision in the trailing digits of the float values.
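
A minimal sketch of that re-centering step, applied while filling the vertex buffer. The method name and the input list are illustrative; the offsets reuse the default_settings map extents from the renderer, and java.nio.ByteBuffer/ByteOrder/FloatBuffer and java.util.List imports are assumed.

// Center offsets taken from the map extents (the same center the eye starts at).
private static final double OFFSET_X = default_settings.mbrMinX + ((default_settings.mbrMaxX - default_settings.mbrMinX) / 2);
private static final double OFFSET_Y = default_settings.mbrMinY + ((default_settings.mbrMaxY - default_settings.mbrMinY) / 2);

private FloatBuffer buildCenteredBuffer(List<double[]> points) {  // each point is {x, y, z}
    FloatBuffer buffer = ByteBuffer
            .allocateDirect(points.size() * 3 * 4)                // 3 floats per vertex, 4 bytes each
            .order(ByteOrder.nativeOrder())
            .asFloatBuffer();
    for (double[] p : points) {
        // Subtract in double precision, then cast: the residual (about +/-0.05 in x)
        // keeps far more significant digits once it becomes a float.
        buffer.put((float) (p[0] - OFFSET_X));
        buffer.put((float) (p[1] - OFFSET_Y));
        buffer.put((float) p[2]);
    }
    buffer.position(0);
    return buffer;
}

If the geometry is shifted like this, the eye, the look-at point and the extents used to build the frustum need the same offsets subtracted, otherwise the camera is left looking at the old coordinates.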