I'm new to OpenGL ES 2.0 on Android and I want to build a simple app to develop my skills. I'm having trouble projecting positions. I draw a few squares in my app, and when the screen is touched I want to detect which square was hit. Everything is simple as long as I don't use a Model-View-Projection matrix, but once I use the MVP matrix I just don't understand how it works. Here is some code. In the AngelicRenderer class:
```java
@Override
public void onDrawFrame(GL10 unused) {
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);

    // Set the camera position (View matrix)
    Matrix.setLookAtM(mViewMatrix, 0, 0f, 0f, 3f, 0f, 0f, 0f, 0f, 1f, 0f);

    // Calculate the projection and view transformation
    Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mViewMatrix, 0);

    lock.lock();
    try {
        for (Entry<?, List<AngelicSquare>> squares : getObjects()) {
            for (AngelicSquare square : squares.getValue()) {
                square.draw(mMVPMatrix);
            }
        }
    } finally {
        lock.unlock();
    }
}

@Override
public void onSurfaceChanged(GL10 unused, int width, int height) {
    GLES20.glViewport(0, 0, width, height);

    float ratio = (float) width / height;
    screenWidth = width;
    screenHeight = height;

    // this projection matrix is applied to object coordinates
    // in the onDrawFrame() method
    Matrix.frustumM(mProjectionMatrix, 0, -ratio, ratio, -1f, 1f, 1f, 10f);
}
```
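If I understand correctly, this matrix takes a square's vertices into clip space, and after the divide by w they end up in normalized device coordinates in [-1, 1]. Just to illustrate what I think mMVPMatrix does to a single point (this snippet is only an illustration, not part of my real code):

```java
// Illustration only: what I think mMVPMatrix does to one vertex
float x = 0.5f, y = 0.5f;                   // example object-space position
float[] vertex = { x, y, 0f, 1f };          // w = 1 for a point
float[] clip = new float[4];
Matrix.multiplyMV(clip, 0, mMVPMatrix, 0, vertex, 0);
float ndcX = clip[0] / clip[3];             // normalized device coordinates in [-1, 1]
float ndcY = clip[1] / clip[3];
```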
In the AngelicView class, in onTouchEvent:
```java
for (Entry<?, List<AngelicSquare>> squares : devilishRenderer.getObjects()) {
    for (AngelicSquare square : squares.getValue()) {
        float[] coords = square.getPosition();
        float[] size = square.getSize();

        float[] touchPoint = { e.getX(), e.getY(), 0.0f, 0.0f };
        float[] touchPointNew = ViewCoordsToGLCoords(touchPoint);

        float[] topLeft = { coords[0], coords[1], 0.0f, 0.0f };
        Matrix.multiplyMV(topLeft, 0, devilishRenderer.getMVPMatrix(), 0, topLeft, 0);

        float[] bottomRight = { coords[0] + size[0], coords[1] - size[1], 0.0f, 0.0f };
        Matrix.multiplyMV(bottomRight, 0, devilishRenderer.getMVPMatrix(), 0, bottomRight, 0);

        Matrix.multiplyMV(touchPoint, 0, devilishRenderer.getMVPMatrix(), 0, touchPointNew, 0);

        if (touchPoint[0] > topLeft[0] && touchPoint[1] < topLeft[1]
                && touchPoint[0] < bottomRight[0] && touchPoint[1] > bottomRight[1]) {
            hookedSquare = square;
        }
    }
}
```
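ViewCoordsToGLCoords isn't shown here; the idea is just to map the touch position from pixels into the [-1, 1] range. A simplified sketch of that conversion (assuming screenWidth and screenHeight hold the values saved in onSurfaceChanged):

```java
// Simplified sketch: map touch pixels to GL-style coordinates in [-1, 1]
private float[] ViewCoordsToGLCoords(float[] touchPoint) {
    float[] result = new float[4];
    result[0] = 2.0f * touchPoint[0] / screenWidth - 1.0f;   // x: [0, width]  -> [-1, 1]
    result[1] = 1.0f - 2.0f * touchPoint[1] / screenHeight;  // y: [0, height] -> [1, -1] (screen y grows downward)
    result[2] = touchPoint[2];
    result[3] = touchPoint[3];
    return result;
}
```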
How do I project screen coordinates into the OpenGL coordinate system using the Model-View-Projection matrix?
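My guess is that I need the reverse mapping: convert the touch position to normalized device coordinates, multiply by the inverse of the MVP matrix (or use something like GLU.gluUnProject) and divide by w. Roughly along these lines, but I'm not sure this is the right way to apply it:

```java
// Rough sketch of the "unproject" step I think I'm missing (may well be wrong)
float[] invMVP = new float[16];
if (Matrix.invertM(invMVP, 0, devilishRenderer.getMVPMatrix(), 0)) {
    float[] ndc = {
        2.0f * e.getX() / getWidth() - 1.0f,    // x in [-1, 1]; getWidth()/getHeight() are the view's size
        1.0f - 2.0f * e.getY() / getHeight(),   // y in [-1, 1], flipped because screen y grows downward
        0.0f,                                   // depth of the plane the squares sit on?
        1.0f                                    // w = 1 for a point
    };
    float[] world = new float[4];
    Matrix.multiplyMV(world, 0, invMVP, 0, ndc, 0);
    world[0] /= world[3];                       // perspective divide
    world[1] /= world[3];
    // compare (world[0], world[1]) with the squares' untransformed coordinates?
}
```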