Reading the value at a tap point with glReadPixels? OpenGL ES 2.0, iOS

Asked: 2013-10-21 09:44:31

Tags: ios objective-c opengl-es glkit glreadpixels

Suppose I have a square defined as follows:

    typedef struct {
        float Position[3];
        float Color[4];
    } Vertex;

    const Vertex Vertices[] = {
        {{2, 0, 0}, {1, 0, 0, 1}},
        {{4, 0, 0}, {1, 0, 0, 1}},
        {{4, 2, 0}, {1, 0, 0, 1}},
        {{2, 2, 0}, {1, 0, 0, 1}}
    };

    const GLubyte Indices[] = {
        0, 1, 2,
        2, 3, 0
    };

I am applying the following projection and modelview matrices:

    - (void)update {

        // Projection matrix.
        float aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height);
        projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(_projectionAngle), aspect, 4.0f, 10.f);
        self.effect.transform.projectionMatrix = projectionMatrix;

        // Modelview matrix.
        modelViewMatrix = GLKMatrix4MakeTranslation(_xTranslation, _yTranslation, -7.0);
        self.effect.transform.modelviewMatrix = modelViewMatrix;

    }

I now want to read the pixel color of the object the user taps on screen. I am trying to combine GLKMathUnproject with glReadPixels, but glReadPixels returns the wrong color value for the tap point:

    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
        UITouch *touch = [touches anyObject];
        CGPoint tapLoc = [touch locationInView:self.view];

        bool testResult;

        GLint viewport[4];
        glGetIntegerv(GL_VIEWPORT, viewport);

        GLKVector3 nearPt = GLKMathUnproject(GLKVector3Make(tapLoc.x, (tapLoc.y - 1024) * -1, 0.0), modelViewMatrix, projectionMatrix, &viewport[0], &testResult);

        GLKVector3 farPt = GLKMathUnproject(GLKVector3Make(tapLoc.x, (tapLoc.y - 1024) * -1, 1.0), modelViewMatrix, projectionMatrix, &viewport[0], &testResult);

        farPt = GLKVector3Subtract(farPt, nearPt);

        GLubyte pixelColor[4];
        glReadPixels(farPt.v[0], farPt.v[1], 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, &pixelColor[0]);
        NSLog(@"pixelColor %u %u %u %u", pixelColor[0], pixelColor[1], pixelColor[2], pixelColor[3]);

    }

Can someone suggest how I can get the pixel color accurately?

2 Answers:

Answer 0 (score: 1)

Take a look at cocos2d and see how they handle this kind of check.

Here is some code I use to get the color of the tapped pixel. Hope it helps.

I've actually changed the code to run on the whole scene, but you can do the same thing on a specific node. If you do, make sure you move it to {0, 0}, set its anchor point to {0, 0}, and give it an identity transform so it renders correctly, then reset everything when you're done (a sketch of that node-specific variant follows the code below).

    CCScene *runningScene = [CCDirector sharedDirector].runningScene;
    CGRect boundingBox = runningScene.boundingBox;
    CCRenderTexture *renderTexture = [[CCRenderTexture alloc] initWithWidth:(int)CGRectGetWidth(boundingBox)
                                                                     height:(int)CGRectGetHeight(boundingBox)
                                                                pixelFormat:kCCTexture2DPixelFormat_RGBA8888];
    [renderTexture begin];

    [runningScene visit];

    // Get the colour of the pixel at the touched point
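    // Note: `point` is assumed to be the touch location already converted to
    // cocos2d (bottom-left-origin) coordinates, e.g. via [[CCDirector sharedDirector] convertToGL:...].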
    CGPoint location = ccp((point.x - CGRectGetMinX(boundingBox)) * CC_CONTENT_SCALE_FACTOR(),
                           (point.y - CGRectGetMinY(boundingBox)) * CC_CONTENT_SCALE_FACTOR());
    UInt8 data[4];
    glReadPixels((GLint)location.x,(GLint)location.y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, data);

    [renderTexture end];

    // data[0] = R
    // data[1] = G
    // data[2] = B
    // data[3] = A
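For the node-specific variant mentioned above, here is a minimal, untested sketch. It assumes the same `renderTexture`, `location`, and `data` variables from the snippet above and a cocos2d 2.x CCNode called `node`; the property names are the standard CCNode ones, but adapt as needed:

    // Save the node's current layout, then neutralise it so it draws at the
    // render texture's origin with no rotation or scaling.
    CGPoint oldPosition = node.position;
    CGPoint oldAnchor   = node.anchorPoint;
    float   oldRotation = node.rotation;
    float   oldScaleX   = node.scaleX;
    float   oldScaleY   = node.scaleY;

    node.position    = ccp(0, 0);
    node.anchorPoint = ccp(0, 0);
    node.rotation    = 0.0f;
    node.scaleX      = 1.0f;
    node.scaleY      = 1.0f;

    [renderTexture begin];
    [node visit];   // draw only this node into the render texture
    glReadPixels((GLint)location.x, (GLint)location.y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, data);
    [renderTexture end];

    // Restore the node's original layout.
    node.position    = oldPosition;
    node.anchorPoint = oldAnchor;
    node.rotation    = oldRotation;
    node.scaleX      = oldScaleX;
    node.scaleY      = oldScaleY;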

Answer 1 (score: 1)

Guy Cogus' code looks like the right approach, but you should understand why what you posted doesn't work. The point of unprojecting a vector with GLKMathUnproject is to take it *out* of screen space and into object space (the coordinate system your geometry is in before it's transformed by the modelview or projection matrices), not into screen space, which is where glReadPixels operates (by definition, since you're reading pixels).
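In other words, the coordinates you pass to glReadPixels should simply be the tap location converted to framebuffer (window) pixels, with no unprojection at all. Here is a minimal sketch of that conversion; `readPixelAtTap:` is a hypothetical helper, and it assumes the framebuffer you want to read from is still bound and its contents are still valid at the time of the call (in practice you may need to do the read inside your draw method, or render into an offscreen buffer as in the cocos2d answer above):

    - (void)readPixelAtTap:(CGPoint)tapLoc {
        // Convert from UIKit points (top-left origin) to framebuffer pixels
        // (bottom-left origin), accounting for the Retina scale factor.
        CGFloat scale = self.view.contentScaleFactor;

        GLint viewport[4];
        glGetIntegerv(GL_VIEWPORT, viewport);

        GLint x = (GLint)(tapLoc.x * scale);
        GLint y = viewport[3] - (GLint)(tapLoc.y * scale);   // flip y: viewport[3] is the height in pixels

        GLubyte pixelColor[4];
        glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixelColor);
        NSLog(@"pixelColor %u %u %u %u",
              pixelColor[0], pixelColor[1], pixelColor[2], pixelColor[3]);
    }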