I'm working on a drawing application, and I've noticed a significant difference between the textures loaded on 32-bit iPads and 64-bit iPads.
Here is the texture as drawn on a 32-bit iPad:
Here is the texture as drawn on a 64-bit iPad:
The 64-bit result is the one I want, but it looks like it may be losing some data?
I create the default brush texture with the following code:
UIGraphicsBeginImageContext(CGSizeMake(64, 64));
CGContextRef defBrushTextureContext = UIGraphicsGetCurrentContext();
UIGraphicsPushContext(defBrushTextureContext);

size_t num_locations = 3;
CGFloat locations[3] = { 0.0, 0.8, 1.0 };
CGFloat components[12] = { 1.0, 1.0, 1.0, 1.0,
                           1.0, 1.0, 1.0, 1.0,
                           1.0, 1.0, 1.0, 0.0 };
CGColorSpaceRef myColorspace = CGColorSpaceCreateDeviceRGB();
CGGradientRef myGradient = CGGradientCreateWithColorComponents(myColorspace, components, locations, num_locations);

CGPoint myCentrePoint = CGPointMake(32, 32);
float myRadius = 20;
CGGradientDrawingOptions options = kCGGradientDrawsBeforeStartLocation | kCGGradientDrawsAfterEndLocation;
CGContextDrawRadialGradient(UIGraphicsGetCurrentContext(), myGradient, myCentrePoint,
                            0, myCentrePoint, myRadius,
                            options);

CFRelease(myGradient);
CFRelease(myColorspace);
UIGraphicsPopContext();

[self setBrushTexture:UIGraphicsGetImageFromCurrentImageContext()];
UIGraphicsEndImageContext();
And here is the code that actually sets the brush texture:
-(void) setBrushTexture:(UIImage*)brushImage{
    // save our current texture.
    currentTexture = brushImage;
    // first, delete the old texture if needed
    if (brushTexture){
        glDeleteTextures(1, &brushTexture);
        brushTexture = 0;
    }
    // fetch the cgimage for us to draw into a texture
    CGImageRef brushCGImage = brushImage.CGImage;
    // Make sure the image exists
    if(brushCGImage) {
        // Get the width and height of the image
        GLint width = CGImageGetWidth(brushCGImage);
        GLint height = CGImageGetHeight(brushCGImage);
        // Texture dimensions must be a power of 2. If you write an application that allows users to supply an image,
        // you'll want to add code that checks the dimensions and takes appropriate action if they are not a power of 2.
        // Allocate memory needed for the bitmap context
        GLubyte* brushData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte));
        // Use the bitmap creation function provided by the Core Graphics framework.
        CGContextRef brushContext = CGBitmapContextCreate(brushData, width, height, 8, width * 4, CGImageGetColorSpace(brushCGImage), kCGImageAlphaPremultipliedLast);
        // After you create the context, you can draw the image to the context.
        CGContextDrawImage(brushContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), brushCGImage);
        // You don't need the context at this point, so release it to avoid memory leaks.
        CGContextRelease(brushContext);
        // Use OpenGL ES to generate a name for the texture.
        glGenTextures(1, &brushTexture);
        // Bind the texture name.
        glBindTexture(GL_TEXTURE_2D, brushTexture);
        // Set the texture parameters to use a minifying filter and a linear filter (weighted average)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        // Specify a 2D texture image, providing a pointer to the image data in memory
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData);
        // Release the image data; it's no longer needed
        free(brushData);
    }
}
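The comment in the code above mentions adding a power-of-two check for user-supplied images. A minimal sketch of such a check (not part of the original code; the helper name is hypothetical):

// Hypothetical helper: YES if a texture dimension is a power of two.
// The 64x64 default brush above passes this check trivially.
static BOOL isPowerOfTwo(GLint value) {
    return value > 0 && (value & (value - 1)) == 0;
}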
Update
I've changed the CGFloats to GLfloats, with no luck. Maybe there is a problem with this rendering code?
if(frameBuffer){
    // draw the stroke element
    [self prepOpenGLStateForFBO:frameBuffer];
    [self prepOpenGLBlendModeForColor:element.color];
    CheckGLError();
}

// find our screen scale so that we can convert from
// points to pixels
GLfloat scale = self.contentScaleFactor;

// fetch the vertex data from the element
struct Vertex* vertexBuffer = [element generatedVertexArrayWithPreviousElement:previousElement forScale:scale];

glLineWidth(2);

// if the element has any data, then draw it
if(vertexBuffer){
    glVertexPointer(2, GL_FLOAT, sizeof(struct Vertex), &vertexBuffer[0].Position[0]);
    glColorPointer(4, GL_FLOAT, sizeof(struct Vertex), &vertexBuffer[0].Color[0]);
    glTexCoordPointer(2, GL_FLOAT, sizeof(struct Vertex), &vertexBuffer[0].Texture[0]);
    glDrawArrays(GL_TRIANGLES, 0, (GLint)[element numberOfSteps] * (GLint)[element numberOfVerticesPerStep]);
    CheckGLError();
}

if(frameBuffer){
    [self unprepOpenGLState];
}
The Vertex struct looks like this:
struct Vertex{
    GLfloat Position[2]; // x,y position
    GLfloat Color[4];    // rgba color
    GLfloat Texture[2];  // x,y texture coord
};
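As background for the CGFloat-to-GLfloat change mentioned above: on arm64, CGFloat is an 8-byte double while GLfloat is always a 4-byte float, so any buffer declared with CGFloat would no longer match the GL_FLOAT strides in the draw call. A quick, purely illustrative sanity check:

// Illustrative only: prints 8 and 4 on a 64-bit device, 4 and 4 on a 32-bit device.
NSLog(@"sizeof(CGFloat) = %zu, sizeof(GLfloat) = %zu", sizeof(CGFloat), sizeof(GLfloat));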
Update
It turns out the problem isn't actually 32-bit vs. 64-bit, but a difference between the A7 GPU and its GL driver. I discovered this by running both the 32-bit build and the 64-bit build on a 64-bit iPad: in both versions of the app the texture ends up looking exactly the same.
Answer 0 (score: 1)
I would check two things.
1. Check the alpha blending logic (or options) in OpenGL.
2. Check the interpolation logic, which should be proportional to the drag speed.
It looks like your drawing application is missing the second one, or it isn't working correctly.
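For reference, the brush texture in the question is created with kCGImageAlphaPremultipliedLast, so its pixel data is premultiplied. A minimal sketch of the two usual blend setups (not from the original answer; shown for the fixed-function OpenGL ES 1.x pipeline the question's draw code uses):

// Sketch only.
glEnable(GL_BLEND);
// For premultiplied-alpha textures (e.g. bitmaps drawn with kCGImageAlphaPremultipliedLast):
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
// For straight (non-premultiplied) alpha, this would be used instead:
// glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);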
Answer 1 (score: 1)
I don't think the problem is the texture; it's the framebuffer into which you composite the stroke elements.
Your code snippet looks like it draws segment by segment, so several overlapping segments are blended on top of each other. If the framebuffer has a low color depth, artifacts appear, particularly in the brighter parts of the blended areas.
You can inspect the framebuffer with Xcode's OpenGL ES debugger. Activate it by running the code on the device and then clicking the "Capture OpenGL ES Frame" button.
Select the "glBindFramebuffer" command in the Debug Navigator and look at the framebuffer description in the console area.
The interesting part is GL_FRAMEBUFFER_INTERNAL_FORMAT.
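If the internal format turns out to be something like RGB565 rather than RGBA8, one possible fix is to request an 8-bit-per-channel drawable. This is only a sketch, assuming the view is backed by a CAEAGLLayer (the original answer does not show this setup):

// Sketch: ask for a full 8-bit-per-channel color buffer on the CAEAGLLayer.
// kEAGLDrawableProperty* and kEAGLColorFormatRGBA8 are standard EAGL constants;
// the surrounding view/layer setup is assumed.
CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
eaglLayer.opaque = YES;
eaglLayer.drawableProperties = @{
    kEAGLDrawablePropertyRetainedBacking : @YES,
    kEAGLDrawablePropertyColorFormat     : kEAGLColorFormatRGBA8
};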
Answer 2 (score: 1)
In my view, the problem is the blend mode you use when compositing the different drawing passes. I assume the texture you upload is used only for display, and that you keep the image contents in memory where you composite the different drawing operations; or do you read the image contents back with glReadPixels?
Basically, your second image looks like a straight-alpha image being treated as a premultiplied-alpha image.
To make sure it isn't a texture problem, save the NSImage to a file before uploading it to the texture, and check whether the image is actually correct.
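A minimal sketch of that file-based check (the answer says NSImage, but on iOS the brush image is a UIImage; the file name here is an arbitrary choice):

// Sketch only: dump the brush image to disk so it can be inspected before it is
// uploaded as a texture. UIImagePNGRepresentation and NSTemporaryDirectory are
// standard iOS APIs.
NSData *pngData = UIImagePNGRepresentation(brushImage);
NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"brushTexture.png"];
[pngData writeToFile:path atomically:YES];
NSLog(@"Saved brush texture to %@", path);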