CVOpenGLESTextureCacheCreateTextureFromImage instead of glReadPixels

Date: 2012-10-07 16:14:06

Tags: iphone objective-c opengl-es-2.0 glreadpixels

For three full days now I have been trying to improve the performance of my glReadPixels-based AVAssetWriter pipeline. I have gone through Apple's RosyWriter and CameraRipple sample code as well as Brad Larson's GPUImage, but I'm still scratching my head. I have also been trying the implementations from these links:

rendering-to-a-texture-with-ios-5-texture-cache-api

faster-alternative-to-glreadpixels-in-iphone-opengl-es-2-0

...and more, but no matter what I try, I can't get it to work. The video either never gets processed, comes out black, or I get assorted errors. I won't go through all of those here.

To simplify my question, I thought I would focus on just grabbing a snapshot from the on-screen OpenGL preview FBO. If I can get a single working implementation of that, I should be able to work out the rest. The implementation I tried, from the first link above, looks like this:

CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, [glView context], 
                           NULL, &texCacheRef);

CFDictionaryRef empty = CFDictionaryCreate(kCFAllocatorDefault,
                           NULL,
                           NULL,
                           0,
                           &kCFTypeDictionaryKeyCallBacks,
                           &kCFTypeDictionaryValueCallBacks);

CFMutableDictionaryRef attrs = CFDictionaryCreateMutable(kCFAllocatorDefault,
                                  1,
                                  &kCFTypeDictionaryKeyCallBacks,
                                  &kCFTypeDictionaryValueCallBacks);

CFDictionarySetValue(attrs,
                     kCVPixelBufferIOSurfacePropertiesKey,
                     empty);

CVPixelBufferRef renderTarget = NULL;
CVPixelBufferCreate(kCFAllocatorDefault,
                    width,
                    height,
                    kCVPixelFormatType_32BGRA,
                    attrs,
                    &renderTarget);

CVOpenGLESTextureRef renderTexture;
CVOpenGLESTextureCacheCreateTextureFromImage (kCFAllocatorDefault,
                                              texCacheRef,
                                              renderTarget,
                                              NULL,
                                              GL_TEXTURE_2D,
                                              GL_RGBA,
                                              width,
                                              height,
                                              GL_BGRA,
                                              GL_UNSIGNED_BYTE,
                                              0,
                                              &renderTexture);

CFRelease(attrs);
CFRelease(empty);
glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

GLuint renderFrameBuffer;
glGenFramebuffers(1, &renderFrameBuffer);   // was glGenRenderbuffers: a framebuffer name must come from glGenFramebuffers
glBindFramebuffer(GL_FRAMEBUFFER, renderFrameBuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);

//Is this really how I pull pixels off my context?
CVPixelBufferLockBaseAddress(renderTarget, 0);
buffer = (GLubyte *)CVPixelBufferGetBaseAddress(renderTarget);
CVPixelBufferUnlockBaseAddress(renderTarget, 0);

What exactly is supposed to happen here? My buffer ends up being a bunch of zeros, so I guess I need to do something extra to pull the pixels from the context? ...Or am I missing something else?

What I'm trying to achieve is something faster than what I use today:

int pixelsCount = w * h;
buffer = (GLubyte *) malloc(pixelsCount * 4);
glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

1 answer:

Answer 0 (score: 1)

As Brad pointed out, I had misunderstood the concept and wasn't doing any actual rendering. It worked fine once I added that.
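In other words, the code in the question sets up the texture-backed FBO but never draws into it, so the IOSurface-backed pixel buffer stays zeroed. The missing piece was along these lines — a sketch only, since it assumes an active iOS OpenGL ES 2.0 context and the variables from the question, and is not runnable on its own:

```c
// After the glFramebufferTexture2D call above, something must actually
// be rendered into the FBO before the pixel buffer contains data.
glBindFramebuffer(GL_FRAMEBUFFER, renderFrameBuffer);
glViewport(0, 0, width, height);

// ... issue the normal draw calls for the scene here ...

// Ensure the GPU has finished writing into the IOSurface-backed texture
// before touching its memory on the CPU.
glFinish();

CVPixelBufferLockBaseAddress(renderTarget, 0);
GLubyte *pixels = (GLubyte *)CVPixelBufferGetBaseAddress(renderTarget);
// ... consume pixels; the row stride is CVPixelBufferGetBytesPerRow(renderTarget) ...
CVPixelBufferUnlockBaseAddress(renderTarget, 0);
```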