Do you know Apple's sample code with the CameraRipple effect? Well, I'm trying to record the camera output in a file after OpenGL has done all its cool water effects.

I did it with glReadPixels, where I read all the pixels into a void * buffer, created a CVPixelBufferRef and appended it to an AVAssetWriterInputPixelBufferAdaptor, but it was too slow, because glReadPixels takes an enormous amount of time. I found that using an FBO and a texture cache you can do the same thing, only much faster.
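For context, the slow path looked roughly like this: a minimal sketch, assuming the same _screenWidth/_screenHeight, pixelAdapter, currentTime and frameLength variables used below, that the pixel buffer has no row padding, and that the EXT_read_format_bgra extension is available (it is on iOS):

// Slow readback path: glReadPixels stalls the pipeline every frame.
CVPixelBufferRef readbackBuffer = NULL;
CVPixelBufferCreate(kCFAllocatorDefault,
                    (int)_screenWidth,
                    (int)_screenHeight,
                    kCVPixelFormatType_32BGRA,
                    NULL,
                    &readbackBuffer);
CVPixelBufferLockBaseAddress(readbackBuffer, 0);
// GL_BGRA readback relies on EXT_read_format_bgra; production code should also
// check CVPixelBufferGetBytesPerRow in case the buffer has row padding.
glReadPixels(0, 0, (int)_screenWidth, (int)_screenHeight, GL_BGRA, GL_UNSIGNED_BYTE,
             CVPixelBufferGetBaseAddress(readbackBuffer));
CVPixelBufferUnlockBaseAddress(readbackBuffer, 0);
if ([pixelAdapter appendPixelBuffer:readbackBuffer withPresentationTime:currentTime]) {
    currentTime = CMTimeAdd(currentTime, frameLength);
}
CVPixelBufferRelease(readbackBuffer);

Here is the code in the drawInRect method that Apple uses: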
CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, (__bridge void *)_context, NULL, &coreVideoTextureCashe);
if (err)
{
    NSAssert(NO, @"Error at CVOpenGLESTextureCacheCreate %d", err);
}

CFDictionaryRef empty; // empty value for attr value.
CFMutableDictionaryRef attrs2;
empty = CFDictionaryCreate(kCFAllocatorDefault, // our empty IOSurface properties dictionary
                           NULL,
                           NULL,
                           0,
                           &kCFTypeDictionaryKeyCallBacks,
                           &kCFTypeDictionaryValueCallBacks);
attrs2 = CFDictionaryCreateMutable(kCFAllocatorDefault,
                                   1,
                                   &kCFTypeDictionaryKeyCallBacks,
                                   &kCFTypeDictionaryValueCallBacks);
CFDictionarySetValue(attrs2,
                     kCVPixelBufferIOSurfacePropertiesKey,
                     empty);

//CVPixelBufferPoolCreatePixelBuffer (NULL, [assetWriterPixelBufferInput pixelBufferPool], &renderTarget);
CVPixelBufferRef pixiel_bufer4e = NULL;
CVPixelBufferCreate(kCFAllocatorDefault,
                    (int)_screenWidth,
                    (int)_screenHeight,
                    kCVPixelFormatType_32BGRA,
                    attrs2,
                    &pixiel_bufer4e);

CVOpenGLESTextureRef renderTexture;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                             coreVideoTextureCashe, pixiel_bufer4e,
                                             NULL, // texture attributes
                                             GL_TEXTURE_2D,
                                             GL_RGBA, // opengl format
                                             (int)_screenWidth,
                                             (int)_screenHeight,
                                             GL_BGRA, // native iOS format
                                             GL_UNSIGNED_BYTE,
                                             0,
                                             &renderTexture);
CFRelease(attrs2);
CFRelease(empty);

glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);

CVPixelBufferLockBaseAddress(pixiel_bufer4e, 0);
if ([pixelAdapter appendPixelBuffer:pixiel_bufer4e withPresentationTime:currentTime]) {
    float result = currentTime.value;
    NSLog(@"here is the data and the current time is: %f", result);
    currentTime = CMTimeAdd(currentTime, frameLength);
}
CVPixelBufferUnlockBaseAddress(pixiel_bufer4e, 0);
CVPixelBufferRelease(pixiel_bufer4e);
CFRelease(renderTexture);
CFRelease(coreVideoTextureCashe);
It records the video, and it does it really fast, but the video is just black. I think the texture cache ref isn't right, or I'm filling it wrong.
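One way to narrow down a black output like this is to check every return value along the way; a minimal sketch, using the same variables as above (the calls themselves are unchanged):

// Capture the CVReturn from the texture creation instead of ignoring it.
CVReturn texErr = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                               coreVideoTextureCashe,
                                                               pixiel_bufer4e,
                                                               NULL,
                                                               GL_TEXTURE_2D,
                                                               GL_RGBA,
                                                               (int)_screenWidth,
                                                               (int)_screenHeight,
                                                               GL_BGRA,
                                                               GL_UNSIGNED_BYTE,
                                                               0,
                                                               &renderTexture);
if (texErr != kCVReturnSuccess || renderTexture == NULL) {
    NSLog(@"CVOpenGLESTextureCacheCreateTextureFromImage failed: %d", texErr);
}

// After glFramebufferTexture2D, verify the attachment actually completed;
// an incomplete FBO silently renders nothing, which shows up as black frames.
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE) {
    NSLog(@"Incomplete framebuffer: %x", status);
}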
As an update, here is another way I tried. I must be missing something. In viewDidLoad, after I set up the OpenGL context, I do this:
CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, (__bridge void *)_context, NULL, &coreVideoTextureCashe);
if (err)
{
    NSAssert(NO, @"Error at CVOpenGLESTextureCacheCreate %d", err);
}

// creates the pixel buffer
pixel_buffer = NULL;
CVPixelBufferPoolCreatePixelBuffer(NULL, [pixelAdapter pixelBufferPool], &pixel_buffer);

CVOpenGLESTextureRef renderTexture;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, coreVideoTextureCashe, pixel_buffer,
                                             NULL, // texture attributes
                                             GL_TEXTURE_2D,
                                             GL_RGBA, // opengl format
                                             (int)screenWidth,
                                             (int)screenHeight,
                                             GL_BGRA, // native iOS format
                                             GL_UNSIGNED_BYTE,
                                             0,
                                             &renderTexture);

glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);
Then in drawInRect I do this:
if (isRecording && writerInput.readyForMoreMediaData) {
    CVPixelBufferLockBaseAddress(pixel_buffer, 0);
    if ([pixelAdapter appendPixelBuffer:pixel_buffer withPresentationTime:currentTime]) {
        currentTime = CMTimeAdd(currentTime, frameLength);
    }
    CVPixelBufferUnlockBaseAddress(pixel_buffer, 0);
    CVPixelBufferRelease(pixel_buffer);
}
However it crashes with EXC_BAD_ACCESS on renderTexture, which is not nil but 0x000000001.
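One thing that can bite in this viewDidLoad variant: an AVAssetWriterInputPixelBufferAdaptor's pixelBufferPool is nil until the writer has actually started writing, so if this code runs before startWriting / startSessionAtSourceTime:, pixel_buffer stays NULL and the texture created from it is garbage. A hedged guard, as a sketch with the same variable names:

// The adaptor only vends a pool once -[AVAssetWriter startWriting] has been called.
CVPixelBufferPoolRef bufferPool = [pixelAdapter pixelBufferPool];
if (bufferPool == NULL) {
    NSLog(@"pixelBufferPool is nil - start the writer session before creating the buffer");
    return;
}
CVReturn poolErr = CVPixelBufferPoolCreatePixelBuffer(NULL, bufferPool, &pixel_buffer);
if (poolErr != kCVReturnSuccess || pixel_buffer == NULL) {
    NSLog(@"CVPixelBufferPoolCreatePixelBuffer failed: %d", poolErr);
    return;
}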
UPDATE
With the code below I actually managed to pull a video file, but there are some green and red flashes in it. I use the BGRA pixelFormatType.

Here I create the texture cache:
CVReturn err2 = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, (__bridge void *)_context, NULL, &coreVideoTextureCashe);
if (err2)
{
    NSLog(@"Error at CVOpenGLESTextureCacheCreate %d", err2);
    return;
}
Then in drawInRect I call this:
if (isRecording && writerInput.readyForMoreMediaData) {
    [self cleanUpTextures];

    CFDictionaryRef empty; // empty value for attr value.
    CFMutableDictionaryRef attrs2;
    empty = CFDictionaryCreate(kCFAllocatorDefault, // our empty IOSurface properties dictionary
                               NULL,
                               NULL,
                               0,
                               &kCFTypeDictionaryKeyCallBacks,
                               &kCFTypeDictionaryValueCallBacks);
    attrs2 = CFDictionaryCreateMutable(kCFAllocatorDefault,
                                       1,
                                       &kCFTypeDictionaryKeyCallBacks,
                                       &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue(attrs2,
                         kCVPixelBufferIOSurfacePropertiesKey,
                         empty);

    //CVPixelBufferPoolCreatePixelBuffer (NULL, [assetWriterPixelBufferInput pixelBufferPool], &renderTarget);
    CVPixelBufferRef pixiel_bufer4e = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault,
                        (int)_screenWidth,
                        (int)_screenHeight,
                        kCVPixelFormatType_32BGRA,
                        attrs2,
                        &pixiel_bufer4e);

    CVOpenGLESTextureRef renderTexture;
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                 coreVideoTextureCashe, pixiel_bufer4e,
                                                 NULL, // texture attributes
                                                 GL_TEXTURE_2D,
                                                 GL_RGBA, // opengl format
                                                 (int)_screenWidth,
                                                 (int)_screenHeight,
                                                 GL_BGRA, // native iOS format
                                                 GL_UNSIGNED_BYTE,
                                                 0,
                                                 &renderTexture);
    CFRelease(attrs2);
    CFRelease(empty);

    glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);

    CVPixelBufferLockBaseAddress(pixiel_bufer4e, 0);
    if ([pixelAdapter appendPixelBuffer:pixiel_bufer4e withPresentationTime:currentTime]) {
        float result = currentTime.value;
        NSLog(@"here is the data and the current time is: %f", result);
        currentTime = CMTimeAdd(currentTime, frameLength);
    }
    CVPixelBufferUnlockBaseAddress(pixiel_bufer4e, 0);
    CVPixelBufferRelease(pixiel_bufer4e);
    CFRelease(renderTexture);
    // CFRelease(coreVideoTextureCashe);
}
I know I could optimize this by not doing all of these things every frame, but I wanted to get it working first. In cleanUpTextures I flush the texture cache with:

CVOpenGLESTextureCacheFlush(coreVideoTextureCashe, 0);
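For comparison, the cleanup in Apple's GLCameraRipple sample releases its CVOpenGLESTextureRefs before flushing, so the cache can actually reclaim them. A sketch of that pattern, assuming the texture ref were kept in an ivar (which the code above does not currently do):

- (void)cleanUpTextures
{
    // Release last frame's texture ref first...
    if (renderTexture) {
        CFRelease(renderTexture);
        renderTexture = NULL;
    }
    // ...then flush so the cache drops its internal references as well.
    CVOpenGLESTextureCacheFlush(coreVideoTextureCashe, 0);
}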
There might be something wrong with the RGBA stuff, or I don't know, but it seems that it's still picking up the wrong cache.
Answer (score: 4)
This isn't the approach I'd use for recording video. You're creating a new pixel buffer for every rendered frame, which will be slow, and you're never releasing it, so it's no surprise you're getting memory warnings.

Instead, follow what I describe in this answer. I create a pixel buffer for the cached texture once, assign that texture to the FBO I'm rendering to, then append that pixel buffer using the AVAssetWriter pixel buffer input on every frame. It's far faster to use the single pixel buffer than to recreate one every frame. You also want to leave the pixel buffer associated with your FBO's texture target, rather than associating it on every frame.
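A rough sketch of that idea, reusing the question's variable names (this is an illustration, not the exact GPUImage implementation; it assumes the writer session has already been started, so the adaptor's pool exists):

// One-time setup: a single pixel buffer backs both the FBO texture and the writer.
CVPixelBufferRef renderTarget = NULL;
CVOpenGLESTextureRef renderTexture = NULL;
CVPixelBufferPoolCreatePixelBuffer(NULL, [pixelAdapter pixelBufferPool], &renderTarget);
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                             coreVideoTextureCashe, renderTarget,
                                             NULL, GL_TEXTURE_2D, GL_RGBA,
                                             (int)_screenWidth, (int)_screenHeight,
                                             GL_BGRA, GL_UNSIGNED_BYTE, 0,
                                             &renderTexture);
glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
                       CVOpenGLESTextureGetName(renderTexture), 0);

// Per frame, after rendering into the FBO: wait for the GPU, then append the
// same buffer again, with no per-frame allocation at all.
glFinish(); // make sure rendering has finished before the encoder reads the buffer
if ([pixelAdapter appendPixelBuffer:renderTarget withPresentationTime:currentTime]) {
    currentTime = CMTimeAdd(currentTime, frameLength);
}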
If you want to see what this looks like in practice, take a look at how I encapsulate this recording code in GPUImageMovieWriter within my open source GPUImage framework. As I indicate in the answer linked above, doing the recording this way leads to extremely fast encodes.