I'm trying to use an HLS video as a texture in a GLKView. I set up the video output like this:
NSDictionary *settings = @{
    (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange],
    (id)kCVPixelBufferOpenGLCompatibilityKey : [NSNumber numberWithBool:YES]
};
self.videoOutput = [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:settings];
[self.videoOutput requestNotificationOfMediaDataChangeWithAdvanceInterval:ONE_FRAME_DURATION];
[playerItem addOutput:self.videoOutput];
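For context, that change notification is delivered to a pull delegate attached to the output. A minimal sketch of that wiring, modeled on Apple's AVBasicVideoOutput sample (the displayLink property and queue name are assumptions, not taken from the code above):

dispatch_queue_t videoOutputQueue = dispatch_queue_create("videoOutputQueue", DISPATCH_QUEUE_SERIAL);
[self.videoOutput setDelegate:self queue:videoOutputQueue];

// AVPlayerItemOutputPullDelegate callback: resume pulling frames once media data is available.
- (void)outputMediaDataWillChange:(AVPlayerItemOutput *)sender {
    dispatch_async(dispatch_get_main_queue(), ^{
        self.displayLink.paused = NO; // assumed CADisplayLink that drives the per-frame pull below
    });
}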
When the output reports that new data has arrived, I retrieve it like this:
outputItemTime = [self.videoOutput itemTimeForHostTime:nextVSync]; // nextVSync is calculated from the display link timestamps
if ([self.videoOutput hasNewPixelBufferForItemTime:outputItemTime]) {
    CVPixelBufferRef pixelBuffer = NULL;
    pixelBuffer = [self.videoOutput copyPixelBufferForItemTime:outputItemTime itemTimeForDisplay:NULL];
    // ... create the textures from pixelBuffer (below); since this is a Copy call,
    // the buffer must eventually be balanced with CFRelease(pixelBuffer)
}
The pixel buffer shows up in the log like this:
<CVPixelBuffer 0x146d1b50 width=1024 height=680 pixelFormat=420v iosurface=0x1657fb24 planes=2>
<Plane 0 width=1024 height=680 bytesPerRow=1024>
<Plane 1 width=512 height=340 bytesPerRow=1024>
<attributes=<CFBasicHash 0x145da410 [0x3b84aae0]>{type = immutable dict, count = 5,
entries =>
0 : <CFString 0x3b8cac84 [0x3b84aae0]>{contents = "Width"} = <CFNumber 0x145c2260 [0x3b84aae0]>{value = +1024.0000000000, type = kCFNumberFloat32Type}
1 : <CFString 0x3b8cb134 [0x3b84aae0]>{contents = "OpenGLCompatibility"} = <CFBoolean 0x3b84ae90 [0x3b84aae0]>{value = true}
3 : <CFString 0x3b8cb154 [0x3b84aae0]>{contents = "IOSurfaceProperties"} = <CFBasicHash 0x145d9e40 [0x3b84aae0]>{type = mutable dict, count = 1,
entries =>
2 : <CFString 0x3b8cada4 [0x3b84aae0]>{contents = "IOSurfacePurgeWhenNotInUse"} = <CFBoolean 0x3b84ae90 [0x3b84aae0]>{value = true}
}
5 : <CFString 0x3b8cb0e4 [0x3b84aae0]>{contents = "PixelFormatType"} = <CFNumber 0x145c2160 [0x3b84aae0]>{value = +875704438, type = kCFNumberSInt32Type}
6 : <CFString 0x3b8cac94 [0x3b84aae0]>{contents = "Height"} = <CFNumber 0x145c2270 [0x3b84aae0]>{value = +680.0000000000, type = kCFNumberFloat32Type}
}
propagatedAttachments=<CFBasicHash 0x146d1bb0 [0x3b84aae0]>{type = mutable dict, count = 7,
entries =>
0 : <CFString 0x3b8caff4 [0x3b84aae0]>{contents = "CVImageBufferTransferFunction"} = <CFString 0x146aef90 [0x3b84aae0]>{contents = "ITU_R_709_2"}
1 : <CFString 0x3b8caf74 [0x3b84aae0]>{contents = "CVImageBufferYCbCrMatrix"} = <CFString 0x146d1b30 [0x3b84aae0]>{contents = "ITU_R_709_2"}
3 : <CFString 0x3b8cb014 [0x3b84aae0]>{contents = "CVImageBufferChromaLocationTopField"} = <CFString 0x3b83d1d0 [0x3b84aae0]>{contents = "Center"}
9 : <CFString 0x3b8cb024 [0x3b84aae0]>{contents = "CVImageBufferChromaLocationBottomField"} = <CFString 0x3b83d1d0 [0x3b84aae0]>{contents = "Center"}
10 : <CFString 0x3b8cafb4 [0x3b84aae0]>{contents = "CVImageBufferColorPrimaries"} = <CFString 0x146d1b10 [0x3b84aae0]>{contents = "ITU_R_709_2"}
11 : <CFString 0x3b8caeb4 [0x3b84aae0]>{contents = "CVFieldCount"} = <CFNumber 0x1458d890 [0x3b84aae0]>{value = +1, type = kCFNumberSInt32Type}
12 : <CFString 0x1467cb60 [0x3b84aae0]>{contents = "QTMovieTime"} = <CFBasicHash 0x146c6010 [0x3b84aae0]>{type = mutable dict, count = 2,
entries =>
0 : <CFString 0x14687e80 [0x3b84aae0]>{contents = "TimeValue"} = <CFNumber 0x1458d3b0 [0x3b84aae0]>{value = +0, type = kCFNumberSInt32Type}
2 : <CFString 0x146df3e0 [0x3b84aae0]>{contents = "TimeScale"} = <CFNumber 0x146c6040 [0x3b84aae0]>{value = +90000, type = kCFNumberSInt32Type}
}
}
nonPropagatedAttachments=<CFBasicHash 0x1468f700 [0x3b84aae0]>{type = mutable dict, count = 0,
entries =>
}
Then I extract the textures from the buffer like this:
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                             _videoTextureCache,
                                             pixelBuffer,
                                             NULL,
                                             GL_TEXTURE_2D,
                                             GL_RED_EXT,
                                             frameWidth,
                                             frameHeight,
                                             GL_RED_EXT,
                                             GL_UNSIGNED_BYTE,
                                             0,
                                             &_lumaTexture);

CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                             _videoTextureCache,
                                             pixelBuffer,
                                             NULL,
                                             GL_TEXTURE_2D,
                                             GL_RG_EXT,
                                             frameWidth / 2,
                                             frameHeight / 2,
                                             GL_RG_EXT,
                                             GL_UNSIGNED_BYTE,
                                             1,
                                             &_chromaTexture);
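For reference, Apple's AVBasicVideoOutput sample does some housekeeping around these two calls: it selects the target texture unit first (glActiveTexture(GL_TEXTURE0) before the luma call, glActiveTexture(GL_TEXTURE1) before the chroma call), and before creating a new pair it releases the previous frame's texture refs and flushes the cache. A sketch of that cleanup, with the method name being my own:

- (void)cleanUpTextures {
    // Release the CVOpenGLESTextureRefs from the previous frame...
    if (_lumaTexture) {
        CFRelease(_lumaTexture);
        _lumaTexture = NULL;
    }
    if (_chromaTexture) {
        CFRelease(_chromaTexture);
        _chromaTexture = NULL;
    }
    // ...then flush the cache so their GL names can be recycled.
    CVOpenGLESTextureCacheFlush(_videoTextureCache, 0);
}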
They show up in the log as follows. The luma texture:
<CVOpenGLESTextureRef 0x19d7d210 size=1024,680 target=0de1 name=1 isFlipped=YES propagatedAttachments=<CFBasicHash 0x19d7f1f0 [0x3b84aae0]>{type = mutable dict, count = 0,
entries =>
}
nonPropagatedAttachments=<CFBasicHash 0x19d7e180 [0x3b84aae0]>{type = mutable dict, count = 0,
entries =>
}
And the chroma texture:
<CVOpenGLESTextureRef 0x19d7d2a0 size=1024,680 target=0de1 name=2 isFlipped=YES propagatedAttachments=<CFBasicHash 0x19d7e330 [0x3b84aae0]>{type = mutable dict, count = 0,
entries =>
}
nonPropagatedAttachments=<CFBasicHash 0x19d793b0 [0x3b84aae0]>{type = mutable dict, count = 0,
entries =>
}
After that, I bind them into my scene:
- (void)bindTexture:(CVOpenGLESTextureRef)texture luma:(BOOL)luma {
    GLKEffectPropertyTexture *t2d = luma ? self.effect.texture2d0 : self.effect.texture2d1;
    if (texture) {
        if (t2d.name != 0) {
            GLuint name = t2d.name;
            glDeleteTextures(1, &name);
        }
        [self updateVertexData]; // this does not perform any OpenGL calls, just deals with arrays in memory
        glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture));
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
        t2d.enabled = GL_TRUE;
        t2d.envMode = GLKTextureEnvModeReplace;
        t2d.target = GLKTextureTarget2D;
        t2d.name = CVOpenGLESTextureGetName(texture);
    }
}
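A hypothetical call site for this method, assumed from the snippets above rather than shown in the original code, would run once per frame right after the two create calls:

[self bindTexture:_lumaTexture luma:YES];
[self bindTexture:_chromaTexture luma:NO];

Note that glTexParameteri affects whichever texture is currently bound, which is why the glBindTexture call precedes those lines.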
This plays the video, but instead of a texture I get digital noise on the surface that changes as the video progresses. What am I doing wrong?

I suspect that the PixelFormatType of the pixel buffer I get back is not what I asked for or expect. Is there a way to check it?
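A minimal check, with the FourCC unpacking being my own helper code, would be:

OSType fmt = CVPixelBufferGetPixelFormatType(pixelBuffer);
// Unpack the OSType into its four-character code for readable logging.
NSLog(@"pixel format: %c%c%c%c (0x%08x)",
      (char)(fmt >> 24), (char)(fmt >> 16), (char)(fmt >> 8), (char)fmt,
      (unsigned int)fmt);

For what it's worth, the PixelFormatType value +875704438 already logged above is 0x34323076, i.e. '420v', which is kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange, so the buffer does appear to match the requested format.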
Answer 0 (score: 0)
Have you set up an EAGLContext?
EAGLContext *context = [[[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2] autorelease]; // drop the autorelease under ARC
[EAGLContext setCurrentContext:context];
If so, was _videoTextureCache created successfully? You should also flush it before creating textures from the CVPixelBuffer:
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, [EAGLContext currentContext], NULL, &_videoTextureCache);
CVOpenGLESTextureCacheFlush(_videoTextureCache, 0);
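A quick way to verify the cache creation is to check the CVReturn result (a sketch, assuming the same variable names):

CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL,
                                            [EAGLContext currentContext], NULL,
                                            &_videoTextureCache);
if (err != kCVReturnSuccess) {
    NSLog(@"CVOpenGLESTextureCacheCreate failed: %d", err);
}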
One last thing: put NSLog(@"%d", glGetError()); after each GL call. A result of 0 means no error; any other value indicates an error.
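A small helper macro along those lines, with the name and form being my own, keeps that from getting tedious:

// Hypothetical helper: log any pending GL error together with its location.
#define GL_CHECK() do { \
    GLenum e = glGetError(); \
    if (e != GL_NO_ERROR) NSLog(@"GL error 0x%04x at %s:%d", e, __FILE__, __LINE__); \
} while (0)

Drop GL_CHECK(); after each suspect GL call.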
Hope this helps.