I managed to integrate the Vuforia SDK image target tracking feature into an iOS project by combining the OpenGL context (EAGLContext) provided by the SDK with an instance of SceneKit's SCNRenderer. This lets me leverage the simplicity of SceneKit's 3D API while still benefiting from Vuforia's highly accurate image detection. Now I would like to do the same thing with Metal instead of OpenGL.
Some backstory
I was able to use OpenGL to draw SceneKit objects on top of the live video texture rendered by Vuforia without any major issues.
Here is the simplified setup I used with OpenGL:
func configureRenderer(for context: EAGLContext) {
    self.renderer = SCNRenderer(context: context, options: nil)
    self.scene = SCNScene()
    renderer.scene = scene
    // other SceneKit setup
}

func render() {
    // manipulate SceneKit nodes
    renderer.render(atTime: CFAbsoluteTimeGetCurrent())
}
Apple deprecates OpenGL in iOS 12
Since Apple announced that it is deprecating OpenGL as of iOS 12, I figured it would be a good idea to try migrating this project to use Metal instead of OpenGL.
In theory this should be straightforward, since Vuforia supports Metal out of the box. However, I ran into trouble when trying to integrate the two.
The problem
The view only seems to render either the result of the SceneKit renderer or the texture encoded by Vuforia, never both, depending on which one is encoded first. What do I have to do to blend the two results together?
In short, here is the problematic setup:
func configureRenderer(for device: MTLDevice) {
    self.renderer = SCNRenderer(device: device, options: nil)
    self.scene = SCNScene()
    renderer.scene = scene
    // other SceneKit setup
}

func render(viewport: CGRect, commandBuffer: MTLCommandBuffer, drawable: CAMetalDrawable) {
    // manipulate SceneKit nodes

    let renderPassDescriptor = MTLRenderPassDescriptor()
    renderPassDescriptor.colorAttachments[0].texture = drawable.texture
    renderPassDescriptor.colorAttachments[0].loadAction = .load
    renderPassDescriptor.colorAttachments[0].storeAction = .store
    renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 0)

    renderer!.render(withViewport: viewport, commandBuffer: commandBuffer, passDescriptor: renderPassDescriptor)
}
I tried calling the `render(viewport:commandBuffer:drawable:)` function either before `commandBuffer.renderCommandEncoderWithDescriptor` or after `encoder.endEncoding`:
metalDevice = MTLCreateSystemDefaultDevice();
metalCommandQueue = [metalDevice newCommandQueue];

id<MTLCommandBuffer> commandBuffer = [metalCommandQueue commandBuffer];

// -----> call `render(viewport:commandBuffer:drawable:)` here <-----

id<MTLRenderCommandEncoder> encoder = [commandBuffer renderCommandEncoderWithDescriptor:renderPassDescriptor];

// calls to encoder to render textures from Vuforia

[encoder endEncoding];

// -----> or here <-----

[commandBuffer presentDrawable:drawable];
[commandBuffer commit];
In either case, I only see the result of either the SCNRenderer or the encoder, never both in the same view. It looks to me as if the encoding above and SCNRenderer.render are each overwriting the other's buffer.
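For reference, my understanding of why one pass can wipe out the other: whichever pass runs second starts from the drawable's existing pixels only if its color attachment uses loadAction = .load; with .clear or the default .dontCare the earlier result is discarded. A minimal sketch of such a compositing descriptor (not the project code, just how I read the Metal API):

import Metal
import QuartzCore

// Sketch: a pass descriptor that composites on top of whatever an earlier pass
// already stored into the drawable, instead of clearing or discarding it.
func makeCompositingPassDescriptor(for drawable: CAMetalDrawable) -> MTLRenderPassDescriptor {
    let descriptor = MTLRenderPassDescriptor()
    descriptor.colorAttachments[0].texture = drawable.texture
    descriptor.colorAttachments[0].loadAction = .load     // keep the earlier pass's output
    descriptor.colorAttachments[0].storeAction = .store   // keep this pass's output for presentation
    return descriptor
}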
What am I missing here?
Answer (score: 0)
I think I have found the answer. I render the SCNRenderer after endEncoding, but I create a new render pass descriptor for it.
// Pass Metal context data to the Vuforia Engine (we may have changed the encoder since
// calling Vuforia::Renderer::begin)
finishRender(UnsafeMutableRawPointer(Unmanaged.passRetained(drawable!.texture).toOpaque()),
             UnsafeMutableRawPointer(Unmanaged.passRetained(encoder!).toOpaque()))

// ========== Finish Metal rendering ==========
encoder?.endEncoding()

// Command completed handler: signal the frame semaphore once execution finishes
commandBuffer?.addCompletedHandler { _ in self.mCommandExecutingSemaphore.signal() }

// Render the SceneKit scene with a fresh render pass descriptor
let screenSize = UIScreen.main.bounds.size
let newDescriptor = MTLRenderPassDescriptor()

// Draw into the drawable's texture
newDescriptor.colorAttachments[0].texture = drawable?.texture

// Store the data in the texture when rendering is complete
newDescriptor.colorAttachments[0].storeAction = .store

// Use mDepthTexture for depth operations
newDescriptor.depthAttachment.texture = mDepthTexture

renderer?.render(atTime: 0,
                 viewport: CGRect(x: 0, y: 0, width: screenSize.width, height: screenSize.height),
                 commandBuffer: commandBuffer!,
                 passDescriptor: newDescriptor)

// Present the drawable when the command buffer has been executed (Metal
// asks CoreAnimation to put the texture on the display when rendering is complete)
commandBuffer?.present(drawable!)

// Commit the command buffer for execution as soon as possible
commandBuffer?.commit()
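One detail worth double-checking in the snippet above: newDescriptor never sets a loadAction for its color attachment, and as far as I know the default is .dontCare, which formally leaves the texture's previous contents undefined when the SceneKit pass begins. If the Vuforia background ever drops out again, explicitly loading it should be the safer choice:

// Assumption: preserve whatever the Vuforia encoder stored into drawable.texture
// before SceneKit renders on top of it.
newDescriptor.colorAttachments[0].loadAction = .load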