Is refraction possible in SceneKit?

Time: 2015-07-07 18:00:46

Tags: ios scenekit

Is it possible to make a shape that light passes through, so that you see through it via light bent by refraction? Like a lens or glass (or water)?

2 answers:

Answer 0 (score: 4):

To achieve refraction with SceneKit you will need an SCNProgram; the built-in shaders cannot do any refraction.

Based on the answer to this post (Which are the right Matrix Values to use in a metal shader passed by a SCNProgram to get a correct chrome like reflection), a refraction effect can be achieved with SceneKit as follows. (This example is based on ARKit.)

You will need:

  • Swift
  • SceneKit / ARKit
  • a skybox (we need something to refract and reflect)
  • an SCNProgram
  • a Metal shader
  • a physical device (recommended)

Create a new ARKit (SceneKit) project, then create or load a sphere-like geometry object and place it in space (an SCNSphere will do), as sketched below.
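A minimal sketch of this step, assuming the standard ARKit/SceneKit template with an ARSCNView outlet named sceneView (the radius, segment count and position are placeholder values):

let sphere = SCNSphere(radius: 0.15)
sphere.segmentCount = 96 // a finer tessellation gives a smoother refraction
let sphereNode = SCNNode(geometry: sphere)
sphereNode.position = SCNVector3(0, 0, -0.5) // half a meter in front of the world origin
sceneView.scene.rootNode.addChildNode(sphereNode)

let firstMaterial = sphere.firstMaterial! // the SCNProgram will later be attached to this material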

Install the skybox. Make sure your skybox consists of 6 individual images (a "cube map") - do not use a 2:1 sphere map, as those do not seem to work with the sampler in the Metal shader. A good source for skyboxes is https://www.humus.name

Create the six UIImages that hold the individual pictures of the skybox, like this:

let skybox1 = UIImage.init(named: "art.scnassets/subdir/image-PX.png")
let skybox2 = UIImage.init(named: "art.scnassets/subdir/image-NX.png")
let skybox3 = UIImage.init(named: "art.scnassets/subdir/image-PY.png")
let skybox4 = UIImage.init(named: "art.scnassets/subdir/image-NY.png")
let skybox5 = UIImage.init(named: "art.scnassets/subdir/image-PZ.png")
let skybox6 = UIImage.init(named: "art.scnassets/subdir/image-NZ.png")

The images must be square and should be a power of two (for best mipmapping). 512x512 works well; 1024x1024 already requires a lot of memory.

Create an SCNMaterialProperty that holds the array of individual skybox images, like this:

// Cube-Map Structure:
//      PY
//  NX  PZ  PX  NZ
//      NY

// Array Order:
// PX, NX, PY, NY, PZ, NZ

let envMapSkyboxMaterialProperty = SCNMaterialProperty(contents: [skybox1!,skybox2!,skybox3!,skybox4!,skybox5!,skybox6!])

(P = positive, N = negative)

Then set the skybox like this (we need it as the reflection/refraction background and for the lighting of the scene*):

myScene.background.contents = envMapSkyboxMaterialProperty?.contents

Also set the lighting environment**:

myScene.lightingEnvironment.contents = envMapSkyboxMaterialProperty?.contents

Assuming your geometry object is now placed in space with its default material, we are ready to wire up the SCNProgram with the special Metal shader for light refraction.

Create an SCNProgram and configure it like this:

let sceneProgramRefract = SCNProgram()
sceneProgramRefract.vertexFunctionName   = "myVertexRefract" // (myVertexRefract is the Keyword used in the shader)
sceneProgramRefract.fragmentFunctionName = "myFragmentRefract" // (myFragmentRefract is the Keyword used in the shader)

Attach the SCNProgram to the material of the target geometry node, like this:

firstMaterial.program = sceneProgramRefract // doing this will replace the entire built-in SceneKit shaders for that object.
firstMaterial.setValue(envMapSkyboxMaterialProperty, forKey: "cubeTexture") // (cubeTexture is the Keyword used in the shader to access the Skybox)

Add a new Metal file to your project and name it "shaders.metal".

Replace the entire contents of the Metal file with the following:

// Default Metal Header for SCNProgram
#include <metal_stdlib>
using namespace metal;
#include <SceneKit/scn_metal>

// Default Sampler for the Skybox
constexpr sampler cubeSampler;


// Nodebuffer (you only need the enabled Matrix floats)
struct MyNodeBuffer {
    // float4x4 modelTransform;
    // float4x4 inverseModelTransform;
    float4x4 modelViewTransform; // required
    // float4x4 inverseModelViewTransform;
    float4x4 normalTransform; // required
    // float4x4 modelViewProjectionTransform;
    // float4x4 inverseModelViewProjectionTransform;
};

// Input Struct
typedef struct {
    float3 position [[ attribute(SCNVertexSemanticPosition) ]];
    float3 normal   [[ attribute(SCNVertexSemanticNormal)   ]];
} MyVertexInput;

// Struct filled by the Vertex Shader
struct SimpleVertexRefract
{
    float4 position [[position]];
    float  k;
    float3 worldSpaceReflection;
    float3 worldSpaceRefraction;
};

// VERTEX SHADER
vertex SimpleVertexRefract myVertexRefract(MyVertexInput in [[stage_in]],
                                          constant SCNSceneBuffer& scn_frame [[buffer(0)]],
                                          constant MyNodeBuffer& scn_node [[buffer(1)]])
{
    float4 modelSpacePosition(in.position, 1.0f);
    float4 modelSpaceNormal(in.normal, 0.0f);

    // We'll be computing the reflection in eye space, so first we find the eye-space
    // position. This is also used to compute the clip-space position below.
    float4 eyeSpacePosition         = scn_node.modelViewTransform * modelSpacePosition;

    // We compute the eye-space normal in the usual way.
    float3 eyeSpaceNormal           = (scn_node.normalTransform * modelSpaceNormal).xyz;

    // The view vector in eye space is just the vector from the eye-space position.
    float3 eyeSpaceViewVector       = normalize(-eyeSpacePosition.xyz);

    float3 view_vec                 = normalize(eyeSpaceViewVector);
    float3 normal                   = normalize(eyeSpaceNormal);

    const float ETA                 = 1.12f; // (this defines the intensity of the refraction. 1.0 will be no refraction)
    float c                         = dot(view_vec, normal);
    float d                         = ETA * c;
    float k                         = clamp(d * d + (1.0f - ETA * ETA), 0.0f, 1.0f); // k is used in the fragment shader

    // For reflection / refraction:
    // To find the reflection/refraction vector, we reflect/refract the (inbound) view vector about the normal.
    float4 eyeSpaceReflection       = float4(reflect(-eyeSpaceViewVector, eyeSpaceNormal), 0.0f);
    float4 eyeSpaceRefraction       = float4(refract(-eyeSpaceViewVector, eyeSpaceNormal, ETA), 0.0f);

    // To sample the cube-map, we want a world-space reflection vector, so multiply
    // by the inverse view transform to go back from eye space to world space.
    float3 worldSpaceReflection     = (scn_frame.inverseViewTransform * eyeSpaceReflection).xyz;
    float3 worldSpaceRefraction     = (scn_frame.inverseViewTransform * eyeSpaceRefraction).xyz;

    // Fill the out-struct
    SimpleVertexRefract out;
    out.position                    = scn_frame.projectionTransform * eyeSpacePosition;
    out.k                           = k;
    out.worldSpaceReflection        = worldSpaceReflection;
    out.worldSpaceRefraction        = worldSpaceRefraction;
    return out;
}

// FRAGMENT SHADER
fragment float4 myFragmentRefract(SimpleVertexRefract in [[stage_in]],
                                  texturecube<float, access::sample> cubeTexture [[texture(0)]])
{
    // Since the reflection/refraction vectors' lengths will vary under interpolation, we normalize them
    // and flip them from the assumed right-hand space of the world to the left-hand space
    // of the interior of the cubemap.
    float3 worldSpaceReflection     = normalize(in.worldSpaceReflection) * float3(1.0f, 1.0f, -1.0f);
    float3 worldSpaceRefraction     = normalize(in.worldSpaceRefraction) * float3(1.0f, 1.0f, -1.0f);

    float3 reflection               = cubeTexture.sample(cubeSampler, worldSpaceReflection).rgb;
    float3 refraction               = cubeTexture.sample(cubeSampler, worldSpaceRefraction).rgb;

    float4 color;
    color.rgb                       = mix(reflection, refraction, float3(in.k)); // this is where k is finally used
    color.a                         = 1.0f;
    return color;
}

Compile and run. The effect should look like this:

Figure 1

*If you are using an AR scene: setting the skybox overrides the current camera feed, so you may want to back up the AR feed before setting the skybox. Define this globally:

var originalARSource: Any? = nil // backup of the original scene background (the live camera feed)
originalARSource = myScene.background.contents

You can jump back to the AR feed at any time by setting myScene.background.contents back to originalARSource.
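A minimal sketch of such a toggle, reusing the variables defined above (the helper name showSkybox is just a placeholder):

func showSkybox(_ show: Bool) {
    if show {
        originalARSource = myScene.background.contents // keep the camera feed around
        myScene.background.contents = envMapSkyboxMaterialProperty?.contents
    } else if let feed = originalARSource {
        myScene.background.contents = feed // restore the live AR camera feed
    }
}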

**In ARKit, make sure the tracking configuration's environment texturing is set to .none while the skybox is active:

configuration.environmentTexturing = .none
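For context, a sketch of how this fits into the session setup, assuming a plain ARWorldTrackingConfiguration run on the sceneView (only the environmentTexturing line is required by the steps above):

let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = [] // plane detection is not needed for this effect
configuration.environmentTexturing = .none // don't let ARKit generate its own environment maps
sceneView.session.run(configuration)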

Answer 1 (score: 2):

Yes, with the magic power of physics anything is possible! You will need to create your own shader. From Wikipedia:

  In the field of computer graphics, a shader is a computer program that is used to do shading: the production of appropriate levels of color within an image, or, in the modern era, also to produce special effects or do video post-processing. In layman's terms, it can be described as "a program that tells a computer how to draw something in a specific and unique way".

If you are interested, objc.io has a great tutorial on SceneKit.
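As a lighter-weight starting point than a full SCNProgram, SceneKit also offers shader modifiers, which inject a snippet into the built-in shaders instead of replacing them. A minimal sketch is below; the snippet only tints the built-in output as a placeholder, and an actual refraction effect still needs the SCNProgram approach from the first answer:

let material = SCNMaterial()
material.shaderModifiers = [
    .fragment: """
    // placeholder: darken the built-in fragment output slightly
    _output.color.rgb *= 0.9;
    """
]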