Short question: how can I pass a list of textures to a shader and access the nth texture in the fragment shader (where n is a value passed from the vertex shader as a varying)?
Longer question: I'm working on a Three.js scene that represents a number of images. Each image uses one of several textures, and each texture is an atlas containing several thumbnails. I'm trying to implement a custom ShaderMaterial to optimize performance, but I'm confused about how to use multiple textures inside the shaders.
My goal is to pass in a list of textures and a number indicating how many vertices each texture covers, so that I can work out which texture should be used for each image's vertices/fragments. I thought I could accomplish this by passing the following data:
// Create a texture loader so we can load our image file
var loader = new THREE.TextureLoader();
// specify the url to the texture
var catUrl = 'https://s3.amazonaws.com/duhaime/blog/tsne-webgl/assets/cat.jpg';
var dogUrl = 'https://s3.amazonaws.com/duhaime/blog/tsne-webgl/assets/dog.jpg';
var material = new THREE.ShaderMaterial({
uniforms: {
verticesPerTexture: { type: 'f', value: 4.0 }, // count of vertices per texture
textures: {
type: 'tv', // type for texture array
value: [loader.load(catUrl), loader.load(dogUrl)],
}
},
vertexShader: document.getElementById('vertex-shader').textContent,
fragmentShader: document.getElementById('fragment-shader').textContent
});
However, if I do this, the vertex shader doesn't seem to be able to use a uniform to tell the fragment shader which texture it should use, because a vertex shader apparently can't pass a sampler2D to the fragment shader as a varying. How can I pass a list of textures to the shaders?
Full code (which does not successfully pass the list of textures):
/**
* Generate a scene object with a background color
**/
function getScene() {
var scene = new THREE.Scene();
scene.background = new THREE.Color(0xffffff);
return scene;
}
/**
* Generate the camera to be used in the scene. Camera args:
* [0] field of view: identifies the portion of the scene
* visible at any time (in degrees)
* [1] aspect ratio: identifies the aspect ratio of the
* scene in width/height
* [2] near clipping plane: objects closer than the near
* clipping plane are culled from the scene
* [3] far clipping plane: objects farther than the far
* clipping plane are culled from the scene
**/
function getCamera() {
var aspectRatio = window.innerWidth / window.innerHeight;
var camera = new THREE.PerspectiveCamera(75, aspectRatio, 0.1, 1000);
camera.position.set(0, 1, 10);
return camera;
}
/**
* Generate the renderer to be used in the scene
**/
function getRenderer() {
// Create the canvas with a renderer
var renderer = new THREE.WebGLRenderer({antialias: true});
// Add support for retina displays
renderer.setPixelRatio(window.devicePixelRatio);
// Specify the size of the canvas
renderer.setSize(window.innerWidth, window.innerHeight);
// Add the canvas to the DOM
document.body.appendChild(renderer.domElement);
return renderer;
}
/**
* Generate the controls to be used in the scene
* @param {obj} camera: the three.js camera for the scene
* @param {obj} renderer: the three.js renderer for the scene
**/
function getControls(camera, renderer) {
var controls = new THREE.TrackballControls(camera, renderer.domElement);
controls.zoomSpeed = 0.4;
controls.panSpeed = 0.4;
return controls;
}
/**
* Load image
**/
function loadImage() {
var geometry = new THREE.BufferGeometry();
/*
Now we need to push some vertices into that geometry to identify the coordinates the geometry should cover
*/
// Identify the image size
var imageSize = {width: 10, height: 7.5};
// Identify the x, y, z coords where the image should be placed
var coords = {x: -5, y: -3.75, z: 0};
// Add one vertex for each corner of the image, using the
// following order: lower left, lower right, upper right, upper left
var vertices = new Float32Array([
coords.x, coords.y, coords.z, // bottom left
coords.x+imageSize.width, coords.y, coords.z, // bottom right
coords.x+imageSize.width, coords.y+imageSize.height, coords.z, // upper right
coords.x, coords.y+imageSize.height, coords.z, // upper left
])
// set the uvs for this box; these identify the following corners:
// lower-left, lower-right, upper-right, upper-left
var uvs = new Float32Array([
0.0, 0.0,
1.0, 0.0,
1.0, 1.0,
0.0, 1.0,
])
// store the texture index of each object to be rendered
var textureIndices = new Float32Array([0.0, 0.0, 0.0, 0.0]);
// indices = sequence of index positions in `vertices` to use as vertices
// we make two triangles but only use 4 distinct vertices in the object
// the second argument to THREE.BufferAttribute is the number of elements
// in the first argument per vertex
geometry.setIndex([0,1,2, 2,3,0])
geometry.addAttribute('position', new THREE.BufferAttribute(vertices, 3));
geometry.addAttribute('uv', new THREE.BufferAttribute(uvs, 2));
// Create a texture loader so we can load our image file
var loader = new THREE.TextureLoader();
// specify the url to the texture
var catUrl = 'https://s3.amazonaws.com/duhaime/blog/tsne-webgl/assets/cat.jpg';
var dogUrl = 'https://s3.amazonaws.com/duhaime/blog/tsne-webgl/assets/dog.jpg';
// specify custom uniforms and attributes for shaders
// Uniform types: https://github.com/mrdoob/three.js/wiki/Uniforms-types
var material = new THREE.ShaderMaterial({
uniforms: {
verticesPerTexture: { type: 'f', value: 4.0 }, // store the count of vertices per texture
cat_texture: {
type: 't',
value: loader.load(catUrl),
},
dog_texture: {
type: 't',
value: loader.load(dogUrl),
},
textures: {
type: 'tv', // type for texture array
value: [loader.load(catUrl), loader.load(dogUrl)],
}
},
vertexShader: document.getElementById('vertex-shader').textContent,
fragmentShader: document.getElementById('fragment-shader').textContent
});
// Combine our image geometry and material into a mesh
var mesh = new THREE.Mesh(geometry, material);
// Set the position of the image mesh in the x,y,z dimensions
mesh.position.set(0,0,0)
// Add the image to the scene
scene.add(mesh);
}
/**
* Render!
**/
function render() {
requestAnimationFrame(render);
renderer.render(scene, camera);
controls.update();
};
var scene = getScene();
var camera = getCamera();
var renderer = getRenderer();
var controls = getControls(camera, renderer);
loadImage();
render();
html, body { width: 100%; height: 100%; background: #000; }
body { margin: 0; overflow: hidden; }
canvas { width: 100%; height: 100%; }
<script src='https://cdnjs.cloudflare.com/ajax/libs/three.js/92/three.min.js'></script>
<script src='https://threejs.org/examples/js/controls/TrackballControls.js'></script>
<script type='x-shader/x-vertex' id='vertex-shader'>
/**
* The vertex shader's main() function must define `gl_Position`,
* which describes the position of each vertex in the space.
*
* To do so, we can use the following variables defined by Three.js:
*
* uniform mat4 modelViewMatrix - combines:
* model matrix: maps a point's local coordinate space into world space
* view matrix: maps world space into camera space
*
* uniform mat4 projectionMatrix - maps camera space into screen space
*
* attribute vec3 position - sets the position of each vertex
*
* attribute vec2 uv - determines the relationship between vertices and textures
*
* `uniforms` are constant across all vertices
*
* `attributes` can vary from vertex to vertex and are defined as arrays
* with length equal to the number of vertices. Each index in the array
* is an attribute for the corresponding vertex
*
* `varyings` are values passed from the vertex to the fragment shader
*
 * Specifying attributes that are not passed to the vertex shader will not prevent the shader from compiling
**/
// declare uniform vals
uniform float verticesPerTexture; // store the vertices per texture
// declare variables to pass to fragment shaders
varying vec2 vUv; // pass the uv coordinates of each vertex to the frag shader
varying float textureIndex; // pass the texture idx
// initialize counters
float vertexIdx = 0.0; // stores the index position of the current vertex
float textureIdx = 1.0; // store the index position of the current texture
void main() {
// keep track of which texture each vertex belongs to
vertexIdx = vertexIdx + 1.0;
if (vertexIdx == verticesPerTexture) {
textureIdx = textureIdx + 1.0;
vertexIdx = 0.0;
}
vUv = uv;
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
</script>
<script type='x-shader/x-fragment' id='fragment-shader'>
/**
* The fragment shader's main() function must define `gl_FragColor`,
* which describes the pixel color of each pixel on the screen.
*
* To do so, we can use uniforms passed into the shader and varyings
* passed from the vertex shader
*
* Attempting to read a varying not generated by the vertex shader will
* throw a warning but won't prevent shader compiling
*
* Each attribute must contain n_vertices * n_components, where n_components
* is the length of the given datatype (e.g. vec2 n_components = 2;
* float n_components = 1)
**/
precision highp float; // set float precision (optional)
varying vec2 vUv; // identify the uv values as a varying attribute
varying float textureIndex; // identify the texture indices as a varying attribute
uniform sampler2D cat_texture; // identify the texture as a uniform argument
uniform sampler2D dog_texture; // identify the texture as a uniform argument
//uniform sampler2D textures;
// TODO pluck out textures[textureIndex];
//uniform sampler2D textures[int(textureIndex)];
void main() {
int textureIdx = int(textureIndex);
// floating point arithmetic prevents strict equality checking
if ( (textureIndex - 1.0) < 0.1 ) {
gl_FragColor = texture2D(cat_texture, vUv);
} else {
gl_FragColor = texture2D(dog_texture, vUv);
}
}
</script>
Answer 0 (score: 1)
Having slept on it, here is another approach you can try, one that is closer to the way you would use the built-in materials:
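One way to read that hint is to skip the custom ShaderMaterial entirely and let the renderer switch between built-in materials per group of faces. The following is only a sketch of that idea (not the original answer's code); the group ranges assume two indexed quads of 6 indices each:
// sketch: one MeshBasicMaterial per atlas, selected per index group
var loader = new THREE.TextureLoader();
var materials = [
  new THREE.MeshBasicMaterial({ map: loader.load(catUrl) }),
  new THREE.MeshBasicMaterial({ map: loader.load(dogUrl) })
];
// addGroup(start, count, materialIndex) draws `count` indices starting at
// `start` with materials[materialIndex]
geometry.addGroup(0, 6, 0); // first quad -> cat texture
geometry.addGroup(6, 6, 1); // second quad -> dog texture
var mesh = new THREE.Mesh(geometry, materials);
scene.add(mesh);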
Answer 1 (score: 0)
You've written your vertex shader as if main were a for loop that iterates over all the vertices, updating vertexIdx and textureIdx as it goes, but that's not how shaders work. Shaders run in parallel, processing every vertex simultaneously, so you can't share values computed for one vertex with another vertex.
Use attributes on the geometry instead:
geometry.addAttribute( 'texIndex', new THREE.BufferAttribute( new Float32Array([ 0, 0, 0, 0, 1, 1, 1, 1 ]), 1 ) ); // BufferAttribute expects a typed array
I'm getting a little out of my depth here, but I think you then pass it through the vertex shader to a varying:
attribute float texIndex; // WebGL1 GLSL doesn't allow int attributes or varyings, so use float
varying float vTexIndex;
void main () { vTexIndex = texIndex; gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0); }
And finally, in the fragment shader:
varying float vTexIndex;
uniform sampler2D textures[ 2 ];
...
vec4 texColor = texture2D( textures[ int(vTexIndex) ], vUv );
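One caveat with that last lookup: GLSL ES 1.0, the dialect WebGL1 (and therefore Three.js r92) compiles, only allows constant index expressions into sampler arrays, so indexing textures with a varying will generally fail to compile. Here is a minimal fragment-shader sketch that keeps the textures array but selects the sampler with a branch instead (the 0.5 threshold is just a float-comparison convenience, not from the original answer):
precision highp float;
varying vec2 vUv;
varying float vTexIndex;
uniform sampler2D textures[ 2 ];

void main() {
  // WebGL1 forbids textures[int(vTexIndex)] with a non-constant index,
  // so branch on the varying instead
  if (vTexIndex < 0.5) {
    gl_FragColor = texture2D(textures[0], vUv);
  } else {
    gl_FragColor = texture2D(textures[1], vUv);
  }
}
The corresponding uniform can be declared just as in the question, e.g. textures: { type: 'tv', value: [loader.load(catUrl), loader.load(dogUrl)] }.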