I'm working in a scene that uses THREE.Points with an InstancedBufferGeometry and a RawShaderMaterial. I want to add raycasting to the scene so that when a point is clicked, I can determine which point was clicked.
In a previous scene [example], I was able to determine which point was clicked by accessing the .index attribute of the intersections returned by raycaster.intersectObject(). With the geometry and material below, however, that index is always 0.
Does anyone know how I can determine which point is clicked in the scene below? Any help others can offer would be greatly appreciated.
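For context, here is a minimal sketch of that earlier .index-based approach on a plain (non-instanced) THREE.Points mesh; it is illustrative only and assumes the same scene, camera, mouse, and raycaster setup used below:

// illustrative: .index-based picking on a regular THREE.Points mesh
var geometry = new THREE.BufferGeometry();
geometry.addAttribute('position', new THREE.BufferAttribute(
  new Float32Array([0, 0, 0, 20, 0, 0, 40, 0, 0]), 3));
var points = new THREE.Points(geometry, new THREE.PointsMaterial({size: 10}));
scene.add(points);
// on click: cast a ray and read the nearest hit's .index
raycaster.setFromCamera(mouse, camera);
var hits = raycaster.intersectObject(points);
if (hits.length) console.log('clicked point index:', hits[0].index);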
html, body { width: 100%; height: 100%; background: #000; }
body { margin: 0; overflow: hidden; }
canvas { width: 100%; height: 100%; }
<script src='https://cdnjs.cloudflare.com/ajax/libs/three.js/88/three.min.js'></script>
<script src='https://rawgit.com/YaleDHLab/pix-plot/master/assets/js/trackball-controls.js'></script>
<script type='x-shader/x-vertex' id='vertex-shader'>
/**
* The vertex shader's main() function must define `gl_Position`,
* which describes the position of each vertex in clip space coordinates.
*
* To do so, we can use the following variables defined by Three.js:
* attribute vec3 position - stores each vertex's position in model (local) space
* attribute vec2 uv - sets each vertex's texture coordinates
* uniform mat4 projectionMatrix - maps camera space into screen space
* uniform mat4 modelViewMatrix - combines:
* model matrix: maps a point's local coordinate space into world space
* view matrix: maps world space into camera space
*
* `attributes` can vary from vertex to vertex and are defined as arrays
* with length equal to the number of vertices. Each index in the array
* is an attribute for the corresponding vertex. Each attribute must
* contain n_vertices * n_components values, where n_components is the
* number of components in the given datatype (e.g. for a vec2,
* n_components = 2; for a float, n_components = 1)
* `uniforms` are constant across all vertices
* `varyings` are values passed from the vertex to the fragment shader
*
* For the full list of uniforms defined by three, see:
* https://threejs.org/docs/#api/renderers/webgl/WebGLProgram
**/
// set float precision
precision mediump float;
// specify geometry uniforms
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
// to get the camera attributes:
uniform vec3 cameraPosition;
// blueprint attributes
attribute vec3 position; // sets the blueprint's vertex positions
// instance attributes
attribute vec3 translation; // x y translation offsets for an instance
void main() {
// set point position
vec3 pos = position + translation;
vec4 projected = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
gl_Position = projected;
// use the delta between the point position and camera position to size point
float xDelta = pow(projected[0] - cameraPosition[0], 2.0);
float yDelta = pow(projected[1] - cameraPosition[1], 2.0);
float zDelta = pow(projected[2] - cameraPosition[2], 2.0);
float delta = pow(xDelta + yDelta + zDelta, 0.5);
gl_PointSize = 10000.0 / delta;
}
</script>
<script type='x-shader/x-fragment' id='fragment-shader'>
/**
* The fragment shader's main() function must define `gl_FragColor`,
* which determines the color of each pixel on the screen.
*
* To do so, we can use uniforms passed into the shader and varyings
* passed from the vertex shader.
*
* Attempting to read a varying not generated by the vertex shader will
* throw a warning but won't prevent the shader from compiling.
**/
precision highp float;
void main() {
gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
</script>
<script>
/**
* Generate a scene object with a background color
**/
function getScene() {
var scene = new THREE.Scene();
scene.background = new THREE.Color(0xaaaaaa);
return scene;
}
/**
* Generate the camera to be used in the scene. Camera args:
* [0] field of view: identifies the portion of the scene
* visible at any time (in degrees)
* [1] aspect ratio: identifies the aspect ratio of the
* scene in width/height
* [2] near clipping plane: objects closer than the near
* clipping plane are culled from the scene
* [3] far clipping plane: objects farther than the far
* clipping plane are culled from the scene
**/
function getCamera() {
var aspectRatio = window.innerWidth / window.innerHeight;
var camera = new THREE.PerspectiveCamera(75, aspectRatio, 0.1, 100000);
camera.position.set(0, 1, -6000);
return camera;
}
/**
* Generate the renderer to be used in the scene
**/
function getRenderer() {
// Create the canvas with a renderer
var renderer = new THREE.WebGLRenderer({antialias: true});
// Add support for retina displays
renderer.setPixelRatio(window.devicePixelRatio);
// Specify the size of the canvas
renderer.setSize(window.innerWidth, window.innerHeight);
// Add the canvas to the DOM
document.body.appendChild(renderer.domElement);
return renderer;
}
/**
* Generate the controls to be used in the scene
* @param {obj} camera: the three.js camera for the scene
* @param {obj} renderer: the three.js renderer for the scene
**/
function getControls(camera, renderer) {
var controls = new THREE.TrackballControls(camera, renderer.domElement);
controls.zoomSpeed = 0.4;
controls.panSpeed = 0.4;
return controls;
}
/**
* Set the current mouse coordinates {-1:1}
* @param {Event} event - triggered on canvas mouse move
**/
function onMousemove(event) {
mouse.x = ( event.clientX / window.innerWidth ) * 2 - 1;
mouse.y = - ( event.clientY / window.innerHeight ) * 2 + 1;
}
/**
* Store the previous mouse position so that when the next
* click event registers we can tell whether the user
* is clicking or dragging.
* @param {Event} event - triggered on canvas mousedown
**/
function onMousedown(event) {
lastMouse.copy(mouse);
}
/**
* Callback for mouseup events on the canvas: cast a ray from
* the camera through the mouse position and log any objects
* the ray intersects.
* @param {Event} event - triggered on canvas mouseup
**/
function onMouseup(event) {
// update the picking ray from the current mouse + camera positions
raycaster.setFromCamera(mouse, camera);
var selected = raycaster.intersectObjects(scene.children);
console.log(selected);
}
// add event listeners for the canvas
function addCanvasListeners() {
var canvas = document.querySelector('canvas');
canvas.addEventListener('mousemove', onMousemove, false);
canvas.addEventListener('mousedown', onMousedown, false);
canvas.addEventListener('mouseup', onMouseup, false);
}
/**
* Generate the points for the scene
* @param {obj} scene: the current scene object
**/
function addPoints(scene) {
// this geometry builds a blueprint and many copies of the blueprint
var geometry = new THREE.InstancedBufferGeometry();
geometry.addAttribute( 'position',
new THREE.BufferAttribute( new Float32Array( [0, 0, 0] ), 3));
// add data for each observation
var n = 10000; // number of observations
var rootN = n**(1/2);
var cellSize = 20;
var translation = new Float32Array( n * 3 );
var translationIterator = 0;
var unit = 0;
for (var i=0; i<n*3; i++) {
switch (i%3) {
case 0: // x dimension
translation[translationIterator++] = (unit % rootN) * cellSize;
break;
case 1: // y dimension
translation[translationIterator++] = Math.floor(unit / rootN) * cellSize;
break;
case 2: // z dimension
translation[translationIterator++] = 0;
break;
}
if (i % 3 == 2) unit++; // move to the next point after writing x, y, z
}
geometry.addAttribute( 'translation',
new THREE.InstancedBufferAttribute( translation, 3, 1 ) );
var material = new THREE.RawShaderMaterial({
vertexShader: document.getElementById('vertex-shader').textContent,
fragmentShader: document.getElementById('fragment-shader').textContent,
});
var mesh = new THREE.Points(geometry, material);
mesh.frustumCulled = false; // prevent the mesh from being clipped on drag
scene.add(mesh);
}
/**
* Render!
**/
function render() {
requestAnimationFrame(render);
renderer.render(scene, camera);
controls.update();
};
/**
* Main
**/
var scene = getScene();
var camera = getCamera();
var renderer = getRenderer();
var controls = getControls(camera, renderer);
// raycasting
var raycaster = new THREE.Raycaster();
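// threshold is measured in world units (not pixels); points within
// this distance of the ray register as intersections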
raycaster.params.Points.threshold = 10000;
var mouse = new THREE.Vector2();
var lastMouse = new THREE.Vector2();
addCanvasListeners();
// main
addPoints(scene);
render();
</script>
Answer 0 (score: 3)
One solution is to use a technique sometimes called GPU picking.
First, study https://threejs.org/examples/webgl_interactive_cubes_gpu.html.
Once you understand that concept, study https://threejs.org/examples/webgl_interactive_instances_gpu.html.
Another solution is to replicate on the CPU the instancing logic you implement on the GPU. You can do that with a custom raycast() method. Whether that is worth doing depends on the complexity of your use case.
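For illustration, here is a rough sketch of that CPU-side idea; pickInstance is a hypothetical helper (not part of three.js), and it assumes the single-vertex blueprint and translation attribute from the question, with no transform applied to the mesh itself:

// sketch: replicate the instancing math on the CPU and test each
// instance's world position against the picking ray
function pickInstance(mesh, raycaster) {
  var translations = mesh.geometry.getAttribute('translation');
  var base = new THREE.Vector3().fromBufferAttribute(
    mesh.geometry.getAttribute('position'), 0);
  var pos = new THREE.Vector3();
  var threshold = 10; // world-space hit radius; tune to your point size
  var closest = null;
  for (var i = 0; i < translations.count; i++) {
    pos.fromBufferAttribute(translations, i).add(base);
    var dist = raycaster.ray.distanceToPoint(pos);
    if (dist < threshold && (closest === null || dist < closest.distance)) {
      closest = {index: i, distance: dist};
    }
  }
  return closest; // nearest instance as {index, distance}, or null
}

Calling pickInstance(mesh, raycaster) after raycaster.setFromCamera(mouse, camera) returns the nearest instance's index, playing the role that .index plays in the non-instanced case.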
three.js r.95
Answer 1 (score: 0)
In case anyone else ends up stuck on this in the future, here is a brief example of an approach that worked in this case:
/**
* Generate a scene object with a background color
**/
function getScene() {
var scene = new THREE.Scene();
scene.background = new THREE.Color(0xaaaaaa);
return scene;
}
/**
* Generate the camera to be used in the scene. Camera args:
* [0] field of view: identifies the portion of the scene
* visible at any time (in degrees)
* [1] aspect ratio: identifies the aspect ratio of the
* scene in width/height
* [2] near clipping plane: objects closer than the near
* clipping plane are culled from the scene
* [3] far clipping plane: objects farther than the far
* clipping plane are culled from the scene
**/
function getCamera() {
var aspectRatio = window.innerWidth / window.innerHeight;
var camera = new THREE.PerspectiveCamera(75, aspectRatio, 0.1, 100000);
camera.position.set(0, 1, -6000);
return camera;
}
/**
* Generate the renderer to be used in the scene
**/
function getRenderer() {
// Create the canvas with a renderer
var renderer = new THREE.WebGLRenderer({antialias: true});
// Add support for retina displays
renderer.setPixelRatio(window.devicePixelRatio);
// Specify the size of the canvas
renderer.setSize(window.innerWidth, window.innerHeight);
// Add the canvas to the DOM
document.body.appendChild(renderer.domElement);
return renderer;
}
/**
* Generate the controls to be used in the scene
* @param {obj} camera: the three.js camera for the scene
* @param {obj} renderer: the three.js renderer for the scene
**/
function getControls(camera, renderer) {
var controls = new THREE.TrackballControls(camera, renderer.domElement);
controls.zoomSpeed = 0.4;
controls.panSpeed = 0.4;
return controls;
}
/**
* Generate the points for the scene
* @param {obj} scene: the current scene object
**/
function addPoints(scene) {
// this geometry builds a blueprint and many copies of the blueprint
var geometry = new THREE.InstancedBufferGeometry();
var BA = THREE.BufferAttribute;
var IBA = THREE.InstancedBufferAttribute;
// add data for each observation
var n = 10000; // number of observations
var rootN = n**(1/2);
var unit = 0;
var cellSize = 20;
var color = new THREE.Color();
var translations = new Float32Array( n * 3 );
var colors = new Float32Array( n * 3 );
var translationIterator = 0;
var colorIterator = 0;
for (var i=0; i<n; i++) {
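// encode this instance's id in the color, offset by +1 so that id 0
// is distinguishable from the black (0x000000) background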
var rgb = color.setHex(i+1);
translations[translationIterator++] = (i % rootN) * cellSize;
translations[translationIterator++] = Math.floor(i / rootN) * cellSize;
translations[translationIterator++] = 0;
colors[colorIterator++] = rgb.r;
colors[colorIterator++] = rgb.g;
colors[colorIterator++] = rgb.b;
}
var positionAttr = new BA( new Float32Array( [0, 0, 0] ), 3);
var translationAttr = new IBA(translations, 3, 1);
var colorAttr = new IBA(colors, 3, 1);
geometry.addAttribute('position', positionAttr);
geometry.addAttribute('translation', translationAttr);
geometry.addAttribute('color', colorAttr);
var material = getMaterial({useColors: 1.0});
var mesh = new THREE.Points(geometry, material);
mesh.frustumCulled = false; // prevent the mesh from being clipped on drag
scene.add(mesh);
pickingScene.add(mesh.clone());
}
function getMaterial(obj) {
var material = new THREE.RawShaderMaterial({
uniforms: {
useColor: {
type: 'f',
value: obj.useColors,
}
},
vertexShader: document.getElementById('vertex-shader').textContent,
fragmentShader: document.getElementById('fragment-shader').textContent,
});
return material;
}
/**
* Render!
**/
function render() {
requestAnimationFrame(render);
renderer.render(scene, camera);
controls.update();
};
/**
* Main
**/
var scene = getScene();
var camera = getCamera();
var renderer = getRenderer();
var controls = getControls(camera, renderer);
// picking
var w = window.innerWidth;
var h = window.innerHeight;
var pickingScene = new THREE.Scene();
var pickingTexture = new THREE.WebGLRenderTarget(w, h);
pickingTexture.texture.minFilter = THREE.LinearFilter;
var mouse = new THREE.Vector2();
var canvas = document.querySelector('canvas');
canvas.addEventListener('mousemove', function(e) {
renderer.render(pickingScene, camera, pickingTexture);
var pixelBuffer = new Uint8Array(4);
renderer.readRenderTargetPixels(
pickingTexture, e.clientX, pickingTexture.height - e.clientY,
1, 1, pixelBuffer );
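// rebuild the instance id from the r, g, b bytes; ids were encoded
// with a +1 offset, so id == 0 means nothing is under the cursor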
var id = (pixelBuffer[0]<<16)|(pixelBuffer[1]<<8)|(pixelBuffer[2]);
if (id) {
console.log(id, pixelBuffer);
var elem = document.querySelector('#selected');
elem.textContent = 'You are hovering on element number ' + (id-1);
}
})
// main
addPoints(scene);
render();
html, body { width: 100%; height: 100%; background: #000; }
body { margin: 0; overflow: hidden; }
canvas { width: 100%; height: 100%; }
#selected { position: absolute; top: 10px; left: 10px; }
<script src='https://cdnjs.cloudflare.com/ajax/libs/three.js/95/three.min.js'></script>
<script src='https://rawgit.com/YaleDHLab/pix-plot/master/assets/js/trackball-controls.js'></script>
<div id='selected'></div>
<script type='x-shader/x-vertex' id='vertex-shader'>
/**
* The vertex shader's main() function must define `gl_Position`,
* which describes the position of each vertex in clip space coordinates.
*
* To do so, we can use the following variables defined by Three.js:
* attribute vec3 position - stores each vertex's position in model (local) space
* attribute vec2 uv - sets each vertex's texture coordinates
* uniform mat4 projectionMatrix - maps camera space into screen space
* uniform mat4 modelViewMatrix - combines:
* model matrix: maps a point's local coordinate space into world space
* view matrix: maps world space into camera space
*
* `attributes` can vary from vertex to vertex and are defined as arrays
* with length equal to the number of vertices. Each index in the array
* is an attribute for the corresponding vertex. Each attribute must
* contain n_vertices * n_components values, where n_components is the
* number of components in the given datatype (e.g. for a vec2,
* n_components = 2; for a float, n_components = 1)
* `uniforms` are constant across all vertices
* `varyings` are values passed from the vertex to the fragment shader
*
* For the full list of uniforms defined by three, see:
* https://threejs.org/docs/#api/renderers/webgl/WebGLProgram
**/
precision mediump float;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
uniform vec3 cameraPosition;
attribute vec3 position; // blueprint's vertex positions
attribute vec3 color; // only used for raycasting
attribute vec3 translation; // x y translation offsets for an instance
varying vec3 vColor;
void main() {
vColor = color;
// set point position
vec3 pos = position + translation;
vec4 projected = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
gl_Position = projected;
// use the delta between the point position and camera position to size point
float xDelta = pow(projected[0] - cameraPosition[0], 2.0);
float yDelta = pow(projected[1] - cameraPosition[1], 2.0);
float zDelta = pow(projected[2] - cameraPosition[2], 2.0);
float delta = pow(xDelta + yDelta + zDelta, 0.5);
gl_PointSize = 10000.0 / delta;
}
</script>
<script type='x-shader/x-fragment' id='fragment-shader'>
/**
* The fragment shader's main() function must define `gl_FragColor`,
* which determines the color of each pixel on the screen.
*
* To do so, we can use uniforms passed into the shader and varyings
* passed from the vertex shader.
*
* Attempting to read a varying not generated by the vertex shader will
* throw a warning but won't prevent the shader from compiling.
**/
precision highp float;
varying vec3 vColor;
uniform float useColor;
void main() {
if (useColor == 1.) {
gl_FragColor = vec4(vColor, 1.0);
} else {
gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
}
</script>
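As a quick sanity check of the id-to-color scheme above (illustrative only, not part of the original answer), the encode and decode steps invert one another:

// round-trip check for the id <-> color encoding (assumes THREE is loaded)
var id = 1234;
var c = new THREE.Color().setHex(id + 1);
var r = Math.round(c.r * 255), g = Math.round(c.g * 255), b = Math.round(c.b * 255);
console.log(((r << 16) | (g << 8) | b) - 1 === id); // logs: true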