Graham Wakefield edited this page May 17, 2021 · 12 revisions

Built on WebGL

Probably the most widely used library for working with WebGL. Whereas the WebGL (and OpenGL) APIs are relatively low-level and state-based, Three.js offers a more mid-level, object-based interface, including a scene graph.

Note: WebGL2 support is still "in progress", and quite a few WebGL2 features do not appear to be active yet (as of late 2018).

Minimal Three.js example

At the heart of any Three.js project is an animation loop in which a THREE.WebGLRenderer takes a THREE.Camera and a THREE.Scene to actually draw to the screen. Properties of the camera and members of the scene may also be modified within the animation loop:

function render() {
	// update members & properties of the scene here for animation
	// TODO

	// now render the scene:
	renderer.render(scene, camera);
}
renderer.setAnimationLoop(render);

Before this loop there will be setup code to define the renderer, the camera, and the scene. Three.js' ontology is roughly as follows:

  • Renderer (THREE.WebGLRenderer)
  • Camera (THREE.PerspectiveCamera, THREE.OrthographicCamera, THREE.StereoCamera, etc.)
  • Scene (THREE.Scene)
    • Meshes (THREE.Mesh or THREE.SkinnedMesh)
      • Geometry (THREE has lots of built-in geometry types from simple BoxGeometry to arbitrary BufferGeometry)
      • Material (THREE has lots of built-in material types from MeshBasicMaterial to customized ShaderMaterial)
    • Lights (THREE has several light types to choose from)
    • Possibly other scene entities

A brief example:

let renderer = new THREE.WebGLRenderer({
	// various options here
	// render to the HTML element <canvas id="webgl-canvas"></canvas> in your page:
	canvas: document.getElementById("webgl-canvas"),
});

// create camera
// (there are other kinds of cameras available, but this is a typical example)
let camera = new THREE.PerspectiveCamera( 75, window.innerWidth / window.innerHeight, 0.1, 1000 );
// configure camera, e.g. move it back from the origin:
camera.position.z = 5;

let scene = new THREE.Scene();
// configure scene

// e.g. create a Mesh,
// from a Geometry (the actual shape, defined by built-in shapes, or by writing Buffers)
// and a Material (how it looks, defined by built-in Material types, or by writing Shaders)
// and add it to the scene:
let geometry = new THREE.BoxGeometry(1, 1, 1);
let material = new THREE.MeshPhongMaterial({
	color: "#fff",
	flatShading: true,
	shininess: 0
});
let cube = new THREE.Mesh(geometry, material);
scene.add(cube);

// e.g. add some lights to the scene:
let ambientLight = new THREE.AmbientLight("#666");
scene.add( ambientLight );

let directionalLight = new THREE.DirectionalLight("#fff", 0.5);
directionalLight.position.set(3, 2, 1);
scene.add( directionalLight );

function render() {
	// update members & properties of the scene here for animation
	// e.g.
	cube.rotation.x += 0.01;
	cube.rotation.y += 0.01;

	// now render the scene:
	renderer.render(scene, camera);
}
renderer.setAnimationLoop(render);
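Note that the rotation increments above are applied once per frame, so the animation runs faster on high-refresh-rate displays. A sketch of a frame-rate-independent variant (the 0.5 rad/s speed and the plain-object stand-in for the mesh are assumptions; in a real sketch the elapsed time could come from `THREE.Clock`'s `getDelta()`):

```javascript
// Frame-rate-independent update: scale changes by seconds elapsed since the last frame.
const SPEED = 0.5; // radians per second (an assumed value)

function update(object, dtSeconds) {
	object.rotation.x += SPEED * dtSeconds;
	object.rotation.y += SPEED * dtSeconds;
}

// plain-object stand-in for a THREE.Mesh, for illustration:
const obj = { rotation: { x: 0, y: 0 } };
update(obj, 1 / 60); // one frame at 60fps
```

Calling `update(obj, dt)` from inside `render` with the measured frame delta would keep the rotation speed constant regardless of display refresh rate.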

A typical resize function might look like this:

renderer.setSize( window.innerWidth, window.innerHeight );
window.addEventListener('resize', function() {
	const w = window.innerWidth, h = window.innerHeight;
	renderer.setSize(w, h);
	camera.aspect = w / h;
	camera.updateProjectionMatrix();
});

Live Coding Three.js? In VR?

There is no practical text input in VR.

From within VR, a code-oriented interface is almost unworkable. Some kind of visual interface would make more sense: editing in terms of structural components, intentions, relations, flows etc. rather than JS directly. In concept this is certainly feasible: A-Frame does the very same thing, using a DOM interface to generate Three.js code.
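To illustrate the A-Frame approach, a scene entity can be described declaratively as a tag with attributes rather than as imperative Three.js calls. A minimal sketch of generating such markup from structural data (the `entity` helper is hypothetical; `a-box` and its `color`/`position` attributes are real A-Frame vocabulary):

```javascript
// Hypothetical helper: render a structural description as a declarative tag,
// the way A-Frame describes Three.js scenes in the DOM.
function entity(tag, attrs) {
	const a = Object.entries(attrs)
		.map(([key, value]) => ` ${key}="${value}"`)
		.join("");
	return `<${tag}${a}></${tag}>`;
}

entity("a-box", { color: "#fff", position: "0 1 -3" });
// → '<a-box color="#fff" position="0 1 -3"></a-box>'
```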

Perhaps this could take a form that can still be code-edited from desktop experiences. That implies a projectional editor (https://www.martinfowler.com/bliki/ProjectionalEditing.html, https://en.wikipedia.org/wiki/Structure_editor).
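As a rough illustration of projecting a structural model into code (the spec shape and the `compile` function are assumptions, not an existing tool): the editor stores structure, and the textual Three.js code is just one possible projection of it.

```javascript
// Hypothetical structural model of a mesh, as a projectional editor might store it:
const meshSpec = {
	geometry: { type: "BoxGeometry", args: [1, 1, 1] },
	material: { type: "MeshPhongMaterial", args: { color: "#fff" } },
};

// One possible "projection": generate the equivalent Three.js source text.
function compile(spec) {
	const geo = `new THREE.${spec.geometry.type}(${spec.geometry.args.join(", ")})`;
	const mat = `new THREE.${spec.material.type}(${JSON.stringify(spec.material.args)})`;
	return `new THREE.Mesh(${geo}, ${mat})`;
}

compile(meshSpec);
// → 'new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1), new THREE.MeshPhongMaterial({"color":"#fff"}))'
```

A VR interface would manipulate `meshSpec` directly; a desktop editor could still view and edit the generated code.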

What level of abstraction?

The iframe option has the advantage of no leaky state (instead we would have to provide serialization/deserialization to preserve state if and as desired). Any preference from a VR perspective?
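On the state question: Three.js does provide `Object3D.toJSON()` and `THREE.ObjectLoader` for (de)serializing scene-graph objects, but mutable sketch state outside the scene graph would need its own round-trip. A minimal sketch of that, using plain objects as stand-ins for Three.js objects (the `captureState`/`restoreState` helpers are hypothetical):

```javascript
// Hypothetical capture/restore of the mutable state of a sketch, so it can be
// handed across an iframe reload as a JSON string.
function captureState(cube) {
	return JSON.stringify({
		rotation: { x: cube.rotation.x, y: cube.rotation.y },
	});
}

function restoreState(cube, json) {
	const state = JSON.parse(json);
	cube.rotation.x = state.rotation.x;
	cube.rotation.y = state.rotation.y;
}

// plain-object stand-ins for a THREE.Mesh:
const oldCube = { rotation: { x: 0.25, y: 0.5 } };
const newCube = { rotation: { x: 0, y: 0 } };
restoreState(newCube, captureState(oldCube));
// newCube.rotation now matches oldCube.rotation
```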
