Chapter 4. Graphics and Rendering in Three.js
In this chapter, we will cover the extensive set of features Three.js provides for drawing graphics and rendering scenes. If you are new to 3D programming, don't expect to comprehend all of the topics in this chapter right away. But if you take them one at a time and work through the code samples, you could be well on your way to building great WebGL sites using the power of Three.js.
Three.js has a rich graphics system, inspired by many 3D libraries that have come before and informed by the collective experience of its authors. Three.js provides the features one comes to expect from 3D libraries, and then some: 2D and 3D geometry built from polygonal meshes; a scene graph with hierarchical objects and transformations; materials, textures, and lights; real-time shadows; user-defined programmable shaders; and a flexible rendering system that enables multipass and deferred techniques for advanced special effects.
Geometry and Meshes
One of the major benefits of using Three.js over coding directly to the WebGL API is the work it saves us in creating and drawing geometric shapes. Recall from Chapter 2 the pages of code it took to create the shape and texture map data for a simple cube using WebGL buffers, and then the additional code required at drawing time in order for WebGL to move that data into its memory and actually draw with it. Three.js saves us all this grief by providing several ready-made geometry objects, including prebuilt shapes like cubes and cylinders, path-drawn shapes, extruded 2D geometry, and a user-extensible base class so that we can create our own. Let's explore these now.
Prebuilt Geometry Types
Three.js comes with many prebuilt geometry types that represent common shapes. These include simple solids such as cubes, spheres, and cylinders; more complex parametric shapes like extrusions and path-based shapes, toruses, and knots; flat 2D shapes rendered in 3D space, such as circles, squares, and rings; and even 3D extruded text generated from text strings. Three.js also supports drawing 3D points and lines. You can easily create most of these objects using a one-line constructor, though some require slightly more complex parameters and a little more code.
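For example, here is a minimal sketch using two of the one-line constructors (the class names match the Three.js release used in this book; newer releases rename CubeGeometry to BoxGeometry, and the scene object is assumed to exist):

// One-line geometry constructors: a 2 x 2 x 2 cube and a unit sphere
var cubeGeometry = new THREE.CubeGeometry(2, 2, 2);       // width, height, depth
var sphereGeometry = new THREE.SphereGeometry(1, 20, 20); // radius, segment counts
var material = new THREE.MeshPhongMaterial({ color: 0xffffff });

scene.add(new THREE.Mesh(cubeGeometry, material));
scene.add(new THREE.Mesh(sphereGeometry, material));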
To see Three.js prebuilt geometry in action, run the sample located in the Three.js project at examples/webgl_geometries.html, depicted in Figure 4-1. Each mesh object contains a different geometry type, with a reference texture map displaying how texture coordinates are generated for each. The texture comes courtesy of PixelCG Tips and Tricks, a nifty computer graphics how-to site. The scene is lit with a directional light to show the shading for each object.
Figure 4-1. Three.js built-in geometry demo. Pictured left to right and front to back: sphere, icosahedron, octahedron, tetrahedron; plane, cube, circle, ring, cylinder; lathe, torus, and torus knot; line drawing of x, y, z axes and up orientation vector
Paths, Shapes, and Extrusions
The Three.js Path, Shape, and ExtrudeGeometry classes provide many flexible ways to generate geometry; for example, you can create extruded objects from curves. Figure 4-2 shows an extrusion generated from a spline-based curve. To see it in action, run the sample under the Three.js project at examples/webgl_geometry_extrude_shapes.html. Another sample, examples/webgl_geometry_extrude_splines.html, allows you to interactively select from a variety of spline generation algorithms and even follow the spline curve using an animated camera. Combining splines with extrusions is a great technique for generating organic-looking shapes. Spline curves are described in detail in Chapter 5.
Figure 4-2. Spline-based extrusions in Three.js
The Shape classes can also be used to create flat 2D shapes or 3D extrusions of those shapes. Let's say you have an existing library of 2D polygon data (for example, geopolitical boundaries or vector clip art). You can fairly easily import that data into Three.js by using the Path class, which includes path-generation methods, such as moveTo() and lineTo(), that should be familiar to people with 2D drawing experience. (Essentially this is a 2D drawing API embedded in a 3D drawing library.) Why do this? Well, once you have your 2D shape, you can use it to create a flat mesh that lives in 3D space: it can be transformed like any other 3D object (translated, rotated, scaled); it can be painted with materials and lit and shaded like anything else in your scene. You can also extrude it to create a true 3D shape based on the 2D outline, as sketched below.
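Here is a minimal sketch of that workflow (the extrusion option names amount and bevelEnabled match the Three.js release used in this book; later releases renamed some of them):

// Build a 2D triangle outline with the path API
var shape = new THREE.Shape();
shape.moveTo(0, 0);
shape.lineTo(1, 0);
shape.lineTo(0.5, 1);
shape.lineTo(0, 0);

// Extrude the outline into a 3D solid half a unit deep
var geometry = new THREE.ExtrudeGeometry(shape, { amount: 0.5, bevelEnabled: false });
scene.add(new THREE.Mesh(geometry, new THREE.MeshPhongMaterial({ color: 0xff0000 })));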
The demo in the file examples/webgl_geometry_shapes.html, depicted in Figure 4-3, shows an excellent example of this capability. We can see the outline of the state of California, some simple polygons, and whimsical hearts and smiley faces rendered in several forms, including flat 2D meshes, extruded and skewed 3D meshes, and lines, all derived from path-based data.
Figure 4-3. Path-based extruded shapes in Three.js
The Geometry Base Class
The Three.js prebuilt geometry types are derived from the base class THREE.Geometry (src/core/Geometry.js). You can also use this class by itself to programmatically generate your own geometry. Take a look at the source code for the prebuilt types, located in the Three.js project under the folder src/extras/geometries/, to get a feel for how those classes implement geometry generation. To illustrate, let's take a quick look at one of the simpler objects, THREE.CircleGeometry. Example 4-1 lists the code for this object, in its entirety, which fits on a single page.
Example 4-1. Three.js circle geometry code
/**
 * @author hughes
 */

THREE.CircleGeometry = function ( radius, segments, thetaStart, thetaLength ) {

    THREE.Geometry.call( this );

    radius = radius || 50;

    thetaStart = thetaStart !== undefined ? thetaStart : 0;
    thetaLength = thetaLength !== undefined ? thetaLength : Math.PI * 2;
    segments = segments !== undefined ? Math.max( 3, segments ) : 8;

    var i, uvs = [],
    center = new THREE.Vector3(), centerUV = new THREE.Vector2( 0.5, 0.5 );

    this.vertices.push(center);
    uvs.push( centerUV );

    for ( i = 0; i <= segments; i ++ ) {

        var vertex = new THREE.Vector3();
        var segment = thetaStart + i / segments * thetaLength;

        vertex.x = radius * Math.cos( segment );
        vertex.y = radius * Math.sin( segment );

        this.vertices.push( vertex );
        uvs.push( new THREE.Vector2( ( vertex.x / radius + 1 ) / 2, ( vertex.y / radius + 1 ) / 2 ) );

    }

    var n = new THREE.Vector3( 0, 0, 1 );

    for ( i = 1; i <= segments; i ++ ) {

        var v1 = i;
        var v2 = i + 1;
        var v3 = 0;

        this.faces.push( new THREE.Face3( v1, v2, v3, [ n, n, n ] ) );
        this.faceVertexUvs[ 0 ].push( [ uvs[ i ], uvs[ i + 1 ], centerUV ] );

    }

    this.computeCentroids();
    this.computeFaceNormals();

    this.boundingSphere = new THREE.Sphere( new THREE.Vector3(), radius );

};

THREE.CircleGeometry.prototype = Object.create( THREE.Geometry.prototype );
The constructor for THREE.CircleGeometry generates a flat, circular shape in the XY plane; that is, all z values are set to zero. At the heart of this algorithm is the code to generate the vertex data for such a shape, located within the first for loop:
vertex.x = radius * Math.cos( segment );
vertex.y = radius * Math.sin( segment );
In reality, the 3D circle is just a fan of triangles radiating from the center. By supplying enough triangles, we can create the illusion of a smooth edge around the perimeter. See Figure 4-4.
Figure 4-4. Triangles making up THREE.CircleGeometry
The first loop just took care of computing the x and y vertex positions for the circumference of the circle. Now we have to create a face (polygonal shape) to represent each triangle, constructed of three vertices: the center, located at the origin, and two additional vertices positioned at the perimeter. The second for loop does that by creating and populating the array this.faces. Each face contains the indices for three vertices from the array this.vertices, indexed by indices v1, v2, and v3. Note that v3 is always equal to zero; that vertex corresponds to the origin. (You may remember the WebGL details from Chapter 2, where gl.drawElements() is used to render triangles using an indexed array. The same thing is going on here, handled under the covers by Three.js.)
We glossed over one detail in each of the loops: texture coordinate generation. WebGL doesn't know how to map the pixels of a texture map onto the triangles it draws without us telling it how. In a similar way to how we created the vertex values, the two for loops generate texture coordinates, also known as UV coordinates, and store them in this.faceVertexUvs.
Recall that texture coordinates are floating-point pairs defined for each vertex, with values typically ranging from 0 to 1. These values represent x, y offsets into the bitmap image data; the shader will use these values to get pixel information from the bitmap. We calculate the texture coordinates for the first two vertices in each triangle in a similar way to the vertex data, by using the cosine of the angle for the x value and the sine for the y value, but generating values in the range [0..1] by dividing the vertex values by the radius of the circle. The texture coordinate for the third vertex of each triangle, corresponding to the vertex at the origin, is simply the 2D center of the image (0.5, 0.5).
Note
Why UV? The letters U and V are used to denote the horizontal and vertical axes of a 2D texture map because X, Y, and Z are already used to denote the 3D axes of the object's coordinate system. For a complete exploration of the topic of UV coordinates and UV mapping, you can refer to the Wikipedia entry.
Once the vertex and UV data has been generated, Three.js has all it needs to render the geometry. The final lines of code in the THREE.CircleGeometry constructor are essentially doing bookkeeping, using helper functions supplied by the base geometry class. computeCentroids() determines the geometric center of the object by looping through all its vertices and averaging their positions.
computeFaceNormals() is very important, because the object's normal vectors, or normals, determine how it is shaded. For a flat circle, the normals for each face are perpendicular to the geometry. computeFaceNormals() easily determines this by computing a vector perpendicular to the plane defined by the three vertex positions making up each triangle of the circle. The face normal for a flat-shaded triangle is depicted in Figure 4-5.
Figure 4-5. Face normal for a flat-shaded triangle
Finally, the constructor initializes a bounding volume for the object, in this case a sphere, which is useful for picking, culling, and performing a number of optimizations.
BufferGeometry for Optimized Mesh Rendering
Three.js recently introduced an optimized version of geometry called THREE.BufferGeometry. THREE.BufferGeometry stores its data as typed arrays, avoiding the extra overhead of dealing with arrays of JavaScript numbers. This class is especially handy for static geometry, such as scene backgrounds and props, where you know the vertex values never change and the objects are never animated to move around the scene. If you know that to be true, you can create a THREE.BufferGeometry object, and Three.js will perform a series of optimizations that render these objects really fast.
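As an illustrative sketch only: the buffer attribute API has changed across Three.js releases, so the addAttribute()/BufferAttribute style below may not match your version exactly (current releases spell it setAttribute()). The core idea, a triangle whose positions live in a typed array, looks like this:

// A single triangle stored directly in a Float32Array
var positions = new Float32Array([
    0, 0, 0,  // vertex 0
    1, 0, 0,  // vertex 1
    0, 1, 0   // vertex 2
]);

var geometry = new THREE.BufferGeometry();
geometry.addAttribute('position', new THREE.BufferAttribute(positions, 3));

scene.add(new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ color: 0x00ff00 })));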
Importing Meshes from Modeling Packages
So far we have looked at creating geometry in code. But many, if not most, applications will not be creating geometry programmatically; instead, they will be loading 3D models created by professional modeling packages such as 3ds Max, Maya, and Blender.
Three.js has several utilities to convert and/or load model files. Let's look at one example of loading a mesh, including its geometry and materials. Run the file examples/webgl_loader_obj_mtl.html under the Three.js project. You will see the model shown in Figure 4-6.
The male figure depicted here was imported via the Wavefront OBJ format (.OBJ file extension). This is a popular text-based format exported by many modeling packages. OBJ files are simple and limited, containing only geometry information: vertices, normals, and texture coordinates. Wavefront developed a companion file format for materials, MTL, which can be used to associate materials with the objects in the OBJ file.
The source code for the Three.js OBJ format loader (with materials) is located in examples/js/loaders/OBJMTLLoader.js. Take a look at how it works and you will see that, as with the prebuilt geometry and shape classes, Three.js file loaders create THREE.Geometry objects to represent the geometry. The MTL parser translates text options in the MTL file into materials Three.js understands. The two are then combined into a THREE.Mesh object suitable for adding to the scene.
Three.js has sample loaders for many different file formats. While most formats include support for defining objects with geometry and materials, many go beyond that, representing entire scenes, cameras, lights, and animations. We will cover those formats (and the tools to author them) in detail in Chapter 8, which is devoted to the content creation pipeline.
Figure 4-6. Mesh loaded from a file in Wavefront OBJ format
Note
Most of the file loading code that comes with Three.js is not in the core library, but rather included with the examples. You will have to include these loaders separately in your projects. Unless otherwise indicated, the file loader utilities are covered under the same licensing as the library, and you can feel free to use them in your work.
The Scene Graph and Transform Hierarchy
WebGL has no built-in notion of 3D scene structure; it is simply an API for drawing to the canvas. It is up to the application to provide scene structure. Three.js defines a model for structuring scenes based on the well-established concept of a scene graph. A scene graph is a set of 3D objects stored in a hierarchical parent/child relationship, with the base of the scene graph often referred to as the root. The application renders the scene graph by rendering the root and then, recursively, its descendants.
Using Scene Graphs to Manage Scene Complexity
Scene graphs are particularly useful for representing complex objects in a hierarchy. Think of a robot, a vehicle, or a solar system: each of these has several individual parts (limbs, wheels, satellites) with their own behaviors. The scene graph allows these objects to be treated as either individual parts or as entire groups, as needed. This is not just for organizational convenience: it also provides a very important capability known as transform hierarchy, where an object's descendants inherit its 3D transformation information (translation, rotation, scale). For example, say you are animating a car driving along a path. The car body moves along the path, but the wheels also rotate independently. By making the wheels children of the car body, your code can dynamically move the car along the path, and the wheels will move through 3D space with it; there is no need to separately animate the movement of the wheels, just their rotation. A minimal sketch of this setup follows.
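In the sketch, wheelGeometry and wheelMaterial are assumed to be defined elsewhere, and all names are illustrative:

// Parent the wheel to the car body so it inherits the body's transform
var carBody = new THREE.Object3D();
var frontWheel = new THREE.Mesh(wheelGeometry, wheelMaterial);
frontWheel.position.set(1.5, -0.5, 1); // offset relative to the car body
carBody.add(frontWheel);
scene.add(carBody);

// Per frame: move the body, and the wheel follows automatically...
carBody.position.x += 0.1;
// ...so we only need to spin the wheel about its own axis
frontWheel.rotation.x += 0.2;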
Note
The use of the word graph in the Three.js scene graph is somewhat loose, technically. In 3D rendering, the scene graph usually refers to a directed acyclic graph (DAG), a mathematical term that denotes a set of nodes in a parent/child relationship in which any object can have multiple parents. In the Three.js scene graph, objects can have only one parent. While it is technically correct to call the Three.js hierarchy a graph, it would more precisely be called a tree. For more information on graphs in mathematics, refer to the Wikipedia entry.
Scene Graphs in Three.js
The foundation object of the Three.js scene graph is THREE.Object3D (see src/core/Object3D.js under the Three.js project sources). It is used both as the base class for visual types such as meshes, lines, and particle systems, as well as on its own to group other objects into a scene graph hierarchy.
Each Object3D carries its own transform data, represented in the properties position (translation), rotation, and scale. By setting these, you can move, rotate, and scale the object. If the object has descendants (children and their children), those will inherit these transformations. If those descendants' transform properties have been changed, those changes will combine with those of the ancestors all the way down the hierarchy. Let's look at an example. The page depicted in Figure 4-7 shows a very simple transform hierarchy. cube is a direct descendant of cubeGroup; sphereGroup is also a direct descendant of cubeGroup (and therefore a sibling of cube); and sphere and cone are descendants of sphereGroup.
Run this sample by loading the example file Chapter 4/threejsscene.html. You will see the cube, sphere, and cone each rotating in place. You can interact with this scene: clicking and dragging the mouse in the content area rotates the entire scene; dragging the slider below the content area scales the scene.
Figure 4-7. Three.js scene graph and transform hierarchy
Example 4-2 shows the relevant code for creating and manipulating the scene graph with transform hierarchy. First, to construct the scene: we create a new Object3D, cubeGroup, that will act as the root of the scene graph. We then add the cube mesh directly to it, as well as another Object3D: sphereGroup. The sphere and cone are added to sphereGroup. We also move the cone a bit up and away from the sphere by setting its position property.
Now for the animations: we see in function animate() that when sphereGroup rotates, the sphere rotates, and the cone seems to orbit around the sphere and traverse through space. Note that we did not write any code to individually rotate the sphere mesh or move the cone through space every animation frame; because those objects inherit their transform data from sphereGroup, those operations are taken care of for us automatically. In a similar way, interacting with the scene to rotate and scale it is trivially simple: we just set the rotation and scale properties, respectively, of cubeGroup, and these changes are propagated to its descendants automatically by Three.js.
Example 4-2. A scene with transform hierarchy
function animate() {

    var now = Date.now();
    var deltat = now - currentTime;
    currentTime = now;
    var fract = deltat / duration;
    var angle = Math.PI * 2 * fract;

    // Rotate the cube about its Y axis
    cube.rotation.y += angle;

    // Rotate the sphere group about its Y axis
    sphereGroup.rotation.y -= angle / 2;

    // Rotate the cone about its X axis (tumble forward)
    cone.rotation.x += angle;
}

function createScene(canvas) {

    // Create the Three.js renderer and attach it to our canvas
    renderer = new THREE.WebGLRenderer( { canvas: canvas, antialias: true } );

    // Set the viewport size
    renderer.setSize(canvas.width, canvas.height);

    // Create a new Three.js scene
    scene = new THREE.Scene();

    // Add a camera so we can view the scene
    camera = new THREE.PerspectiveCamera( 45, canvas.width / canvas.height, 1, 4000 );
    camera.position.z = 10;
    scene.add(camera);

    // Create a group to hold all the objects
    cubeGroup = new THREE.Object3D;

    // Add a directional light to show off the objects
    var light = new THREE.DirectionalLight( 0xffffff, 1.5);

    // Position the light out from the scene, pointing at the origin
    light.position.set(.5, .2, 1);
    cubeGroup.add(light);

    // Create a textured phong material for the cube
    // First, create the texture map
    var mapUrl = "../images/ash_uvgrid01.jpg";
    var map = THREE.ImageUtils.loadTexture(mapUrl);

    var material = new THREE.MeshPhongMaterial({ map: map });

    // Create the cube geometry
    var geometry = new THREE.CubeGeometry(2, 2, 2);

    // And put the geometry and material together into a mesh
    cube = new THREE.Mesh(geometry, material);

    // Tilt the mesh toward the viewer
    cube.rotation.x = Math.PI / 5;
    cube.rotation.y = Math.PI / 5;

    // Add the cube mesh to our group
    cubeGroup.add( cube );

    // Create a group for the sphere
    sphereGroup = new THREE.Object3D;
    cubeGroup.add(sphereGroup);

    // Move the sphere group up and back from the cube
    sphereGroup.position.set(0, 3, -4);

    // Create the sphere geometry
    geometry = new THREE.SphereGeometry(1, 20, 20);

    // And put the geometry and material together into a mesh
    sphere = new THREE.Mesh(geometry, material);

    // Add the sphere mesh to our group
    sphereGroup.add( sphere );

    // Create the cone geometry
    geometry = new THREE.CylinderGeometry(0, .333, .444, 20, 5);

    // And put the geometry and material together into a mesh
    cone = new THREE.Mesh(geometry, material);

    // Move the cone up and out from the sphere
    cone.position.set(1, 1, -.667);

    // Add the cone mesh to our group
    sphereGroup.add( cone );

    // Now add the group to our scene
    scene.add( cubeGroup );
}

function rotateScene(deltax) {
    cubeGroup.rotation.y += deltax / 100;
    $("#rotation").html("rotation: 0," + cubeGroup.rotation.y.toFixed(2) + ",0");
}

function scaleScene(scale) {
    cubeGroup.scale.set(scale, scale, scale);
    $("#scale").html("scale: " + scale);
}
Representing Translation, Rotation, and Scale
In Three.js, transformations are done via 3D matrix math, so, not surprisingly, the components of Object3D's transform are 3D vectors: position, rotation, and scale. position should be fairly self-explanatory: its x, y, and z components define a vector offset from the object's origin. scale is also straightforward: its x, y, and z values are used to multiply the transformation matrix's scale by that amount in each of the three dimensions.
The components of rotation require a little more explanation: each of x, y, and z defines a rotation around that axis; for example, a value of (0, Math.PI / 2, 0) is equivalent to a 90-degree rotation around the object's y-axis. (Note that angles are specified in radians, where 2 * pi radians is equivalent to 360 degrees.) This type of rotation, a combination of angles about the x-, y-, and z-axes, is known as a Euler angle. I presume Mr.doob chose Eulers as the base representation because they are so intuitive and easy to work with; however, they are not without their mathematical issues in practice. For that reason, Three.js also allows you to use quaternions, another form of specifying angles that is free from Euler problems, but requires more programming work. Quaternions are accurate, but not intuitive to work with.
Under the hood, Three.js uses the transform properties of each Object3D to construct a matrix. Objects that have multiple ancestors have their matrices multiplied by those of their ancestors in a recursive manner; that is, Three.js traverses all the way down to each leaf in its scene graph tree to calculate the transform matrix for each object every time the scene is rendered. This can get expensive for deep and complex scene graphs. Three.js defines a matrixAutoUpdate property for Object3D, which can be set to false to avoid this performance overhead. However, this feature has the potential to cause subtle bugs ("Why isn't my animation updating?"), so it should be used with great care.
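For a static prop, the optimization might look like this sketch (prop is an assumed mesh name):

// Static scenery: bake the transform into the matrix once, then freeze it
prop.position.set(10, 0, -20);
prop.updateMatrix();           // composes position/rotation/scale into prop.matrix
prop.matrixAutoUpdate = false; // skip per-frame matrix recomputation

// If the object ever moves again, set its transform and call updateMatrix() again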
Materials
The visual shapes we see in WebGL applications have surface properties such as color, shading, and textures (bitmaps). Creating those properties using the low-level WebGL API entails writing GLSL shader code, which requires advanced programming skills, even for the simplest visual effects. Lucky for us, Three.js comes with ready-to-go GLSL code, packaged into objects called materials.
Standard Mesh Materials
Recall that WebGL requires the developer to supply a programmable shader in order to draw each object. You may have noticed the absence of GLSL shader source code thus far in this chapter. That is for a very good reason: Three.js does the shader coding for us, with a library of predefined GLSL code suitable for a variety of uses out of the box.
Traditional scene graph libraries and popular modeling packages typically represent shaders via the concept of materials. A material is an object that defines the surface properties of a 3D mesh, point, or line primitive, including color, transparency, and shininess. Materials may or may not also include texture maps, that is, bitmaps wrapped onto the surface of the object. Material properties combine with the vertex data of the mesh, lighting data in the scene, and potentially the camera position and other global properties to determine the final rendered appearance of each object.
Three.js supports common material types in the prebuilt classes MeshBasicMaterial, MeshPhongMaterial, and MeshLambertMaterial. (The Mesh prefix denotes that these material types should be used in combination with the mesh object, as opposed to lines or particles; there are additional material types suitable for use with other object types. See the Three.js objects that live in the project source under src/materials for a complete and up-to-date set.) These material types implement, respectively, three well-known material techniques (a short code sketch follows the list):
- Unlit (also known as prelit): With this material type, only the textures, colors, and transparency values are used to render the surface of the object. There is no contribution from lights in the scene. This is a great material type to use for flat-looking renderings and/or for drawing simple geometric objects with no shading. It is also valuable if the lighting for objects has been precomputed into the textures prior to runtime (for instance, by a 3D modeling tool with a light "baking" utility), and thus does not have to be computed by the renderer.

- Phong shading: This material type implements a simple, fairly realistic-looking shading model with high performance. It has become the go-to material type for achieving a classic shaded look quickly and easily and is still used in many games and applications. Phong-shaded objects will show brightly lit areas (specular reflections) where light hits directly, will light well along any edges that mostly face the light source, and will darkly shade areas where the edge of the object faces away from the light source.

- Lambertian reflectance: In Lambert shading, the apparent brightness of the surface to an observer is the same regardless of the observer's angle of view. This works really well for clouds, which broadly diffuse the light that strikes them, or satellites such as moons that have high albedo (reflect light brightly off the surface).
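For illustration, here is a minimal sketch creating one of each (the texture path is hypothetical, and THREE.ImageUtils.loadTexture matches the release used in this book; newer releases use THREE.TextureLoader):

var map = THREE.ImageUtils.loadTexture("../images/moon_1024.jpg"); // hypothetical path

// Unlit: textures and colors only; no contribution from scene lights
var basicMaterial = new THREE.MeshBasicMaterial({ map: map });

// Lambert: diffuse shading only; good for matte surfaces such as the moon
var lambertMaterial = new THREE.MeshLambertMaterial({ map: map });

// Phong: diffuse shading plus specular highlights
var phongMaterial = new THREE.MeshPhongMaterial({ map: map, specular: 0x222222 });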
To get a feel for the Three.js material types, open the lab in the book example code, located in the file Chapter 4/threejsmaterials.html. The page, shown in Figure 4-8, displays a brightly lit sphere with a texture map of the moon. The moon is a good object to use here to illustrate differences between the various material types. Use the radio buttons to switch between Phong and Lambert, for example, to see how much more appropriate Lambert shading looks than Phong for this object. Now use the Basic (unlit) shader to see how the sphere appears rendered with just the texture and no lighting applied.
Try changing the diffuse and specular colors to see those effects. The material's diffuse color specifies how much the object reflects lighting sources that cast rays in a direction (that is, directional, point, and spotlights; see the discussion on lighting later in this chapter). The specular color combines with scene lights to create reflected highlights from any of the object's vertices facing toward light sources. (Note that specular highlights will be visible only when the Phong material is used; the other material types do not support specular color.) Also, try turning the texture map off with the checkbox so that you can see the effects of the material on simple sphere geometry. Finally, check the wireframe box to see how various changes affect the wireframe rendering.
Figure 4-8. Three.js standard mesh material types: Basic (Unlit), Phong, and Lambert
Adding Realism with Multiple Textures
The previous example shows how a texture map can be used to define the surface look for an object. Most Three.js material types actually support applying multiple textures to the object to create more realistic effects. The idea behind using multiple textures in a single material, or multitexturing, is to provide a computationally cheap way to add realism, versus using more polygons or rendering the object with multiple render passes. Here are a few examples to illustrate the more common multitexturing techniques supported in Three.js.
Bump maps
A bump map is a bitmap used to displace the surface normal vectors of a mesh to, as the name suggests, create an apparently bumpy surface. The pixel values of the bitmap are treated as heights rather than color values. For example, a pixel value of zero can mean no displacement from the surface, and nonzero values can mean positive displacement away from the surface. Typically, single-channel black-and-white bitmaps are used for efficiency, though full RGB bitmaps can be used to provide greater detail, since they can store much larger values. The reason that bitmaps are used instead of 3D vectors is that they are more compact and provide a fast way to calculate normal displacement inside the shader code. To see bump maps in action, open the example Chapter 4/threejsbumpmap.html, depicted in Figure 4-9. Turn the main moon texture on and off, and play with the diffuse and specular color values to see different results. You will probably find that, while the effect can be really cool, it can also yield unpleasant artifacts. Still, bump maps provide a cheap way to add realistic detail.
Figure 4-9. Bump mapping
Bump maps are trivially easy to use in Three.js. Simply provide a valid texture in the bumpMap property of the parameter object you pass to the THREE.MeshPhongMaterial constructor:

material = new THREE.MeshPhongMaterial({ map: map, bumpMap: bumpMap });
Normal maps
Normal maps provide a way to get even more surface detail than bump maps, still without using extra polygons. Normal maps tend to be larger and require more processing power than bump maps, but the extra detail can be worth it. Normal maps work by encoding actual vertex normal vector values into bitmaps as RGB data, typically at a much higher resolution than the associated mesh vertex data. The shader incorporates the normal information into its lighting calculations (along with current camera and light source values) to provide apparent surface detail. Open the example Chapter 4/threejsnormalmap.html file to see the effect of a normal map. The normal map is depicted in the swatch on the bottom right (see Figure 4-10). Note the outlines of the Earth's elevation features. Now toggle the normal map on and off to see how much detail it is providing; it is quite astonishing how much detail a bitmap can add to a simple object like a sphere.
Figure 4-10. Normal-mapped Earth
Normal maps are also easy to use in Three.js. Simply provide a valid texture in the normalMap property of the parameter object you pass to the THREE.MeshPhongMaterial constructor:

material = new THREE.MeshPhongMaterial({ map: map, normalMap: normalMap });
Environment maps
Environment maps provide another way to apply extra textures to increase realism. Instead of adding surface detail through apparent changes to the geometry, as with bump maps and normal maps, environment maps simulate reflection of objects in the surrounding environment.
Open Chapter 4/threejsenvmap.html to see a demonstration of environment mapping. Drag the mouse in the content area to rotate the scene, or use the mouse wheel to zoom in and out. Observe how the image on the surface of the sphere appears to reflect the sky background surrounding it (see Figure 4-11). In fact, it does no such thing; it is simply rendering pixels from the same texture that is mapped onto the inside of the cube used for the scene's background. The trick here is that the texture being used on the sphere's material is a cube texture: a texture map made up of six individual bitmaps stitched together to form a contiguous image on the inside of a cube. This particular cube texture has been created to form a sky background panorama. Take a look at the individual files that make up this skybox in the folder images/cubemap/skybox/ to see how it is constructed. This type of environment mapping is called cubic environment mapping, because it employs cube textures.
Figure 4-11. Cubic environment maps for realistic scene backgrounds and reflection effects
Using cube textures in Three.js is slightly more involved than using bump or normal maps. First, we need to create a cube texture instead of a regular texture. We do this with the Three.js utility ImageUtils.loadTextureCube(), passing it URLs for the six individual image files. Then, we set this as the value of the envMap parameter of the material when calling the constructor. We also specify a reflectivity value defining how much of the cube texture will be "reflected" on the material when the object is rendered. In this example, we supply a value slightly higher than the default of 1, to make sure the environment map really stands out.
var path = "../images/cubemap/skybox/";

var urls = [ path + "px.jpg", path + "nx.jpg",
             path + "py.jpg", path + "ny.jpg",
             path + "pz.jpg", path + "nz.jpg" ];

envMap = THREE.ImageUtils.loadTextureCube( urls );

materials["phong-envmapped"] = new THREE.MeshBasicMaterial(
    { color: 0xffffff, envMap: envMap, reflectivity: 1.3 } );
There is more to be done. In order for this to be a realistic effect, the reflected bitmap needs to correspond to the surrounding environment. To make that happen, we create a skybox, that is, a large background cube textured from the inside with the same bitmap images representing a panoramic sky. This in itself could be a lot of work but, thankfully, Three.js has a built-in helper that does it for us. In addition to its prebuilt standard materials Basic, Phong, and Lambert, Three.js includes a library of utility shaders, contained in the global THREE.ShaderLib. We simply create a mesh with cube geometry, and as the material we use the Three.js "cube" shader defined in the library. It takes care of rendering the inside of the cube using the same texture as we used for the environment map.
// Create the skybox
var shader = THREE.ShaderLib[ "cube" ];
shader.uniforms[ "tCube" ].value = envMap;

var material = new THREE.ShaderMaterial( {

    fragmentShader: shader.fragmentShader,
    vertexShader: shader.vertexShader,
    uniforms: shader.uniforms,
    side: THREE.BackSide

} ),

mesh = new THREE.Mesh( new THREE.CubeGeometry( 500, 500, 500 ), material );
scene.add( mesh );
Lights
Lights illuminate objects in the 3D scene. Three.js defines several built-in light classes that correspond to those typically found in modeling tools and other scene graph libraries. The most commonly used light types are directional lights, point lights, spotlights, and ambient lights.
- Directional lights: Represent a light source that casts parallel rays in a particular direction. They have no position, only a direction, color, and intensity. (In fact, in Three.js, directional lights do have a position, but it is used only to calculate the light's direction based on the position and a second vector, the target position. This is a clumsy and counterintuitive syntax that I hope Mr.doob someday fixes.)

- Point lights: Have a position but no direction; they cast their light in all directions from their position, over a given distance.

- Spotlights: Have a position and a direction. They also have parameters defining the size (angle) of the spotlight's inner and outer cones, and a distance over which they illuminate.

- Ambient lights: Have no position or direction. They illuminate a scene equally throughout.
All Three.js light types support the common properties intensity, which defines the light's strength, and color, an RGB value.

Lights do not do their job on their own; their values combine with certain properties of materials to define an object's ultimate surface appearance. MeshPhongMaterial and MeshLambertMaterial define the following properties:
- color: Also known as the diffuse color, this specifies how much the object reflects lighting sources that cast rays in a direction (i.e., directional, point, and spotlights).

- ambient: The amount of ambient scene lighting reflected by the object.

- emissive: This material property defines the color an object emits on its own, irrespective of light sources in the scene.
MeshPhongMaterial also supports a specular color, which combines with scene lights to create reflected highlights from the object's vertices that are facing toward light sources.

Remember that MeshBasicMaterial ignores lights completely.
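Put together, a lit material might be configured as in this sketch (note that the ambient material property exists in the Three.js release used in this book but was removed from later releases):

var material = new THREE.MeshPhongMaterial({
    color: 0xffffff,    // diffuse color
    ambient: 0x888888,  // response to ambient lights (version-dependent)
    emissive: 0x111111, // color emitted regardless of scene lights
    specular: 0x333333  // highlight color (Phong only)
});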
Figure 4-12 depicts a lighting experiment built with the basic Three.js light types. Open the file Chapter 4/threejslights.html to run it. The scene contains four lights, one of each type, and displays a simple black-and-white textured ground plane and three plain white geometry objects to illustrate the effects of the various lights. The color picker controls on the page allow you to interactively change the color of each light. Set a light's color to black, and it will turn the light off completely. Drag the mouse inside the content area to rotate around the scene and see the effects of the lights on various parts of the model.
Figure 4-12. Directional, point, spot, and ambient lights
The following code listing shows the light setup code. The white directional light positioned in front of the scene lights bright white areas on the front of the geometry objects. The blue point light illuminates from behind the model; note the blue areas on the floor to the back of the object. The green spotlight casts its cone toward the floor near the front of the scene, as defined by spotLight.target.position. Finally, the ambient light provides a small amount of illumination to all objects in the scene equally. Play with the controls and inspect the model from all sides to see the individual and combined effects of the lights.
// Create and add all the lights
directionalLight.position.set(.5, 0, 3);
root.add(directionalLight);

pointLight = new THREE.PointLight (0x0000ff, 1, 20);
pointLight.position.set(-5, 2, -10);
root.add(pointLight);

spotLight = new THREE.SpotLight (0x00ff00);
spotLight.position.set(2, 2, 5);
spotLight.target.position.set(2, 0, 4);
root.add(spotLight);

ambientLight = new THREE.AmbientLight ( 0x888888 );
root.add(ambientLight);
Note
At this juncture, here is a friendly reminder about what is going on. As with most everything else in WebGL, lights are an artificially created construct. WebGL knows only about buffers and shaders; developers need to synthesize lighting effects by writing shader code. Three.js offers an astounding set of material and lighting capabilities, all the more incredible when you realize that it was written in JavaScript. Of course, none of this would be possible if WebGL didn't give us access to the GPU to create these amazing effects in the first place.
Shadows
For years, designers have used shadows to add an extra visual cue that enhances realism. Typically these are faked, prerendered affairs, and moving the light source or any of the shadowed objects destroys the illusion. However, Three.js allows us to render shadows in real time based on the current positions of the lights and objects.
The example in the file Chapter 4/threejsshadows.html demonstrates how to add real-time shadows to a scene. Refer to Figure 4-13: the geometry casts shadows onto the ground plane based on a spotlight positioned above the ground and in front of the scene. Note how the shadow follows the shape of the rotating cube. Also, as the floor rotates, the shadow does not move along with it. If the shadows were faked with prerendering, the shadow would stay "glued" to the floor and would not rotate along with the cube. Play with the light controls, in particular the spotlight, to see how the shadow changes dynamically.
Figure 4-13. Using a spotlight and shadow map to cast real-time shadows
Three.js supports shadows using a technique called shadow mapping. With shadow mapping, the renderer maintains an additional texture map, to which it renders the shadowed areas and which it combines with the final image in its fragment shaders. So, enabling shadows in Three.js requires a few steps:
- Enable shadow mapping in the renderer.

- Enable shadows and set shadow parameters for the lights that cast shadows. Both the THREE.DirectionalLight type and the THREE.SpotLight type support shadows.

- Indicate which geometry objects cast and receive shadows.
Let's take a look at how this is done in code. Example 4-3 shows the code added to createScene() to render shadows.
Example 4-3. Shadow mapping in Three.js
var SHADOW_MAP_WIDTH = 2048, SHADOW_MAP_HEIGHT = 2048;

function createScene(canvas) {

    // Create the Three.js renderer and attach it to our canvas
    renderer = new THREE.WebGLRenderer( { canvas: canvas, antialias: true } );

    // Set the viewport size
    renderer.setSize(canvas.width, canvas.height);

    // Turn on shadows
    renderer.shadowMapEnabled = true;
    renderer.shadowMapType = THREE.PCFSoftShadowMap;

    // Create a new Three.js scene
    scene = new THREE.Scene();

    // Add a camera so we can view the scene
    camera = new THREE.PerspectiveCamera( 45, canvas.width / canvas.height, 1, 4000 );
    camera.position.set(-2, 6, 12);
    scene.add(camera);

    // Create a group to hold all the objects
    root = new THREE.Object3D;

    // Add a directional light to show off the object
    directionalLight = new THREE.DirectionalLight( 0xffffff, 1);

    // Create and add all the lights
    directionalLight.position.set(.5, 0, 3);
    root.add(directionalLight);

    spotLight = new THREE.SpotLight (0xffffff);
    spotLight.position.set(2, 8, 15);
    spotLight.target.position.set(-2, 0, -2);
    root.add(spotLight);

    spotLight.castShadow = true;

    spotLight.shadowCameraNear = 1;
    spotLight.shadowCameraFar = 200;
    spotLight.shadowCameraFov = 45;

    spotLight.shadowDarkness = 0.5;

    spotLight.shadowMapWidth = SHADOW_MAP_WIDTH;
    spotLight.shadowMapHeight = SHADOW_MAP_HEIGHT;

    ambientLight = new THREE.AmbientLight ( 0x888888 );
    root.add(ambientLight);

    // Create a group to hold the spheres
    group = new THREE.Object3D;
    root.add(group);

    // Create a texture map
    var map = THREE.ImageUtils.loadTexture(mapUrl);
    map.wrapS = map.wrapT = THREE.RepeatWrapping;
    map.repeat.set(8, 8);

    var color = 0xffffff;
    var ambient = 0x888888;

    // Put in a ground plane to show off the lighting
    geometry = new THREE.PlaneGeometry(200, 200, 50, 50);
    var mesh = new THREE.Mesh(geometry, new THREE.MeshPhongMaterial({color:color, ambient:ambient, map:map, side:THREE.DoubleSide}));
    mesh.rotation.x = -Math.PI / 2;
    mesh.position.y = -4.02;

    // Add the mesh to our group
    group.add( mesh );
    mesh.castShadow = false;
    mesh.receiveShadow = true;

    // Create the cube geometry
    geometry = new THREE.CubeGeometry(2, 2, 2);

    // And put the geometry and material together into a mesh
    mesh = new THREE.Mesh(geometry, new THREE.MeshPhongMaterial({color:color, ambient:ambient}));
    mesh.position.y = 3;
    mesh.castShadow = true;
    mesh.receiveShadow = false;

    // Add the mesh to our group
    group.add( mesh );

    // Save this one away so we can rotate it
    cube = mesh;

    // Create the sphere geometry
    geometry = new THREE.SphereGeometry(Math.sqrt(2), 50, 50);

    // And put the geometry and material together into a mesh
    mesh = new THREE.Mesh(geometry, new THREE.MeshPhongMaterial({color:color, ambient:ambient}));
    mesh.position.y = 0;
    mesh.castShadow = true;
    mesh.receiveShadow = false;

    // Add the mesh to our group
    group.add( mesh );

    // Create the cylinder geometry
    geometry = new THREE.CylinderGeometry(1, 2, 2, 50, 10);

    // And put the geometry and material together into a mesh
    mesh = new THREE.Mesh(geometry, new THREE.MeshPhongMaterial({color:color, ambient:ambient}));
    mesh.position.y = -3;
    mesh.castShadow = true;
    mesh.receiveShadow = false;

    // Add the mesh to our group
    group.add( mesh );

    // Now add the group to our scene
    scene.add( root );
}
First, we enable shadows in the renderer by setting renderer.shadowMapEnabled to true and setting its shadowMapType property to THREE.PCFSoftShadowMap. Three.js supports three different types of shadow mapping algorithms: basic, PCF (for "percentage closer filtering"), and PCF soft shadows. Each algorithm provides increasing realism, at the expense of higher complexity and slower performance. Try experimenting with this sample by changing the shadowMapType to THREE.BasicShadowMap and THREE.PCFShadowMap and take a look at the results; shadow quality degrades noticeably with the lower-quality settings. But you may need to go that route for performance if your scenes are complex.
Next, we need to enable shadow casting for the spotlight. We set its castShadow property to true. We also set several parameters required by Three.js. Three.js renders shadows by casting a ray from the position of the light toward its target object. Essentially, it treats the spotlight as another "camera" for rendering the scene from that position. So we must set camera-like parameters, including near and far clipping planes and field of view. The near and far values are very much dependent on the size of the scene and objects, so we chose fairly small values for both. The field of view was determined empirically. We also provide a darkness value for the shadow; the Three.js default of 0.5 is suitable for this application. Then, we set properties that determine the size of the Three.js-generated shadow map. The shadow map is an additional bitmap created by Three.js into which it will render the shadowed areas and ultimately blend with the final rendered image of each object. Our values for SHADOW_MAP_WIDTH and SHADOW_MAP_HEIGHT are 2,048, which is much higher than the Three.js default of 512. This produces very smooth shadows; lower values will yield more jagged results. Experiment with this value in the example to see how lower-resolution shadow maps affect shadow quality.
Finally, we must tell Three.js which objects cast and receive shadows. By default, Three.js meshes do not cast or receive shadows, so we must set this explicitly. In this example, we want the solid geometries to cast shadows onto the floor, and the floor to receive the shadows. So, for the floor we set mesh.castShadow to false and mesh.receiveShadow to true; for the cube, sphere, and cone we set mesh.castShadow to true and mesh.receiveShadow to false.
As a finishing touch, we would like the intensity of the shadow to correspond to the brightness of the spotlight casting it. However, Three.js shadow mapping does not automatically take into account the brightness of the light sources when rendering shadows. Rather, it uses the light's shadowDarkness property. So, as the color of the spotlight is updated via the user interface, we need to update shadowDarkness ourselves. The following fragment shows the code for the helper function setShadowDarkness(), which calculates a new value for the shadow darkness based on the average brightness of the light color's red, green, and blue components. As you change the spotlight's color in the demo to a darker value, you will see the shadow fade away.
function setShadowDarkness(light, r, g, b) {

    r /= 255;
    g /= 255;
    b /= 255;

    var avg = (r + g + b) / 3;
    light.shadowDarkness = avg * 0.5;
}
Note
Real-time shadows are a fantastic enhancement to the WebGL visual experience, and Three.js makes them fairly easy to work with. However, they come at a cost. First, the shadow map, which is just another texture map, requires additional graphics memory; for a 2,048 x 2,048 map, that amounts to an additional 4 MB. See if you can get away with smaller shadow map sizes and still get the desired visual effect. Also, depending on the graphics hardware being used, rendering off-screen to the shadow map can introduce extra processing overhead that slows down the frame rate considerably. So, you must take care when using this feature. Be prepared to profile and, potentially, fall back to another solution that doesn't require real-time shadows.
Shaders
Three.js provides a powerful set of materials out of the box, implemented via predefined GLSL shaders included with the library. These shaders were developed to support commonly used shading styles, such as unlit, Phong, and Lambert. But there are many other possibilities. In the general case, materials can implement a limitless variety of effects, can use many and varied properties, and can get arbitrarily complex. For example, a shader simulating grass blowing in the wind might have parameters that determine the height and thickness of the grass and the wind speed and direction.
As computer graphics evolved, and production values rose over the last two decades (originally for film special effects and later for real-time video games), shading started looking more like a general-purpose programming problem than an art production exercise. Instead of trying to predict every potential combination of material properties and code them into a runtime engine, the industry banded together to create programmable pipeline technology, known as programmable shaders, or just shaders. Shaders allow developers to write code that implements complex effects on a per-vertex and per-pixel basis in a C-style language compiled for execution on the GPU. Using programmable shaders, developers can create highly realistic visuals with high performance, freed from the constraints of predefined material and lighting models.
The ShaderMaterial Class: Roll Your Own
GL Shading Language (GLSL) is the shading language developed for use with OpenGL and OpenGL ES (the basis for the WebGL API). GLSL source code is compiled and executed for use with WebGL via methods of the WebGL context object. Three.js hides GLSL under the covers for us, allowing us to completely bypass shader programming if we so choose. For many applications, the prebuilt material types suffice. But if our application needs a visual effect that is not supplied out of the box, Three.js also allows us to write custom GLSL shaders using the class THREE.ShaderMaterial.
Figure 4-14 shows an example of ShaderMaterial in action. This example, which can be found under the Three.js project tree at examples/webgl_materials_shaders_fresnel.html, demonstrates a Fresnel shader. Fresnel shading is used to simulate the reflection and refraction of light through transparent media such as water and glass.
Figure 4-14. Fresnel shader provides high realism via reflection and refraction
Note
Fresnel shaders (pronounced "fre-nel") are named after the Fresnel Effect, first documented by the French physicist Augustin-Jean Fresnel (1788-1827). Fresnel advanced the wave theory of light through a study of how light was transmitted and propagated by different objects. For more information, consult the online 3D rendering glossary.
The setup code in this example creates a ShaderMaterial as follows: it clones the uniform (parameter) values of the FresnelShader template object (each instance of a shader needs its own copy of these) and passes the GLSL source code for the vertex and fragment shaders. Once these are set up, Three.js will automatically handle compiling and linking the shaders, and binding JavaScript properties to the uniform values.
var shader = THREE.FresnelShader;

var uniforms = THREE.UniformsUtils.clone( shader.uniforms );

uniforms[ "tCube" ].value = textureCube;

var parameters = {
    fragmentShader: shader.fragmentShader,
    vertexShader: shader.vertexShader,
    uniforms: uniforms };

var material = new THREE.ShaderMaterial( parameters );
The GLSL code for the Fresnel shader is shown in Example 4-4. The source can also be found under the Three.js project tree in the file examples/js/shaders/FresnelShader.js. This shader code was written by frequent Three.js contributor Branislav Ulicny, better known by his "nom de code," AlteredQualia. Let's walk through the listing to see how it is done.
Example 4-4. Fresnel shader for Three.js
/**
 * @author alteredq / http://alteredqualia.com/
 * Based on Nvidia Cg tutorial
 */

THREE.FresnelShader = {

    uniforms: {

        "mRefractionRatio": { type: "f", value: 1.02 },
        "mFresnelBias": { type: "f", value: 0.1 },
        "mFresnelPower": { type: "f", value: 2.0 },
        "mFresnelScale": { type: "f", value: 1.0 },
        "tCube": { type: "t", value: null }

    },
The uniforms property of THREE.ShaderMaterial specifies the values Three.js will pass to WebGL when the shader is used. Recall that the shader program is executed for each vertex and each pixel (fragment). Shader uniforms are values that, as the name implies, do not change from vertex to vertex; they are essentially global variables whose value is the same for all vertices and pixels. The Fresnel shader in this example defines uniforms controlling the amount of reflection and refraction (e.g., mRefractionRatio and mFresnelScale). It also defines a uniform for the cube texture used as the scene background. In a similar fashion to the cubic environment-mapping sample we saw in a previous section, this shader simulates reflection by rendering the pixels from the cube map. However, with this shader, we will see not just pixels reflected from the cube map, but refracted ones as well.
Using GLSL Shader Code with Three.js
Now it's time to set up the vertex and fragment shaders. First, the vertex shader:
vertexShader: [

    "uniform float mRefractionRatio;",
    "uniform float mFresnelBias;",
    "uniform float mFresnelScale;",
    "uniform float mFresnelPower;",

    "varying vec3 vReflect;",
    "varying vec3 vRefract[3];",
    "varying float vReflectionFactor;",

    "void main() {",

        "vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );",
        "vec4 worldPosition = modelMatrix * vec4( position, 1.0 );",

        "vec3 worldNormal = normalize( mat3( modelMatrix[0].xyz, ",
        "    modelMatrix[1].xyz, modelMatrix[2].xyz ) * normal );",

        "vec3 I = worldPosition.xyz - cameraPosition;",

        "vReflect = reflect( I, worldNormal );",
        "vRefract[0] = refract( normalize( I ), worldNormal, ",
        "    mRefractionRatio );",
        "vRefract[1] = refract( normalize( I ), worldNormal, ",
        "    mRefractionRatio * 0.99 );",
        "vRefract[2] = refract( normalize( I ), worldNormal, ",
        "    mRefractionRatio * 0.98 );",
        "vReflectionFactor = mFresnelBias + mFresnelScale * ",
        "    pow( 1.0 + dot( normalize( I ), worldNormal ), ",
        "    mFresnelPower );",

        "gl_Position = projectionMatrix * mvPosition;",

    "}"

].join("\n"),
The vertex shader program is the workhorse for this particular material. It uses the camera position and the position of each vertex of the model (in this case, the sphere geometry used for the bubble shape) to calculate a direction vector, which is then used to compute reflection and refraction coefficients for each vertex. Note the varying declarations in the vertex and fragment shader programs. Unlike uniform variables, varying variables are computed for each vertex and are passed along from the vertex to the fragment shader. In this fashion, the vertex shader can output values in addition to the built-in gl_Position that is its primary job to compute. For the Fresnel shader, the varying outputs are the reflection and refraction coefficients.
The Fresnel vertex shader also makes use of several uniform and attribute variables that we do not see declared here, because they are predefined by Three.js and passed to the GLSL compiler automatically: modelMatrix, modelViewMatrix, projectionMatrix, and cameraPosition. These values do not need to be (in fact, should not be) explicitly declared by the shader programmer; a minimal sketch demonstrating this follows the list below.
- modelMatrix (uniform): The world transformation matrix for the model (mesh). As discussed in the section The Scene Graph and Transform Hierarchy, this matrix is computed by Three.js every frame to determine the world space position of an object. Within the shader, it is used to calculate the world space position of each vertex.
- modelViewMatrix (uniform): The transformation representing each object's position in camera space, that is, in coordinates relative to the position and orientation of the camera. This is particularly handy for computing camera-relative values (e.g., to determine reflection and refraction, which is exactly what is being done in this shader).
- projectionMatrix (uniform): Used to calculate the familiar 3D-to-2D projection from camera space into screen space.
- cameraPosition (uniform): The world space position of the camera, maintained by Three.js and passed in automatically.
- position (attribute): The vertex position, in model space.
- normal (attribute): The vertex normal, in model space.
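To make the predeclaration concrete, here is a minimal custom ShaderMaterial sketch (our illustration, not part of the Fresnel sample) that visualizes vertex normals. Note that position, normal, projectionMatrix, and modelViewMatrix appear without declarations, because Three.js declares them before compiling the GLSL:

var normalMaterial = new THREE.ShaderMaterial({
    vertexShader: [
        "varying vec3 vNormal;",
        "void main() {",
        "    vNormal = normal;", // predeclared attribute
        "    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );",
        "}"
    ].join("\n"),
    fragmentShader: [
        "varying vec3 vNormal;",
        "void main() {",
        "    gl_FragColor = vec4( normalize( vNormal ) * 0.5 + 0.5, 1.0 );", // map [-1,1] to [0,1]
        "}"
    ].join("\n")
});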
The vertex shader also makes use of the built-in GLSL functions reflect() and refract() to compute reflection and refraction vectors based on the camera direction, normal, and refraction ratio. (These functions were built into the GLSL language because they are so generally useful for lighting computations like the Fresnel equations.)
Finally, note the use of Array.join() to set up the vertex shader. This illustrates yet another useful technique for putting together the long text strings that implement shaders in the GLSL language. Rather than escaping newlines at the end of each line of code and using string concatenation, we use join() to insert newlines between each line of code.
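For instance, a minimal sketch of the same string-assembly pattern (the shader body is illustrative only):

// Build a multiline GLSL source string without escaped newlines
// or string concatenation:
var fragmentSource = [
    "void main() {",
    "    gl_FragColor = vec4( 1.0, 0.0, 0.0, 1.0 );", // solid red
    "}"
].join( "\n" );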
From here, the fragment shader's job is straightforward. It uses the reflection and refraction values computed by the vertex shader to index into the cube texture passed in the uniform variable tCube. This variable is of type samplerCube, a GLSL type designed to handle cube textures. We blend these two colors using the GLSL function mix() to produce the final pixel output, storing it in the built-in gl_FragColor.
fragmentShader: [

    "uniform samplerCube tCube;",

    "varying vec3 vReflect;",
    "varying vec3 vRefract[3];",
    "varying float vReflectionFactor;",

    "void main() {",

        "vec4 reflectedColor = textureCube( tCube, ",
        "    vec3( -vReflect.x, vReflect.yz ) );",
        "vec4 refractedColor = vec4( 1.0 );",

        "refractedColor.r = textureCube( tCube, ",
        "    vec3( -vRefract[0].x, vRefract[0].yz ) ).r;",
        "refractedColor.g = textureCube( tCube, ",
        "    vec3( -vRefract[1].x, vRefract[1].yz ) ).g;",
        "refractedColor.b = textureCube( tCube, ",
        "    vec3( -vRefract[2].x, vRefract[2].yz ) ).b;",

        "gl_FragColor = mix( refractedColor, ",
        "    reflectedColor, clamp( vReflectionFactor, ",
        "    0.0, 1.0 ) );",

    "}"

].join("\n")

};
Creating a custom shader may seem like a lot of work, but the final result is worth it, as it produces a very convincing simulation of real-world optics. And the extra machinery Three.js puts in place for us (keeping world matrices up to date per object, tracking the camera, predeclaring dozens of GLSL variables, compiling and linking the shader code) saves us literally days of development and debugging effort, and makes the thought of developing our own custom shaders not only conceivable, but inviting. With this framework in place, you should feel free to experiment with writing your own shaders. I suggest starting with the Fresnel and other shaders that come with the Three.js samples. There are many different kinds of effects and a lot to learn in there.
Rendering
This chapter has climbed a Three.js ladder of sorts, an ascent of increasing realism that began with the drawing of simple geometric shapes, moved up through materials, textures, lights, and shadows, and eventually arrived at writing our own shaders in GLSL. We have climbed high, creating more realistic graphics at each step, but we are not quite at the top. Believe it or not, there is one more rung: rendering.
The ultimate output of manipulating the Three.js 3D scene graph is a 2D image rendered onto a browser Canvas element. Whether we accomplish this by using WebGL, using the 2D Canvas drawing API, or fiddling with CSS to move elements around on the page is almost irrelevant; the end goal is painting pixels. We choose to use WebGL because it can get the job done fast. Using the other technologies we might (might) be able to achieve many of these visual effects, but not at an acceptable frame rate. So we choose WebGL.
This being said, even with WebGL we have several choices about exactly how to have it render images. For example, the API allows us to use Z-buffered rendering (where the hardware uses additional memory to paint only those pixels frontmost in the scene) or not; it's our choice. If we don't use Z-buffering, our application will have to sort objects itself, potentially down to the triangle level. That sounds like a big hassle, but depending on the use case, we may want to do exactly that. This is only one such choice we can make regarding rendering.
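To illustrate (a sketch of ours, not the book's code), Three.js surfaces this choice through a few simple properties:

// Opt a material out of depth testing and depth writes; it will be
// drawn regardless of what is in front, so draw order is ours to manage:
material.depthTest = false;
material.depthWrite = false;

// Or tell the renderer to stop sorting objects by depth entirely:
renderer.sortObjects = false;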
Three.js was designed to make it easy to do basic graphics. The built-in WebGL renderer is ready to go with game-quality graphics without causing too much developer grief. As we have seen in the examples thus far, it's as easy as 1) creating the renderer, 2) setting the viewport dimensions, and 3) calling render(). But the library also allows us to do much more, providing the ability to control the WebGL rendering process at a fine-grained level. When this capability is combined with advanced rendering techniques such as post-processing, multipass rendering, and deferred rendering, we can create some truly realistic effects.
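As a reminder, a minimal sketch of those three steps (the scene and camera variables are assumed to exist already):

// 1) Create the renderer.
var renderer = new THREE.WebGLRenderer();

// 2) Set the viewport dimensions and add the canvas to the page.
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild( renderer.domElement );

// 3) Render the scene.
renderer.render( scene, camera );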
Post-Processing and Multipass Rendering
Sometimes, one render isn't enough. It often takes several renderings of a scene with different parameters to create a high-quality, realistic-looking image. These separate renderings, or passes, are ultimately combined to produce the final image in a process known as multipass rendering. Many multipass rendering approaches involve post-processing, or improving an image's quality via image-processing techniques.
Post-processing and multipass rendering have become increasingly popular in real-time 3D rendering, so the authors of Three.js have taken great pains to support them. Figure 4-15 shows a subtle yet dramatic example of Three.js post-processing written by AlteredQualia. Load the file examples/webgl_terrain_dynamic.html. Birds flock majestically over an otherworldly landscape in the foggy dawn light. As if the simplex noise-based, procedurally generated terrain weren't impressive enough, this piece also features multiple render passes, including bloom shading to emphasize the bright sunlight diffusing through the morning fog, and a Gaussian filter to softly blur the scene, further enhancing the scene's serene qualities.
Figure 4-15. Dynamic procedural terrain example, rendered with several post-processing passes; programming by AlteredQualia; birds by Mirada (of RO.ME fame)
Three.js post-processing relies on the following features:

- Support for multiple render targets via the THREE.WebGLRenderTarget object. With multiple render targets, a scene can be rendered more than once to off-screen bitmaps and then combined afterward into a final image. (Source file: src/renderers/WebGLRenderTarget.js.)
- A multipass rendering loop implemented in class THREE.EffectComposer. This object contains one or more render pass objects that it will call in succession to render the scene. Each pass has access to the entire scene as well as the image data produced by the previous pass, allowing it to further refine the image.
THREE.EffectComposer, and the sample multipass techniques that use it, are located in the Three.js project folder examples, under examples/js/postprocessing/ and examples/js/shaders/. A scan of these folders will unearth a treasure trove of post-processing special effects.
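A hedged sketch of a typical EffectComposer setup, assuming the helper scripts from those folders (EffectComposer.js, RenderPass.js, ShaderPass.js, CopyShader.js) have been included on the page:

// Pass 1: render the scene normally into an off-screen target.
var composer = new THREE.EffectComposer( renderer );
composer.addPass( new THREE.RenderPass( scene, camera ) );

// Final pass: copy the accumulated result to the screen.
var copyPass = new THREE.ShaderPass( THREE.CopyShader );
copyPass.renderToScreen = true;
composer.addPass( copyPass );

// In the animation loop, render through the composer instead of
// calling renderer.render() directly:
composer.render();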
Deferred Rendering
We have one more rendering approach to explore: deferred rendering. As the name implies, this approach delays rendering to the WebGL canvas until a final image is computed from multiple sources. Unlike multipass rendering, which successively renders a scene and refines the image before finally copying it to the WebGL canvas, deferred rendering employs multiple buffers (actually just texture maps) into which the data required for the shading computations is gathered in an initial pass. In a subsequent pass, the pixel values are calculated using the values gathered in the first pass. This approach can be memory- and computationally expensive, but it can produce highly realistic effects, especially with respect to lighting and shadows. See Figure 4-16 for an example.
Figure 4-16. Per-pixel lighting using deferred rendering
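Three.js of this era shipped an experimental deferred renderer as an add-on. A heavily hedged sketch, assuming examples/js/renderers/WebGLDeferredRenderer.js and its dependencies are included (the constructor options varied between Three.js versions):

// Used in place of THREE.WebGLRenderer; geometry data is gathered into
// buffers in a first pass, and lighting is computed per pixel afterward.
var renderer = new THREE.WebGLDeferredRenderer( {
    width: window.innerWidth,
    height: window.innerHeight,
    antialias: true
} );
document.body.appendChild( renderer.domElement );
renderer.render( scene, camera );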
Chapter Summary
This chapter covered wide ground, touching on most of the graphics drawing and rendering capabilities present in Three.js. We saw how to use the prebuilt geometry classes to easily create 3D solids, meshes, and parameterized and extruded shapes. We discussed the Three.js scene graph and transform hierarchy for constructing complex scenes. We got hands-on experience with materials, textures, and lighting. Finally, we explored how programmable shaders and advanced rendering techniques such as post-processing and deferred rendering can increase visual realism. The graphics features in Three.js represent a massive arsenal, packaged up in an accessible and easy-to-use library. These facilities, combined with the raw power of WebGL, allow us to create nearly any 3D visuals we can imagine.
Source: https://www.oreilly.com/library/view/programming-3d-applications/9781449363918/ch04.html