WebGL is a low-level JavaScript API that renders triangles in a canvas at remarkable speed. It's compatible with most modern browsers, and it's fast because it uses the visitor's Graphics Processing Unit (GPU).
The instructions that WebGL sends to the GPU are called shaders.
These shaders are written in GLSL (OpenGL Shading Language), a language very similar to C.
Any object in a 3D virtual world is composed of triangles, which in turn are composed of 3 vertices.
Imagine that you want to render a 3D object made of 1000 triangles. Each triangle includes 3 points.
When we want to render our object, the GPU has to calculate the position of these 3000 points. Because the GPU can do parallel calculations, it handles all of the points in one go.
Once the object's points are in place, the GPU needs to draw each visible pixel of those triangles. Yet again, the GPU handles the thousands and thousands of pixel calculations in one go.
npm install three
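Note that the `import * as THREE from "three"` syntax used in the snippets below assumes a module bundler (such as Vite or webpack) or an import map that can resolve the bare `three` specifier; a plain script tag alone won't be enough.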
<!DOCTYPE html>
<html>
  <head>
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>A Brief Introduction to Three.js</title>
  </head>
  <body>
    <canvas class="webgl"></canvas>
    <script src="script.js"></script>
  </body>
</html>
import * as THREE from "three";
const canvas = document.querySelector(".webgl");
/* Setup code goes here */
const tick = () => {
  /* Loop code goes here */
  requestAnimationFrame(tick);
};
tick();
Scenes allow you to set up what and where is to be rendered by three.js. This is where you place objects, lights and cameras.
const scene = new THREE.Scene();
The perspective camera is designed to simulate what the human eye sees. The camera perceives all objects in a perspective projection: the further the object is, the smaller it seems.
Three.js also offers an orthographic camera. In that projection mode, an object's size remains constant regardless of its distance from the camera, so moving the camera does not distort lines and objects.
// Parameters: vertical field of view (degrees), aspect ratio, near plane, far plane
const camera = new THREE.PerspectiveCamera(75, 16 / 9, 0.1, 100);
camera.position.set(3, 3, 1);
camera.lookAt(0, 0, 0);
scene.add(camera);
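For comparison, here is a minimal sketch of the orthographic camera mentioned above; the frustum bounds are illustrative values, not part of the original setup:
// left, right, top, bottom, near, far — objects keep their size at any distance
const orthoCamera = new THREE.OrthographicCamera(-2, 2, 2, -2, 0.1, 100);
orthoCamera.position.set(3, 3, 1);
orthoCamera.lookAt(0, 0, 0);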
A mesh is a class representing triangular-polygon-mesh-based objects. It is the combination of a geometry (the shape) and a material (how it looks).
A geometry is the mathematical formula of an object. It gives us the vertices of the object we want to add to the scene.
// Parameters: radius, widthSegments, heightSegments
const sphereGeometry = new THREE.SphereGeometry(0.5, 384, 384);
A material can be defined as the properties of an object and its behavior with the light sources of the scene. Simply put, materials describe the appearance of objects.
const sphereMaterial = new THREE.MeshBasicMaterial({
  color: 0x00caaf,
});
// Geometry
const sphereGeometry = new THREE.SphereGeometry(0.5, 384, 384);
// Material
const sphereMaterial = new THREE.MeshBasicMaterial({
  color: 0x00caaf,
});
// Mesh = Geometry + Material
const sphereMesh = new THREE.Mesh(sphereGeometry, sphereMaterial);
sphereMesh.position.set(0, 1.5, 0);
scene.add(sphereMesh);
The renderer's job is to do the render.
We will simply ask the renderer to render our scene from the camera point of view, and the result will be drawn into a canvas.
const renderer = new THREE.WebGLRenderer({
  canvas: canvas,
});
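By default the renderer draws at the canvas's intrinsic size. A common addition, not in the original snippet, is to size it explicitly:
// Match the drawing buffer to the window; capping the pixel ratio avoids
// wasted work on very high-density screens (values here are one common choice)
renderer.setSize(window.innerWidth, window.innerHeight);
renderer.setPixelRatio(Math.min(window.devicePixelRatio, 2));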
const tick = () => {
  // render the scene
  renderer.render(scene, camera);
  requestAnimationFrame(tick);
};
tick();
import * as THREE from "three";

const canvas = document.querySelector(".webgl");

// Scene
const scene = new THREE.Scene();

// Camera
const camera = new THREE.PerspectiveCamera(75, 16 / 9, 0.1, 100);
camera.position.set(3, 3, 1);
camera.lookAt(0, 0, 0);
scene.add(camera);

// Geometry
const sphereGeometry = new THREE.SphereGeometry(0.5, 384, 384);

// Material
const sphereMaterial = new THREE.MeshBasicMaterial({
  color: 0x00caaf,
});

// Mesh = Geometry + Material
const sphereMesh = new THREE.Mesh(sphereGeometry, sphereMaterial);
sphereMesh.position.set(0, 1.5, 0);
scene.add(sphereMesh);

// Renderer
const renderer = new THREE.WebGLRenderer({
  canvas: canvas,
});

// Loop Function
const tick = () => {
  // render the scene
  renderer.render(scene, camera);
  requestAnimationFrame(tick);
};

tick();
Physically based rendering (PBR) is a computer graphics approach that seeks to render images in a way that models the flow of light in the real world. Many PBR pipelines aim to achieve photorealism.
MeshStandardMaterial is a standard physically based material, using the Metallic-Roughness workflow.
// Geometry
const sphereGeometry = new THREE.SphereGeometry(0.5, 384, 384);

// Material
const sphereMaterial = new THREE.MeshStandardMaterial({
  color: 0x00caaf,
  roughness: 0,
  metalness: 0.75,
});
// const sphereMaterial = new THREE.MeshBasicMaterial({ color: 0x00caaf });
// Mesh = Geometry + Material
const sphereMesh = new THREE.Mesh(sphereGeometry, sphereMaterial);
// Soft light applied evenly to every object in the scene
const ambientLight = new THREE.AmbientLight(0xffffff, 0.5);
scene.add(ambientLight);

// A warm light emitted from a single point in all directions
const pointLight = new THREE.PointLight(0xffc83f, 32);
scene.add(pointLight);
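If you want to see where the point light actually sits while tuning the scene, three.js ships small helper objects; this one is optional and not part of the original code:
// Draws a small wireframe marker at the light's position (0.25 is the helper size)
const pointLightHelper = new THREE.PointLightHelper(pointLight, 0.25);
scene.add(pointLightHelper);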
const clock = new THREE.Clock();

const tick = () => {
  const elapsedTime = clock.getElapsedTime();

  // Move the light in a circle around the sphere
  pointLight.position.x = Math.sin(elapsedTime) * 2;
  pointLight.position.z = Math.cos(elapsedTime) * 2;

  // render the scene
  renderer.render(scene, camera);
  requestAnimationFrame(tick);
};

tick();
The core shadow is the dark band visible where light and shadow meet. It is the point at which light can no longer reach the form to illuminate it. It is the darkest area of the object.
The cast shadow is the shadow on the surface that the object rests on. It is created by the object itself blocking the light from the light source.
const renderer = new THREE.WebGLRenderer({
  canvas: canvas,
});
// Tell the renderer to compute shadow maps
renderer.shadowMap.enabled = true;
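Optionally, you can also pick a softer shadow algorithm; this line is an addition, not part of the original snippet:
// PCFSoftShadowMap smooths shadow edges at a small performance cost
renderer.shadowMap.type = THREE.PCFSoftShadowMap;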
// The light casts shadows...
const pointLight = new THREE.PointLight(0xffc83f, 32);
pointLight.castShadow = true;

// ...the sphere casts them too...
const sphereMesh = new THREE.Mesh(sphereGeometry, sphereMaterial);
sphereMesh.castShadow = true;

// ...and the floor receives them
const floorMesh = new THREE.Mesh(floorGeometry, floorMaterial);
floorMesh.receiveShadow = true;
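The floorGeometry and floorMaterial referenced above are not defined in these snippets; a minimal version could look like this (the plane size and color are assumptions):
// Assumed floor: a large plane laid flat under the sphere
const floorGeometry = new THREE.PlaneGeometry(10, 10);
const floorMaterial = new THREE.MeshStandardMaterial({ color: 0x808080 });
// Rotate the floor mesh created above so the plane faces up
floorMesh.rotation.x = -Math.PI / 2;
scene.add(floorMesh);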
const cubeTextureLoader = new THREE.CubeTextureLoader();
const environmentMap = cubeTextureLoader.load([
  "/assets/textures/environmentMap/px.jpg",
  "/assets/textures/environmentMap/nx.jpg",
  "/assets/textures/environmentMap/py.jpg",
  "/assets/textures/environmentMap/ny.jpg",
  "/assets/textures/environmentMap/pz.jpg",
  "/assets/textures/environmentMap/nz.jpg",
]);
// Sets the background used when rendering the scene
scene.background = environmentMap;
// Sets the texture as the env map for all physical materials in the scene
scene.environment = environmentMap;
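Alternatively, an environment map can be assigned per material instead of scene-wide; the intensity below is an illustrative value:
// Apply the environment map to one material and strengthen its reflections
sphereMaterial.envMap = environmentMap;
sphereMaterial.envMapIntensity = 1.5;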
const textureLoader = new THREE.TextureLoader();
const color = textureLoader.load("/assets/textures/bricks/color.jpg");
const normal = textureLoader.load("/assets/textures/bricks/normal.jpg");
const roughness = textureLoader.load("/assets/textures/bricks/roughness.jpg");
const displacement = textureLoader.load("/assets/textures/bricks/displacement.jpg");
const ao = textureLoader.load("/assets/textures/bricks/ao.jpg");
const sphereMaterial = new THREE.MeshStandardMaterial({
  // color: 0x00caaf,
  // roughness: 0,
  // metalness: 0.75,
  map: color,
  normalMap: normal,
  roughnessMap: roughness,
  displacementMap: displacement,
  aoMap: ao,
});
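Displacement actually moves vertices, so its strength usually needs taming; the value below is a guess to tune by eye, not from the original code:
// Without this, the default displacementScale of 1 can heavily distort the sphere
sphereMaterial.displacementScale = 0.05;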
The vertex shader's purpose is to position the vertices of the geometry. The idea is to send the vertex positions, the mesh transformations (like its position, rotation, and scale), and the camera information (like its position, rotation, and field of view) to the GPU.
The GPU then follows the instructions in the vertex shader to process all of this information and project the vertices onto the 2D space that will become our render (our canvas).
The vertex shader's code is applied to every vertex of the geometry, and it happens first. Once the vertices are placed, the GPU knows which pixels of the geometry are visible and can proceed to the fragment shader.
uniform float uTime;
uniform float uTimeVelocity;
uniform vec2 uFrequency;
uniform float uAmplitude;
uniform float uXMovement;

varying vec2 vUv;

void main() {
  vUv = uv;

  vec3 pos = position.xyz;
  // Displace the surface along z with two sine waves driven by time
  pos.z += sin(uv.x * uFrequency.x - (uTimeVelocity * uTime)) * uAmplitude * uv.x * uXMovement;
  pos.z += sin(uv.y * uFrequency.y - (uTimeVelocity * uTime)) * uAmplitude * uv.x;

  gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(pos, 1.0);
}
The fragment shader's purpose is to color each visible pixel (fragment) of the geometry. The same fragment shader is used for every visible pixel (fragment) of the geometry.
varying vec2 vUv;

uniform float uTime;

#define PI 3.14159265358979

float cnoise(vec3 P) {
  // Classic Perlin 3D noise function (implementation omitted)
}

void main() {
  // Distort the UVs with noise, then sample noise again for the color strength
  vec2 displacedUv = vUv + cnoise(vec3(vUv * 10.0, uTime * 0.5));
  float strength = cnoise(vec3(displacedUv * 10.0, uTime * 0.5));

  float red = strength * 0.75 + 0.2;
  float green = 0.0;
  float blue = strength * 0.2 + 0.2;
  gl_FragColor = vec4(red, green, blue, 1.0);
}
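To hook these shaders into three.js, you pass them to a ShaderMaterial together with their uniforms. This is a minimal sketch: it assumes the GLSL sources are available as strings named vertexShader and fragmentShader, and the uniform values are illustrative starting points, not values from the original article:
const shaderMaterial = new THREE.ShaderMaterial({
  vertexShader,
  fragmentShader,
  uniforms: {
    uTime: { value: 0 },
    uTimeVelocity: { value: 1.0 },
    uFrequency: { value: new THREE.Vector2(4.0, 2.0) },
    uAmplitude: { value: 0.2 },
    uXMovement: { value: 1.0 },
  },
});

// Inside the tick loop, advance the time uniform so the shader animates
shaderMaterial.uniforms.uTime.value = clock.getElapsedTime();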
Baking to texture is the process of approximating complex surface effects as simple 2D bitmaps and then assigning them to objects. By creating a library of 'baked' texture maps, 3D visual effects on objects can be rendered in real time without having to recalculate elements such as materials, lighting, and shadows.
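In three.js, a baked texture is typically applied with a MeshBasicMaterial, since the lighting is already painted into the map and no runtime lights are needed. A minimal sketch, assuming a hypothetical baked.jpg exported from your 3D software:
// Lighting and shadows are pre-computed in the texture itself
const bakedTexture = textureLoader.load("/assets/textures/baked.jpg");
const bakedMaterial = new THREE.MeshBasicMaterial({ map: bakedTexture });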