
Raymarching Material 101: Mastering An Advanced three.js Technique

Abstract

Ever found yourself on ShaderToy, awestruck by the complex visuals created with shaders and wondered how to incorporate such shaders into your web-based three.js projects? You're about to embark on a journey that explores the powerful technique of raymarching, a cornerstone of many stunning 3D web experiences. This guide will navigate you through integrating both ShaderToy shaders and your own creations into the three.js framework, enriching your projects with unparalleled visual depth.

Disclaimer: While I navigate the realms of three.js and raymarching with keen interest, I consider myself at the beginning of this journey, having mastered just enough to integrate raymarching into three.js effectively. For those looking to dive deeper into raymarching's theoretical underpinnings, I recommend resources such as TheArtOfCode's YouTube channel or Michael Walczyk's blog. Our focus here is squarely on the practical integration of these techniques within the vibrant ecosystem of three.js.

I welcome your feedback, questions, and insights. If you've encountered any uncertainties, spotted typos, or simply wish to share your thoughts, reaching out on Twitter is greatly appreciated. Your interaction fuels this shared exploration of 3D web development and creative coding.

Thank you for joining this explorative venture into enhancing three.js projects with the dynamic capabilities of raymarching and shaders. ✨

Process introduction 📐

Reminder: What is raymarching in the first place?

It's a technique that casts view rays from the camera and marches forward along them to test for intersections with specific surfaces. These surfaces are defined by SDFs (signed distance functions). A ray is composed of two properties: an origin (the starting point) and a direction. The raymarching algorithm takes these two components as input.

If all of this is really new to you, you may have trouble following along, because I don't go in depth into raymarching theory. Go back to the abstract and follow the links I put there for you!

How do we get the ray properties?

Let's put aside how we get the rays for a moment and recall the rendering pipeline.

First, our vertex shader runs on each vertex of our primitive. Then, further down the rendering pipeline, we get a lot of fragments. On each fragment, the fragment shader is run, with interpolated values coming from the vertex shader as inputs.

[Figure: the rendering pipeline — vertex shader outputs are interpolated before feeding the fragment shader]

In each fragment, we will run the raymarcher. So basically, each fragment will get a different ray to march along. We then test for intersections inside the raymarching loop and determine the pixel color to render in that direction.

The plan to make our raymarcher work in the context of three.js is to calculate each ray in the vertex shader from the real three.js camera, then pass the ray origin and ray direction down to the fragment shader through varyings.

**Reminder: The fragment shader interpolates the values passed as varyings. The ray origin is the camera position for every fragment, so it is effectively a constant. The ray direction is interpolated from the calculation done in the vertex shader, so each fragment gets its own specific ray direction. This is exactly what we need to make the raymarcher work in our context!**

A little drawing could help, no?

[Figure: rays cast from the camera towards each vertex of the sphere]

Each arrow you see in the picture is a ray. This is basically what happens in the vertex shader: each arrow is the result of the vertex shader calculation, and each one is then interpolated before feeding the fragment shader.

The ray properties are simple to compute. The ray origin is either the camera world position or the vertex local position, depending on the raymarching effect you want. Taking the local vertex position ensures you get the same effect even if you move the sphere around the scene, while taking the camera world position makes the effect inside the object depend on its position in world space. There is no right or wrong here; it is just a matter of which effect you want inside the sphere. The ray direction is given by rayDirection = normalize(vertexWorldPosition - cameraWorldPosition). Just make sure every coordinate is expressed in the same space. We normalize the vector so it has unit length.

Implementation ⚙️

Beginning of the implementation

Setting up the material

First, we need to create a shader material. We will pass the vertex and fragment shader as strings.

Add the update of the uniform

Don't forget to update the uTime uniform in the render loop, as in the sketch below.
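For example, with a THREE.Clock (a minimal sketch; I assume here that uTime is in milliseconds, which matches the small uTime * 0.001 style multipliers used in the shader, and that material is the shader material defined below):

app.js
const clock = new THREE.Clock();

function render() {
  // feed the elapsed time (in ms) to the shader every frame
  material.uniforms.uTime.value = clock.getElapsedTime() * 1000;

  renderer.render(scene, camera);
  requestAnimationFrame(render);
}
requestAnimationFrame(render);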

Vertex shader

First, we calculate the world position of the vertex by multiplying the vertex coordinates by the modelMatrix.

Then we prepare the varyings that will be interpolated before the fragment shader. vDirection is the ray direction from the camera to the vertex world position, as discussed in the previous section. vPosition is the ray origin: it is the position of the vertex in local space, so the effect stays local to the sphere and does not depend on the position of the object or the camera in world space.
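Here is a minimal sketch of that vertex shader (the varying names match the ones declared in the fragment shader below; modelMatrix, viewMatrix, projectionMatrix, normalMatrix, and cameraPosition are three.js built-ins available in a ShaderMaterial):

vertex.glsl
varying vec3 vPosition;
varying vec3 vDirection;
varying vec3 vNormal;

void main() {
  // world position of the vertex
  vec4 worldPosition = modelMatrix * vec4(position, 1.0);

  // ray origin: the vertex position in local space, so the effect stays local to the sphere
  vPosition = position;

  // ray direction: from the camera to the vertex, both in world space
  // (normalized per-fragment, since interpolation does not preserve unit length)
  vDirection = worldPosition.xyz - cameraPosition;

  // view-space normal; declared in the fragment shader but unused in this demo
  vNormal = normalize(normalMatrix * normal);

  gl_Position = projectionMatrix * viewMatrix * worldPosition;
}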

app.js
const material = new THREE.ShaderMaterial({
  vertexShader: vertexShader,
  fragmentShader: fragmentShader,
  uniforms: {
    uResolution: {
      value: new THREE.Vector2(window.innerWidth, window.innerHeight),
    },
    uTime: {
      value: 0,
    },
  },
});

const sphere = new THREE.Mesh(
  new THREE.SphereGeometry(2, 32, 32),
  material
);
sphere.position.setY(2);

this.scene.add(sphere);

This is basically what we are trying to build. [Figure: raymarching explanation]

Raymarching algorithm

Setting up the raymarcher

We will use TheArtOfCode's raymarcher for this. As I said, if you are not familiar with the code of the raymarching algorithm, check out his videos. Below is a part of the fragment shader.

Wire everything in the main of the fragment shader

Basically, we are calculating a distance in a direction from an origin point. We march forward until we hit something in our abstract space, or until we are past the maximum distance. We save this distance d.

We can then calculate the position we marched to with rayOrigin + rayDirection * d (we marched a distance d along the ray direction, starting from rayOrigin). We then ask for the light at that position, which gives us our color.

We do this for every ray generated by every running fragment shader. From there, we can determine every fragment color, which determines every pixel color for the mesh. A minimal main() wiring this together is sketched after the listing below.

fragment.glsl
uniform float uTime;
uniform vec2 uResolution;

varying vec3 vPosition;
varying vec3 vDirection;
varying vec3 vNormal;

#define MAX_STEPS 100
#define MAX_DISTANCE 100.
#define SURFACE_DISTANCE .01

// Forward declaration: getLight() calls rayMarch() before its definition below
float rayMarch(vec3 rayOrigin, vec3 rayDirection);

float smin(float a, float b, float k) {
  float h = clamp(0.5 + 0.5 * (b - a) / k, 0., 1.);
  return mix(b, a, h) - k * h * (1.0 - h);
}

float getDistance(vec3 currentPosition) {
  vec4 sphere = vec4(0., 3., 3., 0.5);

  float sphereDistance = length(currentPosition - sphere.xyz) - sphere.w; // sphere.w is the radius of the sphere

  float planeDistance = dot(
    vec3(currentPosition.x, currentPosition.y - 2. - sin(currentPosition.x + uTime * 0.002) * 0.6, currentPosition.z),
    normalize(vec3(0., 1., 0.))
  );
  float safeDistance = smin(sphereDistance, planeDistance, 0.7);

  return safeDistance;
}

vec3 getNormal(vec3 currentPosition) {
  float d = getDistance(currentPosition);

  vec2 epsilon = vec2(.01, 0.);

  vec3 n = d - vec3(
    getDistance(currentPosition - epsilon.xyy),
    getDistance(currentPosition - epsilon.yxy),
    getDistance(currentPosition - epsilon.yyx)
  );

  return normalize(n);
}

float getLight(vec3 currentPosition) {
  vec3 lightPosition = vec3(0., 5., 6.);
  lightPosition.xz += vec2(sin(uTime * 0.001), cos(uTime * 0.001)) * 2.;

  vec3 lightVector = normalize(lightPosition - currentPosition);
  vec3 normalVector = getNormal(currentPosition);

  float diffuseLighting = clamp(dot(normalVector, lightVector), 0., 1.);

  // If there is a hit point and the hit distance is shorter than the distance from the
  // current point to the light, we are occluded by the object we hit:
  // the light is behind the object.
  // Be careful: we need to push the starting point off the surface before raymarching,
  // otherwise we would exit the raymarch loop immediately because the point already
  // collides with the surface (via the smin of dPlane and dSphere).
  // So we add a small offset along the normal to start just above the surface.
  float d = rayMarch(currentPosition + normalVector * SURFACE_DISTANCE * 2., lightVector);

  if (d < length(lightPosition - currentPosition)) diffuseLighting *= .1;

  return diffuseLighting;
}

float rayMarch(vec3 rayOrigin, vec3 rayDirection) {
  float distanceOrigin = 0.;

  for (int i = 0; i < MAX_STEPS; i++) {
    vec3 currentPosition = rayOrigin + rayDirection * distanceOrigin;
    float distanceScene = getDistance(currentPosition);
    distanceOrigin += distanceScene;
    if (distanceOrigin > MAX_DISTANCE || distanceScene < SURFACE_DISTANCE) break;
  }

  return distanceOrigin;
}
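
Here is a minimal sketch of a main() that wires these pieces together (the background color and the miss handling are my assumptions; adapt them to your scene):

fragment.glsl
void main() {
  // ray origin in local space, ray direction interpolated from the vertex shader
  vec3 rayOrigin = vPosition;
  vec3 rayDirection = normalize(vDirection);

  float d = rayMarch(rayOrigin, rayDirection);

  vec3 color = vec3(0.0); // assumed background color for rays that miss
  if (d < MAX_DISTANCE) {
    // reconstruct the hit position and shade it
    vec3 currentPosition = rayOrigin + rayDirection * d;
    color = vec3(getLight(currentPosition));
  }

  gl_FragColor = vec4(color, 1.0);
}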

What to take away from this blog post?

As we wrap up our exploration of raymarching and its application in three.js, I hope you find yourself equipped with new insights and inspired to push the boundaries of 3D web development further. This journey into the intricate world of three.js and raymarching algorithms is just the beginning of what you can achieve in the realm of digital creativity.

You can go on ShaderToy to experiment with raymarching algorithms. I did my experimentation with a cool sea shader by afl_ext that I found there, reproduced below.

All you need to do is find the raymarching algorithm and replace its main function with ours, computing the ray origin and ray direction relative to the three.js camera. Then you can get this type of cool effect on your three.js sphere!

Initial shader

This is the initial shader that we will modify to make it work in three.js. The naming conventions are different, we don't need iMouse, and the camera ray is driven by our OrbitControls, as sketched below.
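For reference, wiring OrbitControls is short (a sketch; the import path assumes the examples build of three.js, and camera and renderer are the ones from your scene setup):

app.js
import { OrbitControls } from 'three/examples/jsm/controls/OrbitControls.js';

// moving the camera changes cameraPosition, which in turn changes every ray direction
const controls = new OrbitControls(camera, renderer.domElement);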

Go to the next step to get the full fragment shader that can be used in three.js.

Three.js fragment shader

Be sure to make these changes to bind the three.js conventions to the ShaderToy shader (the listing below still uses the original ShaderToy names); a sketch of the converted entry point follows the listing.

  • Replace every occurrence of iTime with uTime
  • Remove the iMouse-based NormalizedMouse define and fix the mouse input to a constant value
  • Replace every occurrence of iResolution with uResolution
  • Rename mainImage(out vec4 fragColor, in vec2 fragCoord) to main()
  • Write to gl_FragColor wherever fragColor was assigned
// afl_ext 2017-2024
// MIT License

// Use your mouse to move the camera around! Press the Left Mouse Button on the image to look around!

#define DRAG_MULT 0.38 // changes how much waves pull on the water
#define WATER_DEPTH 1.0 // how deep is the water
#define CAMERA_HEIGHT 1.5 // how high the camera should be
#define ITERATIONS_RAYMARCH 12 // waves iterations of raymarching
#define ITERATIONS_NORMAL 37 // waves iterations when calculating normals

#define NormalizedMouse (iMouse.xy / iResolution.xy) // normalize mouse coords

// Calculates wave value and its derivative,
// for the wave direction, position in space, wave frequency and time
vec2 wavedx(vec2 position, vec2 direction, float frequency, float timeshift) {
  float x = dot(direction, position) * frequency + timeshift;
  float wave = exp(sin(x) - 1.0);
  float dx = wave * cos(x);
  return vec2(wave, -dx);
}

// Calculates waves by summing octaves of various waves with various parameters
float getwaves(vec2 position, int iterations) {
  float wavePhaseShift = length(position) * 0.1; // this is to avoid every octave having exactly the same phase everywhere
  float iter = 0.0; // this will help generating well distributed wave directions
  float frequency = 1.0; // frequency of the wave, this will change every iteration
  float timeMultiplier = 2.0; // time multiplier for the wave, this will change every iteration
  float weight = 1.0; // weight in final sum for the wave, this will change every iteration
  float sumOfValues = 0.0; // will store final sum of values
  float sumOfWeights = 0.0; // will store final sum of weights
  for(int i = 0; i < iterations; i++) {
    // generate some wave direction that looks kind of random
    vec2 p = vec2(sin(iter), cos(iter));

    // calculate wave data
    vec2 res = wavedx(position, p, frequency, iTime * timeMultiplier + wavePhaseShift);

    // shift position around according to wave drag and derivative of the wave
    position += p * res.y * weight * DRAG_MULT;

    // add the results to sums
    sumOfValues += res.x * weight;
    sumOfWeights += weight;

    // modify next octave
    weight = mix(weight, 0.0, 0.2);
    frequency *= 1.18;
    timeMultiplier *= 1.07;

    // add some kind of random value to make next wave look random too
    iter += 1232.399963;
  }
  // calculate and return
  return sumOfValues / sumOfWeights;
}

// Raymarches the ray from top water layer boundary to low water layer boundary
float raymarchwater(vec3 camera, vec3 start, vec3 end, float depth) {
  vec3 pos = start;
  vec3 dir = normalize(end - start);
  for(int i = 0; i < 64; i++) {
    // the height is from 0 to -depth
    float height = getwaves(pos.xz, ITERATIONS_RAYMARCH) * depth - depth;
    // if the wave height nearly matches the ray height, assume it's a hit and return the hit distance
    if(height + 0.01 > pos.y) {
      return distance(pos, camera);
    }
    // iterate forwards according to the height mismatch
    pos += dir * (pos.y - height);
  }
  // if a hit was not registered, just assume we hit the top layer;
  // this makes the raymarching faster and looks better at higher distances
  return distance(start, camera);
}

// Calculate normal at point by calculating the height at the pos and 2 additional points very close to pos
vec3 normal(vec2 pos, float e, float depth) {
  vec2 ex = vec2(e, 0);
  float H = getwaves(pos.xy, ITERATIONS_NORMAL) * depth;
  vec3 a = vec3(pos.x, H, pos.y);
  return normalize(
    cross(
      a - vec3(pos.x - e, getwaves(pos.xy - ex.xy, ITERATIONS_NORMAL) * depth, pos.y),
      a - vec3(pos.x, getwaves(pos.xy + ex.yx, ITERATIONS_NORMAL) * depth, pos.y + e)
    )
  );
}

// Helper function generating a rotation matrix around the axis by the angle
mat3 createRotationMatrixAxisAngle(vec3 axis, float angle) {
  float s = sin(angle);
  float c = cos(angle);
  float oc = 1.0 - c;
  return mat3(
    oc * axis.x * axis.x + c, oc * axis.x * axis.y - axis.z * s, oc * axis.z * axis.x + axis.y * s,
    oc * axis.x * axis.y + axis.z * s, oc * axis.y * axis.y + c, oc * axis.y * axis.z - axis.x * s,
    oc * axis.z * axis.x - axis.y * s, oc * axis.y * axis.z + axis.x * s, oc * axis.z * axis.z + c
  );
}

// Helper function that generates camera ray based on UV and mouse
vec3 getRay(vec2 fragCoord) {
  vec2 uv = ((fragCoord.xy / iResolution.xy) * 2.0 - 1.0) * vec2(iResolution.x / iResolution.y, 1.0);
  // for fisheye, uncomment following line and comment the next one
  //vec3 proj = normalize(vec3(uv.x, uv.y, 1.0) + vec3(uv.x, uv.y, -1.0) * pow(length(uv), 2.0) * 0.05);
  vec3 proj = normalize(vec3(uv.x, uv.y, 1.5));
  if(iResolution.x < 600.0) {
    return proj;
  }
  return createRotationMatrixAxisAngle(vec3(0.0, -1.0, 0.0), 3.0 * ((NormalizedMouse.x + 0.5) * 2.0 - 1.0))
    * createRotationMatrixAxisAngle(vec3(1.0, 0.0, 0.0), 0.5 + 1.5 * (((NormalizedMouse.y == 0.0 ? 0.27 : NormalizedMouse.y) * 1.0) * 2.0 - 1.0))
    * proj;
}

// Ray-Plane intersection checker
float intersectPlane(vec3 origin, vec3 direction, vec3 point, vec3 normal) {
  return clamp(dot(point - origin, normal) / dot(direction, normal), -1.0, 9991999.0);
}

// Some very barebones but fast atmosphere approximation
vec3 extra_cheap_atmosphere(vec3 raydir, vec3 sundir) {
  sundir.y = max(sundir.y, -0.07);
  float special_trick = 1.0 / (raydir.y * 1.0 + 0.1);
  float special_trick2 = 1.0 / (sundir.y * 11.0 + 1.0);
  float raysundt = pow(abs(dot(sundir, raydir)), 2.0);
  float sundt = pow(max(0.0, dot(sundir, raydir)), 8.0);
  float mymie = sundt * special_trick * 0.2;
  vec3 suncolor = mix(vec3(1.0), max(vec3(0.0), vec3(1.0) - vec3(5.5, 13.0, 22.4) / 22.4), special_trick2);
  vec3 bluesky = vec3(5.5, 13.0, 22.4) / 22.4 * suncolor;
  vec3 bluesky2 = max(vec3(0.0), bluesky - vec3(5.5, 13.0, 22.4) * 0.002 * (special_trick + -6.0 * sundir.y * sundir.y));
  bluesky2 *= special_trick * (0.24 + raysundt * 0.24);
  return bluesky2 * (1.0 + 1.0 * pow(1.0 - raydir.y, 3.0));
}

// Calculate where the sun should be, it will be moving around the sky
vec3 getSunDirection() {
  return normalize(vec3(sin(iTime * 0.1), 1.0, cos(iTime * 0.1)));
}

// Get atmosphere color for given direction
vec3 getAtmosphere(vec3 dir) {
  return extra_cheap_atmosphere(dir, getSunDirection()) * 0.5;
}

// Get sun color for given direction
float getSun(vec3 dir) {
  return pow(max(0.0, dot(dir, getSunDirection())), 720.0) * 210.0;
}

// Great tonemapping function from my other shader: https://www.shadertoy.com/view/XsGfWV
vec3 aces_tonemap(vec3 color) {
  mat3 m1 = mat3(
    0.59719, 0.07600, 0.02840,
    0.35458, 0.90834, 0.13383,
    0.04823, 0.01566, 0.83777
  );
  mat3 m2 = mat3(
    1.60475, -0.10208, -0.00327,
    -0.53108, 1.10813, -0.07276,
    -0.07367, -0.00605, 1.07602
  );
  vec3 v = m1 * color;
  vec3 a = v * (v + 0.0245786) - 0.000090537;
  vec3 b = v * (0.983729 * v + 0.4329510) + 0.238081;
  return pow(clamp(m2 * (a / b), 0.0, 1.0), vec3(1.0 / 2.2));
}

// Main
void mainImage(out vec4 fragColor, in vec2 fragCoord) {
  // get the ray
  vec3 ray = getRay(fragCoord);
  if(ray.y >= 0.0) {
    // if ray.y is positive, render the sky
    vec3 C = getAtmosphere(ray) + getSun(ray);
    fragColor = vec4(aces_tonemap(C * 2.0), 1.0);
    return;
  }

  // now ray.y must be negative, water must be hit
  // define water planes
  vec3 waterPlaneHigh = vec3(0.0, 0.0, 0.0);
  vec3 waterPlaneLow = vec3(0.0, -WATER_DEPTH, 0.0);

  // define ray origin, moving around
  vec3 origin = vec3(iTime * 0.2, CAMERA_HEIGHT, 1);

  // calculate intersections and reconstruct positions
  float highPlaneHit = intersectPlane(origin, ray, waterPlaneHigh, vec3(0.0, 1.0, 0.0));
  float lowPlaneHit = intersectPlane(origin, ray, waterPlaneLow, vec3(0.0, 1.0, 0.0));
  vec3 highHitPos = origin + ray * highPlaneHit;
  vec3 lowHitPos = origin + ray * lowPlaneHit;

  // raymarch water and reconstruct the hit pos
  float dist = raymarchwater(origin, highHitPos, lowHitPos, WATER_DEPTH);
  vec3 waterHitPos = origin + ray * dist;

  // calculate normal at the hit position
  vec3 N = normal(waterHitPos.xz, 0.01, WATER_DEPTH);

  // smooth the normal with distance to avoid disturbing high frequency noise
  N = mix(N, vec3(0.0, 1.0, 0.0), 0.8 * min(1.0, sqrt(dist * 0.01) * 1.1));

  // calculate fresnel coefficient
  float fresnel = (0.04 + (1.0 - 0.04) * (pow(1.0 - max(0.0, dot(-N, ray)), 5.0)));

  // reflect the ray and make sure it bounces up
  vec3 R = normalize(reflect(ray, N));
  R.y = abs(R.y);

  // calculate the reflection and approximate subsurface scattering
  vec3 reflection = getAtmosphere(R) + getSun(R);
  vec3 scattering = vec3(0.0293, 0.0698, 0.1717) * 0.1 * (0.2 + (waterHitPos.y + WATER_DEPTH) / WATER_DEPTH);

  // return the combined result
  vec3 C = fresnel * reflection + scattering;
  fragColor = vec4(aces_tonemap(C * 2.0), 1.0);
}
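Applied to the listing above, the converted entry point could look like this minimal sketch (the body elided in the comment stays as in mainImage; treat this as a starting point, not a drop-in replacement):

fragment.glsl
// sketch of the converted entry point, assuming the uniforms (uTime, uResolution)
// and varyings (vPosition, vDirection) from our material are declared at the top
void main() {
  // the mouse-driven getRay(fragCoord) is replaced by our interpolated camera ray
  vec3 ray = normalize(vDirection);

  if (ray.y >= 0.0) {
    vec3 C = getAtmosphere(ray) + getSun(ray);
    gl_FragColor = vec4(aces_tonemap(C * 2.0), 1.0);
    return;
  }

  // ...the rest of mainImage stays the same (with iTime replaced by uTime),
  // writing the final color to gl_FragColor instead of fragColor
}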

I have implemented it in a React Three Fiber project; you can see the result below.

The shader would be the same whether you use vanilla three.js or any framework built on top of it. Once you understand the translation steps, you can adapt this technique to any 3D context you are working in.

Mastering Raymarching with Three.js

The journey through ShaderToy and the integration of raymarching algorithms into your three.js projects opens up a world of possibilities. The convergence of these technologies enables you to create more dynamic, interactive scenes and objects, enriching the user experience on the web. Embracing these techniques not only boosts your skillset but also places you at the forefront of innovative web development.

The path to mastery is paved with trial, error, and continuous learning. As you experiment and refine your approach, remember that every challenge is an opportunity to grow. The field of 3D web development is ever-evolving, and your contributions—no matter how small they might seem—are valuable steps forward in this exciting journey.

Looking Forward

I encourage you to view this post as a stepping stone towards achieving mastery in 3D web development with three.js. The principles of raymarching and their application within the three.js framework have the potential to transform your projects into immersive experiences that captivate and engage users.

Keep exploring, experimenting, and challenging yourself. The more you learn and apply, the more proficient you'll become. And as you progress, share your discoveries and creations with the community. Together, we can push the boundaries of what's possible in 3D web development. If you liked this article, you can also check out my other articles on the depth buffer and the indexed geometry in three.js and react three fiber.

Thank you for spending this time with me. I look forward to seeing where your creativity and technical prowess will take you next in the world of three.js and beyond.
