This project, Mystical Forest, was developed collaboratively by Marius, Alonso, and Charlie for the CS-341 – Computer Graphics (2025) course at EPFL. It is a real-time WebGL scene focused on atmosphere and immersion, combining several graphics techniques: screen-space ambient occlusion, particle-based fire, fog, bloom, and L-system-based procedural tree generation.
This README serves as both project documentation and a technical report. It presents the motivation, implementation details, validation results, and challenges encountered during development, along with all original visuals and media produced for the project.
This project is based on the CS-341 framework provided by the course staff. The original framework authors are Michele Vidulis, Vicky Chappuis, and Krzysztof Lis. In accordance with the license, proper attribution is provided here.
All additional code, assets, and media in this repository were produced by the project authors as part of the CS-341 coursework. The project is shared publicly in accordance with the course guidelines.
For more information about the course, see the CS-341 – Computer Graphics webpage.
## Mystical Forest

Imagine yourself in a forest at night, surrounded by the stars. The fog is setting in. You decide to make a campfire in an empty patch. You cannot help but feel warm, surrounded by the stars, the trees, and the mountains. Truly a mystical experience in this dim atmosphere. This is our project, Mystical Forest. It implements several features: Screen Space Ambient Occlusion, Particle Effects, Fog, Procedurally Generated Trees from L-Systems, and Bloom.
## Overview of our components

We generated the trees procedurally using L-systems. This allows for varied trees that share a similar underlying structure. It also lets us "grow" trees and show them at different stages of development. The code is easily modifiable to produce other plants, such as bushes and other types of trees.
We also added fog to create a more immersive atmosphere. The fog is implemented with a fragment shader that computes the fog density from the distance to the camera and applies it to the scene. This gives a sense of depth and mystery to the forest, which is really cool to see.
To add even more to the atmosphere, and because it is an interesting effect, we added screen-space ambient occlusion. It creates more realistic lighting by simulating how light interacts with the nearby environment, especially in the corners and crevices of the trees. It gives a nice sense of depth to the scene.
We added a fire made of particles. The fire adds to the cozy atmosphere and works very well with the bloom effect we also added. It brings some light to a scene that is otherwise rendered at night. The particles themselves don't emit light, but we placed a light inside the fire.
Finally, we added bloom, which finishes off the effects in the scene. It makes the fire smoother and the flames easier to see, resulting in a very cozy scene with a peaceful vibe. The exposure parameter also allows for a lot of different moods.
The terrain was made by hand in Blender, and we used the provided shader to apply a texture to it.
| Feature | Adapted Points | Status |
|---|---|---|
| Ambient Occlusion | 17 | Completed |
| Particle Effects | 17 | Completed |
| Fog | 4 | Completed |
| L-Systems for Procedural Scene Generation | 8 | Completed |
| Bloom | 4 | Completed |
The ambient occlusion is implemented in screen space by a fragment shader that samples the depth buffer to compute an occlusion factor: it compares the depth of the current fragment with the depths of nearby fragments to determine how much light is occluded. We then multiply the ambient light of the scene by that factor to dim areas that should be darker. A bias is applied to deal with acne, and a later blur pass smooths the result to avoid harsh edges.
The occlusion factor is computed in three passes:
- **G-buffer pass**: we render the scene to a simple G-buffer with three textures in different color attachments (position, normal, and albedo). We couldn't use the syntax given in the OpenGL tutorial since we are working with WebGL 1.0 instead of WebGL 2.0. This is done in `gbuffer_sr.js`, `gbuffer.vert.glsl`, and `gbuffer.frag.glsl`.
- **SSAO pass**: this pass computes the ambient occlusion factor. In `ssao_sr.js`, we generate a kernel of random samples and a random rotation texture and pass them to the shaders (a sketch of the kernel generation is shown after this list). The vertex shader is a simple buffer-to-screen shader, while the fragment shader `ssao.frag.glsl` computes the occlusion factor: it iterates over the random (rotated) samples, reads the G-buffer value at each sample position, transforms it to screen space, and increments the occlusion counter only when the stored depth shows the sample is hidden behind geometry. The occlusion factor is then normalized by the number of samples. There are multiple tweakable parameters to adjust the effect:
  - **kernel size**: the number of samples in the kernel. More samples give a more accurate result, but are more expensive to compute. We found that 64 samples was a good middle ground.
  - **radius**: the radius within which the occlusion factor is computed. The larger the radius, the larger the occlusion's area of influence. To avoid hard cutoffs, the border should be smoothstep'd.
  - **bias**: used to avoid acne, as in other shadowing techniques.
  - **intensity**: the intensity of the occlusion. It is simply a power applied to the final occlusion factor to make it more or less pronounced.
- **(optional) Blur pass**: this pass smooths the result of the SSAO pass to avoid harsh edges by applying a 4x4 box blur to the SSAO texture. This is done in `blur_sr.js`, `buffer_to_screen.vert.glsl`, and `blur.frag.glsl`. We found a box blur sufficient for our needs: SSAO is already a subtle effect, so the difference between a box blur and a Gaussian blur was not visible.
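For illustration, here is a minimal JavaScript sketch of the sample-kernel generation mentioned above (function and parameter names are ours, not the exact code of `ssao_sr.js`):

```js
// Hypothetical sketch of the kernel generation done on the JS side.
// Samples lie in the z >= 0 hemisphere (tangent space) and are scaled
// so that they cluster near the fragment being shaded.
function generateSsaoKernel(kernelSize = 64) {
  const kernel = [];
  for (let i = 0; i < kernelSize; i++) {
    // Random direction in the upper hemisphere.
    const x = Math.random() * 2 - 1;
    const y = Math.random() * 2 - 1;
    const z = Math.random();
    const len = Math.hypot(x, y, z) || 1;

    // Accelerating interpolation: more samples end up close to the origin.
    let scale = i / kernelSize;
    scale = 0.1 + 0.9 * scale * scale;

    kernel.push([(x / len) * scale, (y / len) * scale, (z / len) * scale]);
  }
  return kernel;
}
```

The fragment shader then walks this kernel for every pixel, and the lighting shaders simply scale their ambient term by the resulting factor.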
After computing the ambient occlusion factor, we integrate it into the scene by passing it to the Blinn-Phong and terrain shaders and multiplying it with the ambient light component.
We crafted a simple scene to visualize SSAO. From left to right, it contains a torus, a sphere, a small cube, and a big cube.
- **G-buffer**: In the position texture, the green quadrant represents positions where $y > 0$ and $x < 0$, the yellow quadrant $y > 0$ and $x > 0$, the black quadrant $y < 0$ and $x < 0$, and the red quadrant $y < 0$ and $x > 0$.

  *G-buffer position texture*

  The normal texture simply shows the normals of the scene, as usual.

  *G-buffer normal texture*

  The albedo texture shows the colors of the scene as they would appear without any treatment. Everything is white.

  *G-buffer albedo texture*
- **SSAO**: Rendering only the SSAO texture, we can see the occlusion factor computed by the shader. It is already "reversed", so the darker regions are the most occluded. We only render the occlusion factor through the red channel, which is why the image is red. We can clearly see that the regions of our objects closest to the ground are occluded. There is some artifacting due to the repetition of the kernel samples, but it is later smoothed out by the blur pass.

  *Visualization of the occlusion factor through the red channel*

  Zooming in on the cubes shows the effect and the artifacts more clearly. We can see the occlusion between the two cubes, which is pretty nice.

  *Zoom on the two cubes*
- **Blur**: The blur pass smooths the result of the SSAO pass. The artifacts are gone and the occlusion factor is more uniform.

  *SSAO with blur enabled*

  Zooming in again on the cubes, we can see the effect of the blur pass.

  *Zoom on the two cubes with blur enabled*
Finally, we can tweak the parameters of the SSAO pass to see how they affect the result. We render the blurred SSAO texture.
- **Radius**: Increasing the radius makes the occlusion factor more pronounced. Here it is exaggerated to show the effect. A side effect is occlusion appearing in odd places, like the top of the sphere, which is not occluded by anything; this happens because the radius is too large and samples are taken from too far away.

  *SSAO with increased radius*

- **Bias**: Increasing the bias reduces the acne (which we don't have much of anyway), but when set too high it also reduces the occlusion factor.

  *SSAO with increased bias*

- **Intensity**: Increasing the intensity makes the occlusion factor more pronounced. Here it is exaggerated to show the effect.

  *SSAO with increased intensity*
Here is the final result with all the other shaders, with all the parameters set to default values that seemed appropriate.
*Scene with SSAO*

*Scene without SSAO for comparison*

This implementation simulates dynamic fire particles using instanced, textured quads that evolve and fade over time. Each particle is rendered as a camera-facing billboard, with rendering on the GPU and simulation on the CPU. A fire emitter spawns and evolves particles over time, with configurable parameters for size, lifespan, emission rate, color options, speed, and fire radius.
- A FireEmitter extends a general ParticleEmitter class.
- A FireEmitter manages particle state: position, velocity, life, color, spawn radius, size.
- On each frame:
- Dead particles are culled.
- New ones are spawned based on `emissionRate`.
- Attributes are updated using remaining lifetime (RGB and alpha).
- Results are exported into two arrays: positions (vec4: x, y, z, size) and colors (RGBA uint8).
- A base quad (2D unit square) is instanced per particle.
- Quads are billboarded in the vertex shader.
- Alpha blending is enabled for additive effects; this works well for flames and benefits from bloom.
- We simulate on the CPU because it is simpler than compute shaders and sufficient for our use case (a sketch of the per-frame update is shown below).
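Here is a minimal, illustrative sketch of that per-frame CPU update (field and function names are ours, not the exact FireEmitter code):

```js
// Illustrative sketch of a fire emitter's per-frame update.
function updateEmitter(emitter, dt) {
  // 1. Cull dead particles.
  emitter.particles = emitter.particles.filter(p => p.life > 0);

  // 2. Spawn new particles inside the fire radius.
  //    (A real emitter would accumulate the fractional remainder.)
  const toSpawn = Math.round(emitter.emissionRate * dt);
  for (let i = 0; i < toSpawn; i++) {
    const angle = Math.random() * 2 * Math.PI;
    const r = Math.random() * emitter.fireRadius;
    emitter.particles.push({
      pos: [emitter.origin[0] + r * Math.cos(angle),
            emitter.origin[1] + r * Math.sin(angle),
            emitter.origin[2]],
      vel: [0, 0, emitter.speed], // flames rise along +z
      life: emitter.maxLife,
      size: emitter.baseSize,
    });
  }

  // 3. Integrate motion and fade color/alpha with the remaining lifetime.
  for (const p of emitter.particles) {
    p.life -= dt;
    p.pos = p.pos.map((c, i) => c + p.vel[i] * dt);
    const t = Math.max(p.life / emitter.maxLife, 0); // 1 = newborn, 0 = dead
    p.color = [255, Math.round(180 * t), Math.round(40 * t), Math.round(255 * t)];
  }

  // 4. Export flat attributes for instanced rendering.
  const positions = emitter.particles.map(p => [...p.pos, p.size]); // vec4
  const colors = emitter.particles.map(p => p.color);               // RGBA8
  return { positions, colors };
}
```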
In the above video, we observe a basic fire simulation and a preview of all available parameters and how they influence the fire's behavior. Reducing the particle lifespan creates a flickering effect, like fireworks. This happens because each particle is assigned a lifetime on creation: when we shorten the lifespan, the color calculations based on (life / maxLife) can yield values greater than one, since already-spawned particles may have life exceeding the new maxLife. It's not a major issue, as the effect normalizes quickly.
In the above video we can see how the fire integrates into the main scene. Everything fits together well. The trees are overly bright at the top, but that is not an issue with the fire and has since been fixed.
In the above video we can see fog covering the particles when there is a lot of fog (i.e., when we can see far without hitting the ground). This is a known issue and we don't know how to fix it easily.
The fog is implemented through a shader. The fog has its own framebuffer, in which it renders a texture produced by a pair of vertex and fragment shaders.
In the shaders, we calculate the fog factor, a number between 0 and 1, where 0 represents full fog and 1 represents no fog. It is computed from the distance of the fragment to the camera: the fog factor tends towards 0 as a squared exponential of that distance.
In the final mixer shader, the color of the fragment is mixed with the fog's gray color according to the fog factor extracted from the fog texture.
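For illustration, a minimal JavaScript sketch of this math (the real computation happens in the fog and mixer shaders; the `density` parameter name is ours):

```js
// Squared-exponential fog factor: 1 near the camera (no fog),
// tending to 0 with distance (full fog).
function fogFactor(distToCamera, density) {
  const d = distToCamera * density;
  return Math.exp(-d * d);
}

// Final mix, as done in the mixer shader: scene color towards fog gray.
function applyFog(sceneColor, fogGray, factor) {
  return sceneColor.map((c, i) => c * factor + fogGray[i] * (1 - factor));
}
```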
**Fog Height**: In these images, the height of the fog varies. Note the difference between the fragments that are mixed with the fog texture and those that are not.

**Fog Opacity**: In these images, the opacity of the fog varies. It can obscure almost everything, but can also be as transparent as needed.

The trees are procedurally generated with Lindenmayer Systems, otherwise known as L-Systems. By predetermining an alphabet, from which we produce axioms that are recursively rewritten through predetermined rules, a tree can be "grown" from a string. To generate the trees in the scene, we choose a random spot away from the campfire and randomly pick a predetermined starting axiom. From there, the rules of the system are applied a random number of times.
L-Systems are defined as a tuple (alphabet, axiom, production rules) in `l_system.js`, which is called within the scene to generate the string defining each tree.
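For illustration, a minimal sketch of such a definition (field names are ours; the production rule is the one shown in the validation section below):

```js
// Illustrative L-system definition for a tree.
const treeSystem = {
  alphabet: ['L', 'B', 'X', 'Y', 'Z', '[', ']'],
  axiom: 'B',
  rules: { B: 'L[XB][YB][ZB][B]' },
};
```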
To generate the tree, meshes have to be made; these are defined in `tree_systems.js`. The meshes for the branches are polygonal prisms, and the meshes for the leaves are two triangular faces at a right angle.
To place all the meshes correctly, we also defined functions that rotate and transform the meshes, which makes the code clearer.
To reduce the number of objects, we wrote a function that merges several meshes into one.
To generate the tree mesh, the final string must be parsed, so each symbol of the alphabet must map to an action. The alphabet is interpreted as follows:
- $\{L, B\}$: both represent a branch; the difference is that a branch represented by $B$ can continue to grow if the production rule is applied.
- $\{X, Y, Z\}$: each represents a rotation from the base branch it comes from; the three symbols differ in the exact angle the next branch will take.
- $\{[, ]\}$: these delimit a subtree, which has a smaller base size and represents a new state from which new branches can grow.
Leaves are randomly placed on the upper half of branches.
Using these rules, the string is parsed and the corresponding branches and leaves are generated and placed. To be able to return to an earlier position, a stack of the previous positions and rotations is kept.
After generating a list of branch meshes and a separate list of leaf meshes, both are merged into two collective meshes that are added to the scene with the wood material and the leaf material, respectively. A sketch of the expansion and parsing steps is shown below.
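A minimal, illustrative sketch of the expansion and stack-based parsing, assuming the `treeSystem` object sketched earlier (the `actions` callbacks stand in for the mesh-emitting and rotation code in `tree_systems.js`):

```js
// Expand the axiom by applying the production rules `depth` times.
function expand(system, depth) {
  let s = system.axiom;
  for (let i = 0; i < depth; i++) {
    // Replace every symbol that has a production rule, keep the rest.
    s = [...s].map(ch => system.rules[ch] ?? ch).join('');
  }
  return s;
}

// Parse the string with a stack so that ']' returns to the state
// saved by the matching '['.
function parse(str, state, actions) {
  const stack = [];
  for (const ch of str) {
    if (ch === '[') {
      stack.push({ ...state }); // save position/rotation/size
    } else if (ch === ']') {
      state = stack.pop(); // restore the saved state
    } else if (actions[ch]) {
      actions[ch](state); // e.g. L/B emit a branch, X/Y/Z rotate
    }
  }
}
```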
In the following images you can see the progression and "growth" of a tree; the initial axiom in these images is simply B. From that single character we progress to the first step by applying the production rule B -> L[XB][YB][ZB][B].
The leaves are randomly placed in the upper half of a given branch. Since the leaves are generated anew for each instance, this creates small differences between instances of a tree, even when they are generated from the same axiom and have the same depth.
In the following image you can see different trees surrounding the campfire; some differ from one another due to their different starting axioms.
This implementation includes a bloom effect. Bloom is computed by first thresholding the bright values of the screen and storing them in a separate texture. This texture is then blurred using a Gaussian kernel multiple times to create a soft glow. Finally, the blurred texture is additively blended back with the original image to produce the final effect.
To enhance the visual quality and prevent overly bright areas from burning out, we also implement tone mapping. This step compresses high dynamic range values into a displayable range, ensuring a more natural and balanced appearance.
- Bloom is applied as a post-processing step after rendering the scene.
- It captures bright fragments (typically from fire particles) and blurs them across neighboring pixels.
- This creates a glowing aura that enhances the perceived brightness and softness of the fire.
- Combined with alpha blending, bloom adds volume and visual depth to the flames.
- A simple threshold-based bloom implementation is used to keep performance reasonable.
- We use a Gaussian blurring kernel to get a better render than with a box kernel.
- We downsample the thresholded texture to get better performance without losing much quality.
- We use an exponential tone mapping with an exposure parameter to allow for customization.
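For illustration, a minimal JavaScript sketch of the bright-pass and blur-weight math (the actual implementation lives in the bloom shaders; names are ours):

```js
// 1. Bright pass: keep only fragments above a luminance threshold.
function brightPass(color, threshold) {
  // Standard Rec. 709 luminance weights.
  const lum = 0.2126 * color[0] + 0.7152 * color[1] + 0.0722 * color[2];
  return lum > threshold ? color : [0, 0, 0];
}

// 2. Gaussian weights for the repeated blur passes.
function gaussianWeights(radius, sigma) {
  const w = [];
  for (let i = -radius; i <= radius; i++) {
    w.push(Math.exp(-(i * i) / (2 * sigma * sigma)));
  }
  const sum = w.reduce((a, b) => a + b, 0);
  return w.map(x => x / sum); // normalize so the blur preserves brightness
}

// 3. The blurred bright texture is then additively blended onto the scene.
```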
As a small additional component, we implemented HDR rendering with tone mapping to bring values back into the $[0, 1]$ range. The equation we used is the exponential mapping with exposure:

$$C_{\text{out}} = 1 - e^{-C_{\text{in}} \cdot \text{exposure}}$$
This introduces an exposure parameter, which is very useful for achieving a cinematic look. It's not strictly required for bloom, but it greatly improves the result: without it, the image tends to appear overexposed, with all bloomed pixels turning completely white.
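A minimal JavaScript sketch of this mapping (illustrative; the real version runs in the post-processing shader):

```js
// Exponential tone mapping with exposure, matching the equation above.
function toneMap(hdrColor, exposure) {
  return hdrColor.map(c => 1 - Math.exp(-c * exposure));
}

toneMap([2.0, 1.5, 0.5], 1.0); // -> approximately [0.86, 0.78, 0.39]
```

Higher exposure brightens the scene, while lower exposure gives a darker, moodier look.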
The OpenGL tutorials we were given were straightforward enough to keep us from going down wrong paths. We did struggle with the framework and the general implementation at times, but nothing that (a lot of) debugging couldn't fix.
At the beginning there was a steep learning curve for the framework, which took some time to adapt to. For example, we had issues with the framebuffer being shifted because we misunderstood how the framework's parameters affected the system. Fortunately, with time we understood how the framework worked and how to write code for it.

*Framebuffer shifted during*
For the L-Systems, we had difficulty understanding how to correctly rotate and position the branches within the trees, and how to correctly parse the generated string. This was the major barrier to development. Also, at the beginning, each branch was a separate mesh, which meant that rendering complex trees would overwhelm the computer. This was solved by merging the meshes into one.
On the particle side, we encountered issues with GPU instancing. It was difficult to debug because only one triangle was being rendered, there were no errors in the console, and the shaders were not the source of the problem. In the end, we discovered that a required Regl extension for GPU instancing was missing.
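The fix was to request the extension when creating the regl context, as in regl's instance-triangle.js example:

```js
// Enable ANGLE instanced arrays so regl can draw instanced geometry.
const regl = require('regl')({
  extensions: ['angle_instanced_arrays'],
});
```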
For the bloom effect, the main challenge was getting multiple shaders to work together, but it wasn't the most difficult issue we faced overall.
| Name | Week 1 | Week 2 | Week 3 | Week 4 | Week 5 | Week 6 | Week 7 | Total |
|---|---|---|---|---|---|---|---|---|
| Alonso | 3h | 2h | 4h | 14h | 4h | 7h | 15h | 49h |
| Charlie | 2h | 0h | 8h | 12h | 8h | 3h | 9h | 42h |
| Marius | 2h | 0h | 2h | 9h30 | 8h | 4h | 18h30 | 44h |
| Name | Contribution |
|---|---|
| Alonso | 1/3 |
| Charlie | 1/3 |
| Marius | 1/3 |
We have no comments. We worked well together and we are happy with the result.
- Mountain Scene by Alonso Coaguila, SCIPER: 339718
- Low Poly Campfire Free low-poly 3D model
- MographPlus (2017) Tutorial No. 62: Rendering realistic Explosion and Smoke in Arnold for 3ds Max (Arnold Volume)
- OGLDEV (2025) Particle System Using The Compute Shader // Intermediate OpenGL Series
- OpenGL-Tutorial/Particles
- LearnOpenGL/Particles
- Regl Example Gallery, instance-triangle.js
- OGLDEV (2022) Mastering Fog Rendering in OpenGL: Adding Depth and Atmosphere to Your Graphics (part 2/2)
- Legakis, J. (1998) Fast Multi Layer Fog (SIGGRAPH '98: ACM SIGGRAPH 98)