- This repo contains computer graphics development (CGD) projects built with OpenGL, using techniques such as ray tracing (especially path tracing).
- It aims to share my work and show how math can be applied to solve real-world rendering problems.
- I also include explanations of selected parts of the process.
| GLSL Keyword | What it does |
|---|---|
| in (vertex) | Input from the vertex buffer layout. |
| out (vertex) | Output passed to the fragment shader. |
| in (fragment) | Input interpolated from vertex shader outputs. |
| uniform | External, constant input set by CPU-side OpenGL. |
| gl_Position | Built-in; sets the final position of a vertex. |
| layout(location = N) | Explicitly binds input/output locations. |
- Vertex shader:
  - Receives per-vertex data (in from the buffer)
  - Applies transforms -> gl_Position
  - Outputs extra data (out) for the fragment shader
- Fragment shader:
  - Receives interpolated data (in)
  - Looks up textures, lighting, etc.
  - Outputs a vec4 color
- CPU-side (see the sketch below):
  - Binds shaders, sets uniforms
  - Draws indexed vertices
  - Swaps buffers
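To make the CPU-side steps concrete, here is a minimal per-frame sketch. It assumes a GLFW window plus an already-compiled shader program and configured VAO; the names shaderProgram, vao, indexCount, and window are placeholders, not identifiers from this repo.

```cpp
// One frame of a typical render loop (placeholder names, GLFW assumed)
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

glUseProgram(shaderProgram);   // bind the shader program
// ... set uniforms here (e.g. the MVP matrix, see the matrix section below)

glBindVertexArray(vao);        // bind the VAO; its VBO/IBO configuration comes with it
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);  // draw indexed vertices

glfwSwapBuffers(window);       // present the finished frame
glfwPollEvents();              // process window/input events
```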
The rendering engine uses standard matrix transformations in OpenGL to position and orient 3D geometry in the scene. The core matrix pipeline follows the Model-View-Projection (MVP) pattern:
```cpp
glm::mat4 mvp = projection * view * model;
```
The model matrix transforms an object from local object space to world space. It includes transformations like:
- Translation – moves the object to a specific position in the world.
- Rotation – orients the object around one or more axes.
- Scaling – resizes the object (not used in this code, but part of the standard set of transformations).
Example:
```cpp
glm::mat4 translation = glm::translate(glm::mat4(1.0f), glm::vec3(-1.0f, 0.0f, -3.0f));
glm::mat4 rotation = glm::rotate(glm::mat4(1.0f), rotationAngle, glm::vec3(1.0f, 0.0f, 0.0f));
glm::mat4 model = translation * rotation;
```
The view matrix positions and orients the camera in the world. This is achieved using glm::lookAt, which constructs the view matrix from:
- Camera position (eye)
- Target point (where the camera is looking)
- Up direction
Example:
```cpp
glm::mat4 view = glm::lookAt(
    cameraPos,
    cameraPos + cameraFront,
    cameraUp
);
```
The projection matrix defines how the 3D scene is projected onto the 2D screen. This engine uses a perspective projection:
```cpp
glm::mat4 projection = glm::perspective(
    glm::radians(60.0f),  // field of view
    aspectRatio,          // width / height
    0.1f,                 // near plane
    10.0f                 // far plane
);
```
The final MVP matrix is computed as:
```cpp
glm::mat4 mvp = projection * view * model;
```
This composite matrix is sent to the shader to transform vertex positions from model space all the way to clip space for rasterization.
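One common way to send the composite matrix to the shader is a mat4 uniform. The sketch below assumes the vertex shader declares uniform mat4 u_MVP and that <glm/gtc/type_ptr.hpp> is included for glm::value_ptr; shaderProgram and u_MVP are assumed names, not identifiers taken from this repo.

```cpp
// Upload the composed MVP matrix to the bound shader (assumed names: shaderProgram, u_MVP)
glUseProgram(shaderProgram);
GLint mvpLocation = glGetUniformLocation(shaderProgram, "u_MVP");
glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, glm::value_ptr(mvp));  // GLM is column-major, so no transpose
```

In the vertex shader the matrix is then typically applied as gl_Position = u_MVP * vec4(position, 1.0);.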
Modern OpenGL uses buffer objects to efficiently manage vertex data and rendering configuration. The main buffer types used in this engine are:
- VAO – Vertex Array Object
- VBO – Vertex Buffer Object
- IBO (also called EBO) – Index Buffer Object
- VBLO – Vertex Buffer Layout Object (not part of OpenGL core, but often used in abstraction layers)
A VBO stores vertex attribute data (such as position, color, normals, etc.) in GPU memory. This allows the GPU to access the data directly during rendering without needing to re-upload it from the CPU each frame.
Typical usage:
```cpp
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
```
- GL_ARRAY_BUFFER is used for vertex attributes
- GL_STATIC_DRAW hints that the data won't change often
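For context, vertices in the snippet above would be a plain array in scope, for example something like the hypothetical interleaved data below; note that sizeof(vertices) only yields the full byte size because vertices is an actual array, not a pointer.

```cpp
// Hypothetical interleaved vertex data: position (x, y, z) followed by color (r, g, b).
// Uploaded by the glBufferData call shown above.
float vertices[] = {
//    x      y     z     r     g     b
    -0.5f, -0.5f, 0.0f, 1.0f, 0.0f, 0.0f,
     0.5f, -0.5f, 0.0f, 0.0f, 1.0f, 0.0f,
     0.0f,  0.5f, 0.0f, 0.0f, 0.0f, 1.0f,
};
```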
A VAO stores the configuration of vertex attribute pointers and buffer bindings. It acts like a container that remembers which VBOs and attribute layouts to use during drawing.
Typical usage:
```cpp
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

// Bind and configure VBO
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void*)0);
glEnableVertexAttribArray(0);
```
VAOs simplify rendering by letting you bind a single VAO before drawing, rather than setting up all attributes every frame.
An IBO (or EBO – Element Buffer Object) stores indices into the vertex array, allowing reuse of vertex data and reducing redundancy.
Typical usage:
```cpp
glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
```
During rendering:
```cpp
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);
```
This draws geometry using indices from the IBO instead of relying on vertex order in the VBO.
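To illustrate the reuse an IBO enables, a quad can be built from only four vertices, with its two triangles sharing two of them through the index list (hypothetical data, not taken from this repo):

```cpp
// Four unique corner positions (x, y, z) of a quad
float quadVertices[] = {
    -0.5f, -0.5f, 0.0f,   // 0: bottom-left
     0.5f, -0.5f, 0.0f,   // 1: bottom-right
     0.5f,  0.5f, 0.0f,   // 2: top-right
    -0.5f,  0.5f, 0.0f,   // 3: top-left
};

// Six indices form two triangles; vertices 0 and 2 are reused instead of duplicated
unsigned int quadIndices[] = {
    0, 1, 2,   // first triangle
    2, 3, 0,   // second triangle
};
```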
A VBLO is not part of OpenGL itself but is commonly used in abstraction layers to describe how vertex attributes are structured inside a VBO.
A VBLO helps:
- Define the layout (e.g., position at offset 0, color at offset 12)
- Automate calls to glVertexAttribPointer
For example, a layout definition might look like:
```cpp
layout.push<float>(3); // Position (x, y, z)
layout.push<float>(3); // Color (r, g, b)
layout.push<float>(2); // Texture coordinates (u, v)
```
This layout would then be applied during VAO/VBO setup, simplifying attribute bindings.
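One possible shape for such an abstraction is sketched below. The class and member names are assumptions and the real layout class in this repo may differ; the idea is that each push records an element's component count and type, and a helper later walks those records to issue the matching glVertexAttribPointer calls. An OpenGL loader header (e.g. glad or GLEW) is assumed to be included.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch of a vertex buffer layout abstraction (names are assumptions)
struct LayoutElement {
    unsigned int type;      // e.g. GL_FLOAT
    unsigned int count;     // components per attribute (3 for a vec3, 2 for a vec2, ...)
    unsigned int byteSize;  // size of one component in bytes
};

class VertexBufferLayout {
public:
    template<typename T> void push(unsigned int count);

    const std::vector<LayoutElement>& elements() const { return m_elements; }
    unsigned int stride() const { return m_stride; }

private:
    std::vector<LayoutElement> m_elements;
    unsigned int m_stride = 0;
};

template<> inline void VertexBufferLayout::push<float>(unsigned int count) {
    m_elements.push_back({ GL_FLOAT, count, sizeof(float) });
    m_stride += count * sizeof(float);
}

// Applies the recorded layout to the currently bound VAO + VBO
inline void applyLayout(const VertexBufferLayout& layout) {
    std::size_t offset = 0;
    const auto& elements = layout.elements();
    for (unsigned int i = 0; i < elements.size(); ++i) {
        const LayoutElement& e = elements[i];
        glEnableVertexAttribArray(i);
        glVertexAttribPointer(i, e.count, e.type, GL_FALSE,
                              layout.stride(), (const void*)offset);
        offset += e.count * e.byteSize;
    }
}
```

With the three pushes from the example above, attribute 0 becomes the position, attribute 1 the color, and attribute 2 the texture coordinates, each with the correct stride and byte offset.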
This section explains the mathematics behind the Camera class implementation, which uses GLM for view-matrix computation and mouse-based rotation.
```cpp
glm::mat4 Camera::getViewMatrix() const {
    return glm::lookAt(m_position, m_position + m_direction, m_up);
}
```
Generates the camera's view matrix using the glm::lookAt function.
- m_position: The position of the camera in world space
- m_direction: The forward direction the camera is looking at
- m_up: The world up vector to maintain orientation
The glm::lookAt function constructs a view matrix from:
- Eye (camera position): m_position
- Center (target look-at point): m_position + m_direction
- Up vector: m_up (if this deviates from the world up axis (0, 1, 0), the camera rolls and you get a "swinging boat" effect)
This matrix transforms world coordinates into camera/view space.
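Tying this back to the MVP pipeline above, the camera's view matrix is typically fetched once per frame and combined with the model and projection matrices; camera here is an assumed instance name.

```cpp
// Assumed per-frame usage of the Camera class
glm::mat4 view = camera.getViewMatrix();
glm::mat4 mvp  = projection * view * model;
```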
```cpp
void Camera::mouseUpdate(const glm::vec2& newMousePosition) {
    glm::vec2 delta = newMousePosition - m_oldMousePosition;
    const float ROTATION_SPEED = 0.003f;

    glm::vec3 toRotateAround = glm::cross(m_direction, m_up);
    glm::mat4 rotator = glm::rotate(glm::mat4(1), -delta.x * ROTATION_SPEED, m_up) *
                        glm::rotate(glm::mat4(1), -delta.y * ROTATION_SPEED, toRotateAround);

    m_direction = glm::mat3(rotator) * m_direction;
    m_oldMousePosition = newMousePosition;
}
```
Handles camera rotation based on the mouse's new position. Applies yaw and pitch to the direction vector.
Steps:
- Calculate delta:
  - delta = newMousePosition - m_oldMousePosition
- Yaw (horizontal rotation):
  - Apply rotation around m_up (the global up vector) by delta.x.
- Pitch (vertical rotation):
  - Apply rotation around the right vector toRotateAround = cross(m_direction, m_up) by delta.y.
- Build rotation matrix:
  - rotator = rotate(yaw) * rotate(pitch)
- Update direction:
  - Apply the composite rotation to m_direction.
- Update the previous mouse position.
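As a usage example, mouseUpdate can be driven from a GLFW cursor-position callback. The sketch below assumes a Camera instance named g_camera and the standard GLFW callback signature; these names are placeholders, not code from this repo.

```cpp
#include <GLFW/glfw3.h>
#include <glm/glm.hpp>

Camera g_camera;  // assumed camera instance

// Forwards the raw cursor position to the camera whenever the mouse moves
void cursorPosCallback(GLFWwindow* window, double xpos, double ypos) {
    g_camera.mouseUpdate(glm::vec2(static_cast<float>(xpos), static_cast<float>(ypos)));
}

// During initialization:
// glfwSetCursorPosCallback(window, cursorPosCallback);
```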
Face culling improves rendering performance by discarding faces (triangles) that are not visible to the camera, typically the back faces. Because those faces are never rasterized, the fragment shader does less work.
```cpp
glEnable(GL_CULL_FACE);
```
This tells OpenGL to discard one side of polygons during rendering.
```cpp
glCullFace(GL_BACK);            // Cull back-facing polygons (default)
glCullFace(GL_FRONT);           // Cull front-facing polygons
glCullFace(GL_FRONT_AND_BACK);  // Rarely used, discards all polygons
```
OpenGL determines if a face is front- or back-facing based on vertex winding order (the order vertices are defined in a triangle):
```cpp
glFrontFace(GL_CCW);  // Counter-clockwise vertices are front-facing (default)
glFrontFace(GL_CW);   // Clockwise vertices are front-facing
```
If your models appear inside-out or invisible, you may need to switch the winding mode.
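To tie the settings together, here is a hypothetical setup plus a triangle whose vertices are listed counter-clockwise when viewed from the +Z side; with the defaults shown, it is kept when seen from the front and culled when seen from behind.

```cpp
// Typical one-time setup during initialization
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
glFrontFace(GL_CCW);

// Viewed from +Z, these vertices run counter-clockwise: bottom-left -> bottom-right -> top
float triangle[] = {
    -0.5f, -0.5f, 0.0f,   // bottom-left
     0.5f, -0.5f, 0.0f,   // bottom-right
     0.0f,  0.5f, 0.0f,   // top
};
```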