This project implements various Numerical Methods using C++. The implemented methods include:
- Jacobi Iterative Method
- Gauss-Seidel Iterative Method
- Gauss Elimination Method
- Gauss-Jordan Elimination
- LU Factorization
- Root Finding Methods (Bi-section, False Position, Secant, Newton-Raphson)
- Runge-Kutta Method (Solving Differential Equations)
- Matrix Inversion
```
Numerical_Methods/
├── main.cpp        # Main entry point
├── 2107100.hpp     # Header file 1
├── 2107086.hpp     # Header file 2
├── 2107102.hpp     # Header file 3
└── README.md       # Documentation
```
- A C++ compiler (g++ recommended)
- Basic understanding of numerical methods
- Open a terminal and navigate to the project directory:

  ```
  cd path/to/Numerical_Methods
  ```

- Compile the program:

  ```
  g++ -o numerical_methods main.cpp -std=c++11
  ```

- Run the compiled program:

  ```
  ./numerical_methods
  ```
The Jacobi Method is an iterative algorithm used to solve systems of linear equations. It starts with an initial guess and iteratively updates the solution using:
$$x_i^{(k+1)} = \frac{b_i - \sum_{j \neq i} a_{ij} x_j^{(k)}}{a_{ii}}$$
✅ Advantages:
- Works well for diagonally dominant matrices.
- Parallelizable, since each component of the new iterate depends only on the previous iterate, not on values computed within the same sweep.

❌ Disadvantages:
- Slow convergence for large systems.
- May not converge if the matrix is not diagonally dominant.
An improvement over the Jacobi method. It updates each variable immediately, using the latest computed values:
$$x_i^{(k+1)} = \frac{b_i - \sum_{j < i} a_{ij} x_j^{(k+1)} - \sum_{j > i} a_{ij} x_j^{(k)}}{a_{ii}}$$
✅ Advantages:
- Faster convergence than Jacobi.
- Uses previously updated values within each sweep, improving accuracy.

❌ Disadvantages:
- Sequential updates, so harder to parallelize.
- May not converge for non-diagonally dominant matrices.
A direct method that systematically eliminates variables to solve linear equations using row operations.
1. Convert the system to upper triangular form.
2. Perform back-substitution to obtain the solution.
✅ Advantages:
- Efficient for small to medium-sized systems.
- Works for any non-singular square system.

❌ Disadvantages:
- Computationally expensive ($O(n^3)$) for large matrices.
- Prone to numerical instability unless pivoting is used.
An extension of Gauss Elimination, where the system is transformed into Reduced Row Echelon Form (RREF).
1. Reduce the coefficient matrix to the identity matrix using row operations.
2. The augmented column then contains the solution.
✅ Advantages:
- Finds the solution directly, with no back-substitution step.
- Can be used for matrix inversion.

❌ Disadvantages:
- More computationally expensive than Gauss Elimination.
- Not practical for very large systems.
Decomposes a matrix A into lower (L) and upper (U) triangular matrices, where:
$$A = LU$$

1. Solve $Ly = b$ using forward substitution.
2. Solve $Ux = y$ using back substitution.
✅ Advantages:
- Efficient for solving multiple systems with the same $A$ but different $b$.
- Reduces computational cost when the factorization is reused across solves.

❌ Disadvantages:
- The decomposition itself adds overhead for a single solve.
- Not always stable for ill-conditioned matrices without pivoting.
Finding the roots of a function, i.e. solving $f(x) = 0$, is essential in numerical methods.
- **Bisection:** a bracket-based method; requires two points where $f(a) \cdot f(b) < 0$.
- Repeatedly halves the interval until the root is located to the desired tolerance.

✅ Always converges, but slowly.
- **False Position:** uses linear interpolation between the bracket endpoints to approximate the root.

✅ Often faster than bisection, but may converge slowly for some functions.
- **Secant:** uses the two most recent points to approximate the derivative.

✅ No need for derivative calculations.
- **Newton-Raphson:** uses the first derivative to iteratively refine the root:

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$

✅ Fast convergence for well-behaved functions.
❌ Fails where the function is non-differentiable or the derivative vanishes.
A powerful method for solving Ordinary Differential Equations (ODEs) of the form:
$$\frac{dy}{dx} = f(x, y)$$
The 4th-order Runge-Kutta method uses:

$$y_{n+1} = y_n + \frac{h}{6} (k_1 + 2k_2 + 2k_3 + k_4)$$

where:

$$
\begin{aligned}
k_1 &= h\, f(x_n, y_n) \\
k_2 &= h\, f(x_n + h/2,\ y_n + k_1/2) \\
k_3 &= h\, f(x_n + h/2,\ y_n + k_2/2) \\
k_4 &= h\, f(x_n + h,\ y_n + k_3)
\end{aligned}
$$
✅ More accurate than Euler's Method.
✅ Suitable for non-linear ODEs.
❌ Computationally expensive (four function evaluations per step).
Used to find the inverse of a matrix $A^{-1}$, such that:

$$A A^{-1} = I$$
Using the Gauss-Jordan method:
1. Append the identity matrix to $A$, forming the augmented block $[A \mid I]$.
2. Perform row operations to reduce $A$ to the identity matrix.
3. The right-hand block is then $A^{-1}$.
✅ Used for solving simultaneous equations.
❌ Computationally expensive for large matrices.
These numerical methods are widely used in engineering and science for solving complex equations efficiently.
If you have any suggestions or contributions, feel free to fork this project and create a pull request!
This project is open-source. Feel free to modify, share, and contribute!