This repository contains C programs for important numerical methods used to find roots of equations and perform polynomial operations.
Each method includes an overview, core concepts, formulas, and key takeaways for strong conceptual understanding.
- Bisection Method
- Secant Method
- Newton–Raphson Method
- Fixed Point Iteration Method
- Synthetic Division
- Horner’s Method
- Lagrange Interpolation
- Newton’s Divided Difference Method
- Newton’s Forward Difference Method
- Newton’s Backward Difference Method
- Linear Least Square Method
- Polynomial Regression
- Exponential Regression
- Maxima and Minima of a Tabulated Function
- Derivative Using Forward Difference
- Derivative Using Backward Difference
- Derivative Using Central Difference
- Derivative Using Forward Divided Difference
- Derivative Using Backward Divided Difference
- Trapezoidal Rule
- Composite Trapezoidal Rule
- Simpson 1/3 Rule
- Composite Simpson 1/3 Rule
- Simpson 3/8 Rule
- Composite Simpson 3/8 Rule
- Gauss Elimination
- Gauss Elimination with Pivoting
- Gauss Jordan
- Matrix Inversion with Gauss Jordan
- Doolittle LU Decomposition
- Cholesky Method
- Jacobi Iteration
- Gauss Seidel
- Ordinary Differential Equations using Taylor’s Method
- Ordinary Differential Equations using Picard’s Method
- Ordinary Differential Equations using Euler’s Method
- Ordinary Differential Equations using Heun’s Method
- Ordinary Differential Equations using the Runge-Kutta Method
- Shooting Method
- Elliptic Partial Differential Equation
- Poisson Equation
The Bisection Method is a bracketing method used to find roots of a continuous function by repeatedly dividing an interval into two halves and selecting the subinterval where the sign of the function changes.
- Requires a continuous function
- Initial interval [a, b] must satisfy f(a) · f(b) < 0
- Root lies between a and b
- Method is guaranteed to converge
- Simple and reliable
- Slow convergence
- Accuracy depends on tolerance
- Works for polynomial and non-polynomial equations
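The loop can be sketched in C as follows. This is a minimal illustration, not the repository's own program; the test function f(x) = x² − 2 (root √2) and the tolerance are chosen only for the example.

```c
#include <math.h>

/* Example function: f(x) = x^2 - 2, whose positive root is sqrt(2). */
double f(double x) { return x * x - 2.0; }

/* Bisection: halve [a, b] repeatedly, keeping the half where f changes sign.
   Assumes f(a) * f(b) < 0 on entry. */
double bisection(double a, double b, double tol) {
    while ((b - a) / 2.0 > tol) {
        double mid = (a + b) / 2.0;
        if (f(a) * f(mid) < 0.0)
            b = mid;   /* sign change lies in [a, mid] */
        else
            a = mid;   /* sign change lies in [mid, b] */
    }
    return (a + b) / 2.0;
}
```

Calling `bisection(1.0, 2.0, 1e-7)` converges to about 1.4142136; each iteration halves the interval, which is why convergence is reliable but slow.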
The Secant Method is an open method that approximates the root by using a straight line through two previous points instead of derivatives.
- Uses two initial guesses
- Does not require derivatives
- Faster than Bisection Method
- Convergence is not guaranteed for poor initial guesses
- Division by zero must be handled
- Requires function continuity
- Good initial values improve convergence
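A minimal C sketch of the iteration, with the division-by-zero guard mentioned above. The example function f(x) = x³ − x − 2 and the guesses are illustrative, not from the repository.

```c
#include <math.h>

/* Example function: f(x) = x^3 - x - 2, with a real root near 1.5214. */
double f(double x) { return x * x * x - x - 2.0; }

/* Secant method: replace the derivative with the slope of the line
   through the two most recent points. */
double secant(double x0, double x1, double tol, int max_iter) {
    for (int i = 0; i < max_iter; i++) {
        double f0 = f(x0), f1 = f(x1);
        if (fabs(f1 - f0) < 1e-14)   /* guard against division by zero */
            break;
        double x2 = x1 - f1 * (x1 - x0) / (f1 - f0);
        if (fabs(x2 - x1) < tol)
            return x2;
        x0 = x1;
        x1 = x2;
    }
    return x1;
}
```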
The Newton–Raphson Method is a powerful root-finding technique that uses the function and its derivative to quickly converge to the root.
- Requires derivative of the function
- Uses tangent line approximation
- Very fast convergence (quadratic)
- Sensitive to initial guess
- Fastest among common methods
- Fails if f'(x) = 0
- Requires differentiable functions
- Widely used in engineering and science
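The tangent-line update can be sketched in C as below. The function f(x) = x² − 2 and its derivative are example choices; note the guard for the f'(x) = 0 failure case listed above.

```c
#include <math.h>

/* Example: f(x) = x^2 - 2 and its derivative f'(x) = 2x. */
double f(double x)       { return x * x - 2.0; }
double f_prime(double x) { return 2.0 * x; }

/* Newton-Raphson: follow the tangent line from the current guess. */
double newton(double x0, double tol, int max_iter) {
    for (int i = 0; i < max_iter; i++) {
        double d = f_prime(x0);
        if (fabs(d) < 1e-14)   /* method fails if f'(x) = 0 */
            break;
        double x1 = x0 - f(x0) / d;
        if (fabs(x1 - x0) < tol)
            return x1;
        x0 = x1;
    }
    return x0;
}
```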
The Fixed Point Iteration Method finds roots by rewriting the equation in the form x = g(x) and repeatedly substituting values until convergence.
- Equation must be rearranged to x = g(x)
- Iterative substitution is used
- Convergence depends on derivative of g(x)
- Converges if |g'(x)| < 1
- Simple but slow
- Poor choice of g(x) causes divergence
- Used as a foundation for advanced methods
Synthetic Division is a shortcut method for dividing a polynomial by a linear divisor of the form (x - r).
- Efficient alternative to long division
- Used to find quotient and remainder
- Helps verify roots of polynomials
If remainder = 0, then r is a root of the polynomial
- Works only for linear divisors
- Faster and simpler than long division
- Commonly used in root-finding problems
- Very useful in numerical analysis
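The bring-down/multiply/add pattern can be written compactly in C. As an illustrative example, dividing p(x) = x³ − 6x² + 11x − 6 by (x − 1) gives quotient x² − 5x + 6 and remainder 0, confirming that 1 is a root.

```c
/* coeffs[0..n] hold the n+1 coefficients of p(x), highest degree first.
   Fills q[0..n-1] with the quotient of p(x) / (x - r) and returns the
   remainder, which also equals p(r). */
double synthetic_division(const double *coeffs, int n, double r, double *q) {
    double carry = coeffs[0];            /* bring down the leading coefficient */
    for (int i = 1; i <= n; i++) {
        q[i - 1] = carry;
        carry = coeffs[i] + carry * r;   /* multiply by r, add next coefficient */
    }
    return carry;                        /* remainder */
}
```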
Horner’s Method is an efficient technique used to evaluate the value of a polynomial at a given input.
It simplifies polynomial computation by reducing the number of arithmetic operations required, making it faster and more reliable than direct evaluation.
This method is especially useful in computer programming and numerical computation where performance and accuracy are important.
- Polynomial evaluation
- Algorithmic optimization
- Use of sequential computation
- Reduction of computational complexity
- Efficient use of memory and arithmetic operations
The key idea behind Horner’s Method is restructuring the polynomial evaluation process so that calculations are performed in a nested and sequential manner.
Instead of computing each power of the variable separately, the method:
- Processes coefficients step by step
- Reuses previous results
- Minimizes repeated calculations
This leads to a more efficient and elegant solution.
- Horner’s Method significantly reduces the number of multiplications required
- It is faster than direct polynomial evaluation
- The method works by evaluating the polynomial from highest degree to lowest
- It improves numerical stability in computations
- Time complexity is linear with respect to the degree of the polynomial
- Commonly used in numerical analysis and algorithm design
- Suitable for implementation using loops and arrays
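The nested evaluation above fits in a few lines of C. This sketch assumes coefficients are stored highest degree first; the example polynomial is illustrative.

```c
/* Evaluate p(x) with coefficients a[0..n] (highest degree first).
   Uses only n multiplications and n additions:
   p(x) = (...((a[0]*x + a[1])*x + a[2])*x + ...) + a[n] */
double horner(const double *a, int n, double x) {
    double result = a[0];
    for (int i = 1; i <= n; i++)
        result = result * x + a[i];   /* reuse the previous partial result */
    return result;
}
```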
Lagrange Interpolation is a numerical method used to estimate the value of a function by constructing a polynomial that passes through a given set of data points.
It is particularly useful when the exact functional relationship between variables is unknown but discrete data values are available.
This method provides a direct and systematic way to interpolate values within a known range.
- Interpolation using known data points
- Polynomial construction
- Numerical approximation
- Data-driven computation
- Accuracy within a defined interval
The key idea of Lagrange Interpolation is to construct a single polynomial that exactly fits all the given data points.
Each data point contributes a component to the final polynomial such that:
- The polynomial passes through every known point
- No system of equations needs to be solved
- The interpolation is based solely on available data values
This makes the method straightforward and easy to understand.
- The interpolating polynomial passes exactly through all given points
- No prior knowledge of the function is required
- Best suited for a small number of data points
- Accuracy decreases for higher-degree polynomials
- Adding new data points requires recomputing the polynomial
- Sensitive to rounding errors for large datasets
- Most effective for interpolation within the given data range
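A direct C sketch of the formula: each data point contributes one basis term, and no system of equations is solved. The sample points below (taken from y = x²) are illustrative.

```c
/* Lagrange interpolation: evaluate at x the unique polynomial of degree
   < n passing through the n points (xs[i], ys[i]). */
double lagrange(const double *xs, const double *ys, int n, double x) {
    double sum = 0.0;
    for (int i = 0; i < n; i++) {
        double term = ys[i];              /* build the i-th basis term */
        for (int j = 0; j < n; j++)
            if (j != i)
                term *= (x - xs[j]) / (xs[i] - xs[j]);
        sum += term;
    }
    return sum;
}
```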
Newton’s Divided Difference Method is a numerical interpolation technique used to estimate unknown values based on a set of known data points that may not be equally spaced.
It constructs an interpolating polynomial incrementally, making it flexible and efficient for practical applications.
This method is particularly useful when new data points are added dynamically.
- Interpolation with unequal intervals
- Incremental polynomial construction
- Divided difference tables
- Data-based approximation
- Numerical stability
The key idea behind Newton’s Divided Difference Method is to build the interpolating polynomial step by step using divided differences.
Instead of forming the entire polynomial at once:
- The polynomial is constructed incrementally
- Each new data point adds a new term
- Previously computed values are reused
This makes the method efficient and adaptable.
- Works for both equally and unequally spaced data points
- Allows easy addition of new data points
- Requires less recomputation compared to some other methods
- More efficient than direct interpolation techniques
- Suitable for table-based computation
- Accuracy depends on the number and quality of data points
- Commonly implemented using arrays or tables
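One way to sketch the divided-difference table in C is to build the Newton coefficients in place and then evaluate the polynomial in nested form. This is an illustrative implementation (it assumes C99 for the variable-length array), not the repository's own.

```c
/* Newton's divided differences: build the coefficient table in place,
   then evaluate the Newton-form polynomial at x. The xs need not be
   equally spaced. */
double newton_divided(const double *xs, const double *ys, int n, double x) {
    double coef[n];                       /* C99 variable-length array */
    for (int i = 0; i < n; i++)
        coef[i] = ys[i];
    for (int j = 1; j < n; j++)           /* j-th order divided differences */
        for (int i = n - 1; i >= j; i--)
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j]);
    double result = coef[n - 1];          /* nested (Horner-like) evaluation */
    for (int i = n - 2; i >= 0; i--)
        result = result * (x - xs[i]) + coef[i];
    return result;
}
```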
Newton’s Forward Difference Method is a numerical interpolation technique used to estimate unknown values of a function when the data points are equally spaced.
It is especially effective when the required value lies near the beginning of the data table.
This method is widely used in numerical analysis due to its systematic structure and ease of computation.
- Interpolation with equally spaced data
- Forward difference tables
- Incremental polynomial approximation
- Numerical estimation
- Table-based computation
The key idea of Newton’s Forward Difference Method is to use forward differences to progressively build an interpolating polynomial.
The method:
- Starts from the first data point
- Uses successive differences to approximate the function
- Builds the solution step by step
This structured approach makes the method efficient and easy to implement.
- Applicable only when data points are equally spaced
- Best suited for estimating values near the beginning of the table
- Requires construction of a forward difference table
- Accuracy improves with more data points
- Less efficient for values far from the starting point
- Sensitive to data irregularities
- Commonly used in numerical computation problems
Newton’s Backward Difference Method is a numerical interpolation technique used to estimate unknown values of a function when the data points are equally spaced.
It is particularly effective when the value to be interpolated lies near the end of the data table.
This method complements Newton’s Forward Difference Method and is widely used in numerical analysis.
- Interpolation using equally spaced data
- Backward difference tables
- Polynomial approximation
- Numerical estimation
- Table-based computation
The key idea behind Newton’s Backward Difference Method is to construct the interpolating polynomial using backward differences taken from the last data point.
The method:
- Starts interpolation from the end of the data table
- Uses successive backward differences
- Builds the polynomial incrementally
This approach improves accuracy for values near the last known data point.
- Applicable only for equally spaced data points
- Best suited for interpolation near the end of the table
- Requires construction of a backward difference table
- More accurate than forward method for values near the last data point
- Accuracy improves with more data points
- Sensitive to data spacing errors
- Often used as the counterpart of the forward difference method
The Linear Least Square Method is a numerical technique used to find the best-fit straight line for a given set of experimental or observed data points.
When data does not lie perfectly on a straight line, this method helps determine a line that represents the overall trend of the data with minimum error.
The core idea of the Linear Least Square Method is to minimize the total error between the observed values and the values predicted by a straight-line model.
Instead of passing exactly through every data point, the method finds a line that balances the deviations in such a way that the sum of squared errors becomes as small as possible.
- Real-world data often contains errors, noise, or irregularities
- A straight line is assumed to represent the relationship between variables
- The method finds the line that gives the closest possible approximation
- Errors above and below the line are treated equally by squaring them
- The resulting line is called the best-fit line
- Used when the relationship between variables is approximately linear
- Works best when the number of data points is more than two
- Based on the principle of error minimization
- Helps in prediction and trend analysis
- Commonly used in engineering experiments and statistical studies
- Simple, efficient, and easy to implement
- Forms the foundation for advanced regression techniques
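Minimizing the sum of squared errors for the line y = a + b·x leads to the normal equations, which have a closed-form solution. A minimal C sketch (the data in the test is illustrative):

```c
/* Fit y = a + b*x by least squares. The normal equations give
   b = (n*Sxy - Sx*Sy) / (n*Sxx - Sx^2),  a = (Sy - b*Sx) / n. */
void least_squares(const double *x, const double *y, int n,
                   double *a, double *b) {
    double sx = 0.0, sy = 0.0, sxx = 0.0, sxy = 0.0;
    for (int i = 0; i < n; i++) {
        sx  += x[i];
        sy  += y[i];
        sxx += x[i] * x[i];
        sxy += x[i] * y[i];
    }
    *b = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    *a = (sy - *b * sx) / n;
}
```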
Polynomial Regression is a numerical and statistical technique used to model non-linear relationships between variables.
When data points do not follow a straight-line trend, polynomial regression helps fit a smooth curved line that better represents the underlying pattern in the data.
The core idea of polynomial regression is to approximate a set of data points using a polynomial curve of suitable degree.
Instead of forcing linear behavior, the method allows the curve to bend, making it more flexible in representing complex data trends.
The best-fit curve is obtained by minimizing the overall error between observed data and predicted values.
- Real-world data is often non-linear
- A polynomial curve is assumed to represent the data pattern
- Higher-degree terms allow the curve to adjust more accurately
- The curve does not pass through all points but follows the overall trend
- Errors are minimized using the least square principle
- Used when linear regression is not sufficient
- Degree of polynomial depends on data behavior
- Higher degree improves fitting but may cause overfitting
- Requires more computations than linear regression
- Widely used for curve fitting and approximation
- Sensitive to extreme data values
- Provides better accuracy for curved data trends
Exponential Regression is a numerical technique used to model data that shows rapid growth or decay.
It is especially useful when changes in the dependent variable increase or decrease at a proportional rate rather than a constant rate.
The core idea of exponential regression is to represent data using an exponential curve that closely follows the observed trend.
Since exponential behavior is non-linear in nature, the data is transformed into a form that allows the use of least square approximation.
The final curve provides a smooth and realistic representation of growth or decay patterns.
- Used for data showing exponential growth or decay
- The rate of change depends on the current value
- Data transformation helps simplify the fitting process
- Least square principle is applied after transformation
- The resulting curve captures natural growth trends
- Suitable for population growth and decay models
- Requires all observed values to be positive
- More sensitive to small errors in data
- Provides better results for rapidly changing data
- Commonly used in forecasting and prediction
- Plays an important role in scientific modeling
- Forms the basis for many advanced growth models
Maxima and Minima analysis deals with identifying the highest and lowest points of a function based on given tabulated data.
This technique is useful when the function is not available in analytical form and only discrete data values are known.
The core concept behind finding maxima and minima of a tabulated function is based on the rate of change of the data.
By examining how the function values increase or decrease between successive points, we can locate points where the trend changes direction.
Such change indicates the presence of a maximum or minimum value.
- Maxima and minima occur where the trend of data changes direction
- Increasing to decreasing trend indicates a maximum
- Decreasing to increasing trend indicates a minimum
- Finite differences are used to analyze the variation
- Works effectively for uniformly spaced data
- Applicable only when data is given in tabular form
- Assumes data points are closely and evenly spaced
- Accuracy depends on the quality of data
- Used when the explicit function is unknown
- Simple and computationally efficient
- Commonly applied in engineering experiments
- Helps in identifying critical points
Forward Difference Method is a numerical technique used to approximate the first derivative of a function using forward neighboring data points. It is mainly applied when the derivative is required at the beginning of a data table.
The method is based on replacing the derivative definition with a finite difference expression using values ahead of the point of interest. It works effectively when data points are uniformly spaced.
- Uses values ahead of the target point
- Suitable for the initial point of a dataset
- Based on finite difference approximation
- Accuracy improves with smaller step size
- Simple and easy to compute
- Requires equally spaced data
- Less accurate than central difference
- Error depends on step size
- Common in numerical differentiation
- Used in engineering computations
- Best for boundary points
- Forms the basis of higher-order formulas
Backward Difference Method approximates the first derivative using values behind the given point. It is commonly used at the end of a data table.
The derivative is approximated using finite differences of previous data values. It is particularly useful when forward values are unavailable.
- Uses values behind the target point
- Suitable for the last point of data
- Based on backward finite difference
- Works for uniformly spaced values
- Easy implementation
- Accuracy depends on spacing
- Less accurate than central difference
- Commonly used at boundaries
- Simple computational method
- Widely applied in numerical analysis
- Useful in practical data problems
- Step size affects error
Central Difference Method provides a more accurate approximation of the derivative by using data points on both sides of the target point.
It averages the forward and backward differences, giving better accuracy compared to one-sided methods.
- Uses both previous and next values
- More accurate than forward/backward
- Works for equally spaced data
- Error is smaller compared to one-sided methods
- Balanced approximation
- Not suitable at boundary points
- Requires uniform spacing
- Higher accuracy
- Common in physics simulations
- Used in numerical modeling
- Error decreases with smaller step size
- Preferred for interior points
Forward Divided Difference method estimates derivatives when data points are not equally spaced.
It uses Newton’s forward divided difference interpolation formula and differentiates it to obtain the derivative.
- Works for unequal spacing
- Based on interpolation polynomial
- Suitable near the beginning
- Flexible for irregular data
- More general approach
- Computationally intensive
- Requires divided difference table
- Useful in real-world data
- More flexible than finite difference
- Applied in interpolation problems
- Handles non-uniform intervals
- Accuracy depends on polynomial degree
Backward Divided Difference method approximates derivatives for unequally spaced data near the end of the dataset.
Based on Newton’s backward divided difference interpolation polynomial.
- Suitable for irregular spacing
- Used near the end of table
- Derived from interpolation formula
- Flexible approach
- Useful for real data
- Requires construction of divided difference table
- Handles non-uniform intervals
- Slightly complex calculations
- Useful in applied mathematics
- More adaptable than simple finite differences
- Error depends on interpolation order
- Important in curve fitting
Trapezoidal Rule is a numerical integration method that approximates the area under a curve using trapezoids.
The region under the curve is divided into trapezoidal sections and their areas are summed.
- Approximates area using trapezoids
- Simple numerical integration method
- Suitable for small intervals
- Works for tabulated data
- Linear approximation
- Requires function values at endpoints
- Moderate accuracy
- Easy to implement
- Error decreases with smaller intervals
- Widely used in engineering
- Applicable to definite integrals
- Basis for composite rule
Composite Trapezoidal Rule improves accuracy by dividing the interval into multiple subintervals.
It applies the trapezoidal rule repeatedly over the smaller intervals.
- Divides interval into equal parts
- Improves accuracy
- Suitable for large intervals
- Reduces error
- Works for tabulated values
- Requires equally spaced intervals
- Accuracy increases with more subintervals
- Simple extension of basic rule
- Useful in scientific computation
- Applied in numerical integration
- Error proportional to step size squared
- Computationally efficient
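The repeated application over equal subintervals can be sketched in C as below; the integrand f(x) = x² and the subinterval count are illustrative.

```c
/* Example integrand: f(x) = x^2, whose integral over [0, 1] is 1/3. */
double f(double x) { return x * x; }

/* Composite trapezoidal rule with n equal subintervals:
   h * [f(a)/2 + f(a+h) + ... + f(b-h) + f(b)/2] */
double trapezoid(double a, double b, int n) {
    double h = (b - a) / n;
    double sum = 0.5 * (f(a) + f(b));    /* endpoints get half weight */
    for (int i = 1; i < n; i++)
        sum += f(a + i * h);             /* interior points get full weight */
    return sum * h;
}
```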
Simpson’s 1/3 Rule approximates integrals using parabolic arcs instead of straight lines.
Fits a second-degree polynomial through three consecutive points.
- Uses quadratic approximation
- More accurate than trapezoidal
- Requires even number of intervals
- Suitable for smooth curves
- Based on interpolation
- Needs equally spaced data
- Requires odd number of data points
- Higher accuracy
- Common in physics and engineering
- Error proportional to fourth derivative
- Efficient numerical integration
- Preferred over trapezoidal rule
The Composite Simpson’s 1/3 Rule applies Simpson’s 1/3 Rule over multiple subintervals for better accuracy.
It divides the interval into an even number of subintervals and applies the rule repeatedly.
- Improved precision
- Even number of intervals required
- Quadratic fitting in each pair
- Efficient and accurate
- Suitable for smooth data
- Requires uniform spacing
- More accurate than composite trapezoidal
- Widely used
- Error reduces significantly
- Efficient for practical problems
- Used in numerical computation
- Important integration method
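In C the composite form reduces to alternating weights 4 and 2 on the interior points. The integrand f(x) = x³ is chosen because Simpson's rule is exact for cubics, making the result easy to check; it is an example, not the repository's code.

```c
/* Example integrand: f(x) = x^3; Simpson's rule is exact for cubics. */
double f(double x) { return x * x * x; }

/* Composite Simpson's 1/3 rule; n must be even.
   Weights: 1, 4, 2, 4, ..., 2, 4, 1, all times h/3. */
double simpson(double a, double b, int n) {
    double h = (b - a) / n;
    double sum = f(a) + f(b);
    for (int i = 1; i < n; i++)
        sum += (i % 2 ? 4.0 : 2.0) * f(a + i * h);
    return sum * h / 3.0;
}
```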
Simpson’s 3/8 Rule approximates integration using cubic polynomials.
Fits a third-degree polynomial through four consecutive points.
- Uses cubic approximation
- Requires multiple of three intervals
- More accurate for certain functions
- Polynomial-based integration
- Alternative to 1/3 rule
- Requires equal spacing
- Slightly more computation
- Useful for specific interval counts
- Good for smooth functions
- Error depends on fourth derivative
- Applied in engineering mathematics
- Extension of Simpson methods
The Composite Simpson’s 3/8 Rule extends Simpson’s 3/8 Rule across multiple sets of three subintervals.
It applies the cubic approximation repeatedly for improved accuracy.
- Multiple of three intervals
- Higher accuracy
- Suitable for long intervals
- Works with uniform spacing
- Efficient integration method
- Requires structured interval division
- Good precision
- Slightly complex
- Useful in scientific problems
- Reduces approximation error
- Alternative to composite 1/3
- Based on cubic fitting
Gauss Elimination is a direct method used to solve systems of linear equations.
Transforms the coefficient matrix into upper triangular form using row operations.
- Forward elimination
- Back substitution
- Systematic row operations
- Efficient for small systems
- Deterministic solution method
- Requires non-zero pivot elements
- Sensitive to rounding errors
- Simple algorithm
- Widely used in linear algebra
- Basis for advanced methods
- Computational cost increases with size
- Used in engineering analysis
Gauss Elimination with Pivoting is an improved version of Gauss Elimination that reduces numerical instability.
It swaps rows to place the largest available pivot element in position.
- Improves numerical stability
- Reduces rounding error
- Uses partial pivoting
- More reliable
- Preferred for large systems
- Prevents division by small numbers
- Increases accuracy
- Slightly more computation
- Common in scientific computing
- Essential for ill-conditioned systems
- Enhances reliability
- Standard practical approach
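The row-swap and elimination steps can be sketched in C for a fixed 3×3 system (a real implementation would take the size as a parameter; the fixed N keeps this illustration short).

```c
#include <math.h>

#define N 3   /* fixed size keeps the sketch short */

/* Solve A x = b with partial pivoting. A and b are overwritten.
   Returns 0 on success, -1 if a pivot is numerically zero. */
int gauss_pivot(double a[N][N], double b[N], double x[N]) {
    for (int k = 0; k < N; k++) {
        int p = k;                        /* find largest pivot in column k */
        for (int i = k + 1; i < N; i++)
            if (fabs(a[i][k]) > fabs(a[p][k]))
                p = i;
        if (fabs(a[p][k]) < 1e-12)
            return -1;
        if (p != k) {                     /* swap rows k and p */
            for (int j = 0; j < N; j++) {
                double t = a[k][j]; a[k][j] = a[p][j]; a[p][j] = t;
            }
            double t = b[k]; b[k] = b[p]; b[p] = t;
        }
        for (int i = k + 1; i < N; i++) { /* eliminate below the pivot */
            double m = a[i][k] / a[k][k];
            for (int j = k; j < N; j++)
                a[i][j] -= m * a[k][j];
            b[i] -= m * b[k];
        }
    }
    for (int i = N - 1; i >= 0; i--) {    /* back substitution */
        double s = b[i];
        for (int j = i + 1; j < N; j++)
            s -= a[i][j] * x[j];
        x[i] = s / a[i][i];
    }
    return 0;
}
```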
Gauss Jordan Method solves linear systems by converting matrix into reduced row echelon form.
Eliminates variables both above and below pivot positions.
- Direct solution without back substitution
- Converts matrix to identity form
- Systematic row operations
- Suitable for small systems
- Efficient for inversion
- More computation than Gauss elimination
- Used for matrix inversion
- Produces exact reduced form
- Useful in theoretical work
- Sensitive to rounding
- Straightforward algorithm
- Common academic method
Matrix Inversion with Gauss Jordan uses Gauss Jordan elimination to compute the inverse of a matrix.
It augments the matrix with the identity matrix and performs row operations until the identity is obtained on the left.
- Augmented matrix method
- Converts left to identity
- Right becomes inverse
- Direct inversion technique
- Based on row operations
- Matrix must be non-singular
- Computationally intensive
- Useful in solving systems
- Accuracy depends on pivoting
- Common in linear algebra
- Applied in engineering
- Important numerical method
Doolittle Method decomposes a matrix into Lower (L) and Upper (U) triangular matrices.
The diagonal elements of L are taken as 1, and the system is solved in two stages.
- Factorization method
- Simplifies repeated solutions
- Efficient for multiple right-hand sides
- Structured approach
- Used in linear systems
- Requires non-singular matrix
- Reduces computation effort
- Useful in engineering simulations
- Stable for well-conditioned matrices
- Common in numerical methods
- Saves time for repeated problems
- Basis for advanced decompositions
Cholesky Method is used to solve linear systems where the matrix is symmetric and positive definite.
Decomposes matrix into product of a lower triangular matrix and its transpose.
- Applicable to special matrices
- Efficient and stable
- Reduces computation
- Uses square roots
- Faster than LU
- Matrix must be symmetric positive definite
- More efficient than general LU
- Widely used in optimization
- Less storage required
- Numerically stable
- Important in engineering problems
- Common in scientific computing
Jacobi Method is an iterative method for solving systems of linear equations.
Each variable is computed using values from the previous iteration.
- Iterative approach
- Requires initial guess
- Suitable for diagonally dominant matrices
- Simple computation
- Convergence depends on matrix
- Slow convergence
- Easy to implement
- Works well for large sparse systems
- Parallelizable
- Requires convergence condition
- Used in numerical simulations
- Important iterative technique
Gauss Seidel Method improves upon Jacobi by using updated values immediately.
Uses newly calculated values within the same iteration step.
- Faster convergence than Jacobi
- Iterative method
- Requires initial approximation
- Works for diagonally dominant matrices
- Efficient for large systems
- Convergence condition required
- Sensitive to initial guess
- Widely used iterative solver
- Efficient memory usage
- Common in PDE solving
- Faster than Jacobi
- Important numerical method
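The key difference from Jacobi is visible in the inner loop below: each updated x[i] is read immediately within the same sweep. The fixed 3×3 size and diagonally dominant test system are illustrative.

```c
#define N 3

/* Gauss-Seidel sweep: each updated x[i] is used immediately.
   Assumes the matrix is diagonally dominant so the iteration converges. */
void gauss_seidel(double a[N][N], double b[N], double x[N], int iters) {
    for (int it = 0; it < iters; it++)
        for (int i = 0; i < N; i++) {
            double s = b[i];
            for (int j = 0; j < N; j++)
                if (j != i)
                    s -= a[i][j] * x[j];  /* x[j] may already be updated */
            x[i] = s / a[i][i];
        }
}
```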
Taylor’s Method solves ordinary differential equations using Taylor series expansion.
Expands the function in series form around a point.
- Uses higher-order derivatives
- Accurate method
- Requires derivative computation
- Series-based approach
- Suitable for small intervals
- Complex for higher orders
- Accurate for smooth functions
- Computationally intensive
- Requires derivative expressions
- Used in theoretical problems
- High precision possible
- Basis for advanced solvers
Picard’s Method is an iterative technique for solving ODEs.
Uses successive approximations based on integral form.
- Iterative refinement
- Converges to exact solution
- Based on integral equation
- Theoretical importance
- Used for proof of existence
- Slow convergence
- Useful in analysis
- Requires initial function
- Provides approximate solutions
- Rarely used for computation
- Important in theory
- Foundation for iterative methods
Euler’s Method is the simplest numerical method for solving ODEs.
Uses tangent line approximation to advance step by step.
- First-order method
- Simple and fast
- Requires initial value
- Step-by-step calculation
- Low accuracy
- Error proportional to step size
- Easy implementation
- Suitable for learning
- Less accurate
- Basis for improved methods
- Used in simple models
- Computationally inexpensive
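The step-by-step tangent update can be sketched in C. The test problem y' = y with y(0) = 1 (exact solution eˣ) is illustrative and makes the first-order error visible: halving the step size roughly halves the error.

```c
/* Example ODE: y' = y with y(0) = 1, whose exact solution is e^x. */
double f(double x, double y) { (void)x; return y; }

/* Euler's method: advance along the tangent line, one step at a time. */
double euler(double x0, double y0, double x_end, int steps) {
    double h = (x_end - x0) / steps;
    double x = x0, y = y0;
    for (int i = 0; i < steps; i++) {
        y += h * f(x, y);   /* y_{n+1} = y_n + h * f(x_n, y_n) */
        x += h;
    }
    return y;
}
```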
Heun’s Method is an improved Euler method using average slope.
Calculates predictor and corrector slopes.
- Predictor-corrector method
- More accurate than Euler
- Uses average slope
- Second-order accuracy
- Simple improvement
- Requires two slope evaluations
- Better stability
- Moderate accuracy
- Used in practical problems
- Improved convergence
- Efficient method
- Common in engineering
Runge-Kutta Methods provide high-accuracy solutions to ODEs without computing higher derivatives.
Uses weighted average of slopes at different points within interval.
- Fourth-order method common
- High accuracy
- No higher derivatives needed
- Stable and reliable
- Widely used
- More computation per step
- Very accurate
- Standard method in practice
- Suitable for nonlinear equations
- Efficient and stable
- Common in simulations
- Foundation for advanced solvers
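The classical fourth-order variant evaluates four slopes per step and combines them in a weighted average. A minimal C sketch on the illustrative problem y' = y, y(0) = 1 (exact solution eˣ):

```c
/* Example ODE: y' = y with y(0) = 1, exact solution e^x. */
double f(double x, double y) { (void)x; return y; }

/* Classical fourth-order Runge-Kutta: weighted average of four slopes. */
double rk4(double x0, double y0, double x_end, int steps) {
    double h = (x_end - x0) / steps;
    double x = x0, y = y0;
    for (int i = 0; i < steps; i++) {
        double k1 = f(x, y);                           /* slope at the start */
        double k2 = f(x + h / 2.0, y + h * k1 / 2.0);  /* slope at midpoint  */
        double k3 = f(x + h / 2.0, y + h * k2 / 2.0);  /* refined midpoint   */
        double k4 = f(x + h, y + h * k3);              /* slope at the end   */
        y += h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0;
        x += h;
    }
    return y;
}
```

Note how no derivative of f beyond f itself is needed, unlike Taylor's Method.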
Shooting Method solves boundary value problems by converting them into initial value problems.
Guesses missing initial conditions and iteratively adjusts them.
- Converts BVP to IVP
- Requires iteration
- Uses ODE solver internally
- Suitable for second-order equations
- Trial-and-error approach
- Requires good initial guess
- May face convergence issues
- Common in physics
- Used in heat transfer problems
- Iterative technique
- Combines root-finding methods
- Effective for smooth problems
Elliptic PDEs describe steady-state phenomena such as heat distribution.
No time dependence; solution depends on boundary conditions.
- Represents equilibrium state
- No time variable
- Requires boundary conditions
- Common in Laplace equation
- Smooth solutions
- Used in steady heat flow
- Boundary value problems
- Requires numerical discretization
- Finite difference method common
- Important in engineering
- Stable solutions
- Fundamental PDE type
Poisson Equation is a type of elliptic PDE with a source term.
Extends Laplace equation by including a forcing function.
- Includes source term
- Used in electrostatics
- Models heat with sources
- Boundary value problem
- Requires numerical methods
- Special case of elliptic PDE
- Widely used in physics
- Solved using finite difference or finite element
- Requires boundary conditions
- Important in potential theory
- Applied in engineering
- Fundamental PDE model
✨ From theory to computation — mastering the art of numerical analysis. ✨