upendrakda/Numerical-Methods
📘 Numerical Methods

This repository contains C programs for important numerical methods used to find roots of equations and perform polynomial operations.
Each method includes an overview, core concepts, formulas, and key takeaways for strong conceptual understanding.


📌 Contents

  1. Bisection Method
  2. Secant Method
  3. Newton–Raphson Method
  4. Fixed Point Iteration Method
  5. Synthetic Division
  6. Horner’s Method
  7. Lagrange Interpolation
  8. Newton’s Divided Difference Method
  9. Newton’s Forward Difference Method
  10. Newton’s Backward Difference Method
  11. Linear Least Square Method
  12. Polynomial Regression
  13. Exponential Regression
  14. Maxima and Minima of a Tabulated Function
  15. Derivative Using Forward Difference
  16. Derivative Using Backward Difference
  17. Derivative Using Central Difference
  18. Derivative Using Forward Divided Difference
  19. Derivative Using Backward Divided Difference
  20. Trapezoidal Rule
  21. Composite Trapezoidal Rule
  22. Simpson 1/3 Rule
  23. Composite Simpson 1/3 Rule
  24. Simpson 3/8 Rule
  25. Composite Simpson 3/8 Rule
  26. Gauss Elimination
  27. Gauss Elimination with Pivoting
  28. Gauss Jordan
  29. Matrix Inversion with Gauss Jordan
  30. Doolittle LU Decomposition
  31. Cholesky Method
  32. Jacobi Iteration
  33. Gauss Seidel
  34. Ordinary Differential Equations using Taylor's Method
  35. Ordinary Differential Equations using Picard's Method
  36. Ordinary Differential Equations using Euler's Method
  37. Ordinary Differential Equations using Heun's Method
  38. Ordinary Differential Equations using the Runge–Kutta Method
  39. Shooting Method
  40. Elliptic Partial Differential Equation
  41. Poisson Equation

📘 Bisection Method

🔍 Overview

The Bisection Method is a bracketing method used to find roots of a continuous function by repeatedly dividing an interval into two halves and selecting the subinterval where the sign of the function changes.


🧠 Core Concepts

  • Requires a continuous function
  • Initial interval [a, b] must satisfy
    f(a) · f(b) < 0
  • Root lies between a and b
  • Convergence is guaranteed, since the interval is halved at every step

⭐ Important Points

  • Simple and reliable
  • Slow convergence
  • Accuracy depends on tolerance
  • Works for polynomial and non-polynomial equations

⬆️ Get Back to Contents


📘 Secant Method

🔍 Overview

The Secant Method is an open method that approximates the root by using a straight line through two previous points instead of derivatives.


🧠 Core Concepts

  • Uses two initial guesses
  • Does not require derivatives
  • Faster than Bisection Method
  • Convergence is not guaranteed for poor initial guesses

⭐ Important Points

  • Faster than bisection
  • Division by zero must be handled
  • Requires function continuity
  • Good initial values improve convergence

⬆️ Get Back to Contents


📘 Newton–Raphson Method

🔍 Overview

The Newton–Raphson Method is a powerful root-finding technique that uses the function and its derivative to quickly converge to the root.


🧠 Core Concepts

  • Requires derivative of the function
  • Uses tangent line approximation
  • Very fast convergence (quadratic)
  • Sensitive to initial guess

⭐ Important Points

  • Fastest among common methods
  • Fails if f'(x) = 0
  • Requires differentiable functions
  • Widely used in engineering and science

⬆️ Get Back to Contents


📘 Fixed Point Iteration Method

🔍 Overview

The Fixed Point Iteration Method finds roots by rewriting the equation in the form
x = g(x) and repeatedly substituting values until convergence.


🧠 Core Concepts

  • Equation must be rearranged to x = g(x)
  • Iterative substitution is used
  • Convergence depends on derivative of g(x)

⭐ Important Points

  • Converges if
    |g'(x)| < 1
  • Simple but slow
  • Poor choice of g(x) causes divergence
  • Used as a foundation for advanced methods

⬆️ Get Back to Contents


📘 Synthetic Division

🔍 Overview

Synthetic Division is a shortcut method for dividing a polynomial by a linear divisor of the form (x − r).


🧠 Core Concepts

  • Efficient alternative to long division
  • Used to find quotient and remainder
  • Helps verify roots of polynomials

📐 Key Idea

If remainder = 0, then r is a root of the polynomial


⭐ Important Points

  • Works only for linear divisors
  • Faster and simpler than long division
  • Commonly used in root-finding problems
  • Very useful in numerical analysis

⬆️ Get Back to Contents


📘 Horner’s Method

📌 Overview

Horner’s Method is an efficient technique used to evaluate the value of a polynomial at a given input.
It simplifies polynomial computation by reducing the number of arithmetic operations required, making it faster and more reliable than direct evaluation.

This method is especially useful in computer programming and numerical computation where performance and accuracy are important.


🧠 Core Concepts

  • Polynomial evaluation
  • Algorithmic optimization
  • Use of sequential computation
  • Reduction of computational complexity
  • Efficient use of memory and arithmetic operations

💡 Key Idea

The key idea behind Horner’s Method is restructuring the polynomial evaluation process so that calculations are performed in a nested and sequential manner.

Instead of computing each power of the variable separately, the method:

  • Processes coefficients step by step
  • Reuses previous results
  • Minimizes repeated calculations

This leads to a more efficient and elegant solution.


⭐ Important Points to Know

  • Horner’s Method significantly reduces the number of multiplications required
  • It is faster than direct polynomial evaluation
  • The method works by evaluating the polynomial from highest degree to lowest
  • It improves numerical stability in computations
  • Time complexity is linear with respect to the degree of the polynomial
  • Commonly used in numerical analysis and algorithm design
  • Suitable for implementation using loops and arrays

⬆️ Get Back to Contents


📘 Lagrange Interpolation

📌 Overview

Lagrange Interpolation is a numerical method used to estimate the value of a function by constructing a polynomial that passes through a given set of data points.
It is particularly useful when the exact functional relationship between variables is unknown but discrete data values are available.

This method provides a direct and systematic way to interpolate values within a known range.


🧠 Core Concepts

  • Interpolation using known data points
  • Polynomial construction
  • Numerical approximation
  • Data-driven computation
  • Accuracy within a defined interval

💡 Key Idea

The key idea of Lagrange Interpolation is to construct a single polynomial that exactly fits all the given data points.

Each data point contributes a component to the final polynomial such that:

  • The polynomial passes through every known point
  • No system of equations needs to be solved
  • The interpolation is based solely on available data values

This makes the method straightforward and easy to understand.


⭐ Important Points to Know

  • The interpolating polynomial passes exactly through all given points
  • No prior knowledge of the function is required
  • Best suited for a small number of data points
  • Accuracy decreases for higher-degree polynomials
  • Adding new data points requires recomputing the polynomial
  • Sensitive to rounding errors for large datasets
  • Most effective for interpolation within the given data range

⬆️ Get Back to Contents


📘 Newton’s Divided Difference Method

📌 Overview

Newton’s Divided Difference Method is a numerical interpolation technique used to estimate unknown values based on a set of known data points that may not be equally spaced.
It constructs an interpolating polynomial incrementally, making it flexible and efficient for practical applications.

This method is particularly useful when new data points are added dynamically.


🧠 Core Concepts

  • Interpolation with unequal intervals
  • Incremental polynomial construction
  • Divided difference tables
  • Data-based approximation
  • Numerical stability

💡 Key Idea

The key idea behind Newton’s Divided Difference Method is to build the interpolating polynomial step by step using divided differences.

Instead of forming the entire polynomial at once:

  • The polynomial is constructed incrementally
  • Each new data point adds a new term
  • Previously computed values are reused

This makes the method efficient and adaptable.


⭐ Important Points to Know

  • Works for both equally and unequally spaced data points
  • Allows easy addition of new data points
  • Requires less recomputation compared to some other methods
  • More efficient than direct interpolation techniques
  • Suitable for table-based computation
  • Accuracy depends on the number and quality of data points
  • Commonly implemented using arrays or tables

⬆️ Get Back to Contents


📘 Newton’s Forward Difference Method

📌 Overview

Newton’s Forward Difference Method is a numerical interpolation technique used to estimate unknown values of a function when the data points are equally spaced.
It is especially effective when the required value lies near the beginning of the data table.

This method is widely used in numerical analysis due to its systematic structure and ease of computation.


🧠 Core Concepts

  • Interpolation with equally spaced data
  • Forward difference tables
  • Incremental polynomial approximation
  • Numerical estimation
  • Table-based computation

💡 Key Idea

The key idea of Newton’s Forward Difference Method is to use forward differences to progressively build an interpolating polynomial.

The method:

  • Starts from the first data point
  • Uses successive differences to approximate the function
  • Builds the solution step by step

This structured approach makes the method efficient and easy to implement.


⭐ Important Points to Know

  • Applicable only when data points are equally spaced
  • Best suited for estimating values near the beginning of the table
  • Requires construction of a forward difference table
  • Accuracy improves with more data points
  • Less efficient for values far from the starting point
  • Sensitive to data irregularities
  • Commonly used in numerical computation problems

⬆️ Get Back to Contents


📘 Newton’s Backward Difference Method

📌 Overview

Newton’s Backward Difference Method is a numerical interpolation technique used to estimate unknown values of a function when the data points are equally spaced.
It is particularly effective when the value to be interpolated lies near the end of the data table.

This method complements Newton’s Forward Difference Method and is widely used in numerical analysis.


🧠 Core Concepts

  • Interpolation using equally spaced data
  • Backward difference tables
  • Polynomial approximation
  • Numerical estimation
  • Table-based computation

💡 Key Idea

The key idea behind Newton’s Backward Difference Method is to construct the interpolating polynomial using backward differences taken from the last data point.

The method:

  • Starts interpolation from the end of the data table
  • Uses successive backward differences
  • Builds the polynomial incrementally

This approach improves accuracy for values near the last known data point.


⭐ Important Points to Know

  • Applicable only for equally spaced data points
  • Best suited for interpolation near the end of the table
  • Requires construction of a backward difference table
  • More accurate than forward method for values near the last data point
  • Accuracy improves with more data points
  • Sensitive to data spacing errors
  • Often compared with forward difference method in exams

⬆️ Get Back to Contents


📘 Linear Least Square Method

📌 Overview

The Linear Least Square Method is a numerical technique used to find the best-fit straight line for a given set of experimental or observed data points.
When data does not lie perfectly on a straight line, this method helps determine a line that represents the overall trend of the data with minimum error.


🧠 Core Concept

The core idea of the Linear Least Square Method is to minimize the total error between the observed values and the values predicted by a straight-line model.

Instead of passing exactly through every data point, the method finds a line that balances the deviations in such a way that the sum of squared errors becomes as small as possible.


💡 Key Idea

  • Real-world data often contains errors, noise, or irregularities
  • A straight line is assumed to represent the relationship between variables
  • The method finds the line that gives the closest possible approximation
  • Errors above and below the line are treated equally by squaring them
  • The resulting line is called the best-fit line

⭐ Important Points to Know

  • Used when the relationship between variables is approximately linear
  • Works best when the number of data points is more than two
  • Based on the principle of error minimization
  • Helps in prediction and trend analysis
  • Commonly used in engineering experiments and statistical studies
  • Simple, efficient, and easy to implement
  • Forms the foundation for advanced regression techniques

⬆️ Get Back to Contents


📘 Polynomial Regression

📌 Overview

Polynomial Regression is a numerical and statistical technique used to model non-linear relationships between variables.
When data points do not follow a straight-line trend, polynomial regression helps fit a smooth curved line that better represents the underlying pattern in the data.


🧠 Core Concept

The core idea of polynomial regression is to approximate a set of data points using a polynomial curve of suitable degree.
Instead of forcing linear behavior, the method allows the curve to bend, making it more flexible in representing complex data trends.

The best-fit curve is obtained by minimizing the overall error between observed data and predicted values.


💡 Key Idea

  • Real-world data is often non-linear
  • A polynomial curve is assumed to represent the data pattern
  • Higher-degree terms allow the curve to adjust more accurately
  • The curve does not pass through all points but follows the overall trend
  • Errors are minimized using the least square principle

⭐ Important Points to Know

  • Used when linear regression is not sufficient
  • Degree of polynomial depends on data behavior
  • Higher degree improves fitting but may cause overfitting
  • Requires more computations than linear regression
  • Widely used for curve fitting and approximation
  • Sensitive to extreme data values
  • Provides better accuracy for curved data trends

⬆️ Get Back to Contents


📘 Exponential Regression

📌 Overview

Exponential Regression is a numerical technique used to model data that shows rapid growth or decay.
It is especially useful when changes in the dependent variable increase or decrease at a proportional rate rather than a constant rate.


🧠 Core Concept

The core idea of exponential regression is to represent data using an exponential curve that closely follows the observed trend.
Since exponential behavior is non-linear in nature, the data is transformed into a form that allows the use of least square approximation.

The final curve provides a smooth and realistic representation of growth or decay patterns.


💡 Key Idea

  • Used for data showing exponential growth or decay
  • The rate of change depends on the current value
  • Data transformation helps simplify the fitting process
  • Least square principle is applied after transformation
  • The resulting curve captures natural growth trends

⭐ Important Points to Know

  • Suitable for population growth and decay models
  • Requires all observed values to be positive
  • More sensitive to small errors in data
  • Provides better results for rapidly changing data
  • Commonly used in forecasting and prediction
  • Plays an important role in scientific modeling
  • Forms the basis for many advanced growth models

⬆️ Get Back to Contents


📘 Maxima and Minima of a Tabulated Function

📌 Overview

Maxima and Minima analysis deals with identifying the highest and lowest points of a function based on given tabulated data.
This technique is useful when the function is not available in analytical form and only discrete data values are known.


🧠 Core Concept

The core concept behind finding maxima and minima of a tabulated function is based on the rate of change of the data.
By examining how the function values increase or decrease between successive points, we can locate points where the trend changes direction.

Such change indicates the presence of a maximum or minimum value.


💡 Key Idea

  • Maxima and minima occur where the trend of data changes direction
  • Increasing to decreasing trend indicates a maximum
  • Decreasing to increasing trend indicates a minimum
  • Finite differences are used to analyze the variation
  • Works effectively for uniformly spaced data

⭐ Important Points to Know

  • Applicable only when data is given in tabular form
  • Assumes data points are closely and evenly spaced
  • Accuracy depends on the quality of data
  • Used when the explicit function is unknown
  • Simple and computationally efficient
  • Commonly applied in engineering experiments
  • Helps in identifying critical points

⬆️ Get Back to Contents


📘 Derivative Using Forward Difference

📌 Overview

Forward Difference Method is a numerical technique used to approximate the first derivative of a function using forward neighboring data points. It is mainly applied when the derivative is required at the beginning of a data table.


🧠 Core Concept

The method is based on replacing the derivative definition with a finite difference expression using values ahead of the point of interest. It works effectively when data points are uniformly spaced.


💡 Key Idea

  • Uses values ahead of the target point
  • Suitable for the initial point of a dataset
  • Based on finite difference approximation
  • Accuracy improves with smaller step size
  • Simple and easy to compute

⭐ Important Points to Know

  • Requires equally spaced data
  • Less accurate than central difference
  • Error depends on step size
  • Common in numerical differentiation
  • Used in engineering computations
  • Best for boundary points
  • Forms the basis of higher-order formulas

⬆️ Get Back to Contents


📘 Derivative Using Backward Difference

📌 Overview

Backward Difference Method approximates the first derivative using values behind the given point. It is commonly used at the end of a data table.


🧠 Core Concept

The derivative is approximated using finite differences of previous data values. It is particularly useful when forward values are unavailable.


💡 Key Idea

  • Uses values behind the target point
  • Suitable for the last point of data
  • Based on backward finite difference
  • Works for uniformly spaced values
  • Easy implementation

⭐ Important Points to Know

  • Accuracy depends on spacing
  • Less accurate than central difference
  • Commonly used at boundaries
  • Simple computational method
  • Widely applied in numerical analysis
  • Useful in practical data problems
  • Step size affects error

⬆️ Get Back to Contents


📘 Derivative Using Central Difference

📌 Overview

Central Difference Method provides a more accurate approximation of the derivative by using data points on both sides of the target point.


🧠 Core Concept

It averages the forward and backward differences, giving better accuracy compared to one-sided methods.


💡 Key Idea

  • Uses both previous and next values
  • More accurate than forward/backward
  • Works for equally spaced data
  • Error is smaller compared to one-sided methods
  • Balanced approximation

⭐ Important Points to Know

  • Not suitable at boundary points
  • Requires uniform spacing
  • Higher accuracy
  • Common in physics simulations
  • Used in numerical modeling
  • Error decreases with smaller step size
  • Preferred for interior points

⬆️ Get Back to Contents


📘 Derivative Using Forward Divided Difference

📌 Overview

Forward Divided Difference method estimates derivatives when data points are not equally spaced.


🧠 Core Concept

It uses Newton’s forward divided difference interpolation formula and differentiates it to obtain the derivative.


💡 Key Idea

  • Works for unequal spacing
  • Based on interpolation polynomial
  • Suitable near the beginning
  • Flexible for irregular data
  • More general approach

⭐ Important Points to Know

  • Computationally intensive
  • Requires divided difference table
  • Useful in real-world data
  • More flexible than finite difference
  • Applied in interpolation problems
  • Handles non-uniform intervals
  • Accuracy depends on polynomial degree

⬆️ Get Back to Contents


📘 Derivative Using Backward Divided Difference

📌 Overview

Backward Divided Difference method approximates derivatives for unequally spaced data near the end of the dataset.


🧠 Core Concept

Based on Newton’s backward divided difference interpolation polynomial.


💡 Key Idea

  • Suitable for irregular spacing
  • Used near the end of table
  • Derived from interpolation formula
  • Flexible approach
  • Useful for real data

⭐ Important Points to Know

  • Requires construction of divided difference table
  • Handles non-uniform intervals
  • Slightly complex calculations
  • Useful in applied mathematics
  • More adaptable than simple finite differences
  • Error depends on interpolation order
  • Important in curve fitting

⬆️ Get Back to Contents


📘 Trapezoidal Rule

📌 Overview

Trapezoidal Rule is a numerical integration method that approximates the area under a curve using trapezoids.


🧠 Core Concept

The region under the curve is divided into trapezoidal sections and their areas are summed.


💡 Key Idea

  • Approximates area using trapezoids
  • Simple numerical integration method
  • Suitable for small intervals
  • Works for tabulated data
  • Linear approximation

⭐ Important Points to Know

  • Requires function values at endpoints
  • Moderate accuracy
  • Easy to implement
  • Error decreases with smaller intervals
  • Widely used in engineering
  • Applicable to definite integrals
  • Basis for composite rule

⬆️ Get Back to Contents


📘 Composite Trapezoidal Rule

📌 Overview

Composite Trapezoidal Rule improves accuracy by dividing the interval into multiple subintervals.


🧠 Core Concept

Applies the trapezoidal rule repeatedly over smaller subintervals.


💡 Key Idea

  • Divides interval into equal parts
  • Improves accuracy
  • Suitable for large intervals
  • Reduces error
  • Works for tabulated values

⭐ Important Points to Know

  • Requires equally spaced intervals
  • Accuracy increases with more subintervals
  • Simple extension of basic rule
  • Useful in scientific computation
  • Applied in numerical integration
  • Error proportional to step size squared
  • Computationally efficient

⬆️ Get Back to Contents


📘 Simpson 1/3 Rule

📌 Overview

Simpson’s 1/3 Rule approximates integrals using parabolic arcs instead of straight lines.


🧠 Core Concept

Fits a second-degree polynomial through each set of three consecutive points.


💡 Key Idea

  • Uses quadratic approximation
  • More accurate than trapezoidal
  • Requires even number of intervals
  • Suitable for smooth curves
  • Based on interpolation

⭐ Important Points to Know

  • Needs equally spaced data
  • Requires odd number of data points
  • Higher accuracy
  • Common in physics and engineering
  • Error proportional to fourth derivative
  • Efficient numerical integration
  • Preferred over trapezoidal rule

⬆️ Get Back to Contents


📘 Composite Simpson 1/3 Rule

📌 Overview

Applies Simpson’s 1/3 Rule over multiple subintervals for better accuracy.


🧠 Core Concept

Divides the interval into an even number of subintervals and applies the rule repeatedly.


💡 Key Idea

  • Improved precision
  • Even number of intervals required
  • Quadratic fitting in each pair
  • Efficient and accurate
  • Suitable for smooth data

⭐ Important Points to Know

  • Requires uniform spacing
  • More accurate than composite trapezoidal
  • Widely used
  • Error reduces significantly
  • Efficient for practical problems
  • Used in numerical computation
  • Important integration method

⬆️ Get Back to Contents


📘 Simpson 3/8 Rule

📌 Overview

Simpson’s 3/8 Rule approximates integration using cubic polynomials.


🧠 Core Concept

Fits a third-degree polynomial through four consecutive points.


💡 Key Idea

  • Uses cubic approximation
  • Requires multiple of three intervals
  • More accurate for certain functions
  • Polynomial-based integration
  • Alternative to 1/3 rule

⭐ Important Points to Know

  • Requires equal spacing
  • Slightly more computation
  • Useful for specific interval counts
  • Good for smooth functions
  • Error depends on fourth derivative
  • Applied in engineering mathematics
  • Extension of Simpson methods

⬆️ Get Back to Contents


📘 Composite Simpson 3/8 Rule

📌 Overview

Extends Simpson’s 3/8 Rule across multiple sets of three subintervals.


🧠 Core Concept

Applies cubic approximation repeatedly for improved accuracy.


💡 Key Idea

  • Multiple of three intervals
  • Higher accuracy
  • Suitable for long intervals
  • Works with uniform spacing
  • Efficient integration method

⭐ Important Points to Know

  • Requires structured interval division
  • Good precision
  • Slightly complex
  • Useful in scientific problems
  • Reduces approximation error
  • Alternative to composite 1/3
  • Based on cubic fitting

⬆️ Get Back to Contents


📘 Gauss Elimination

📌 Overview

Gauss Elimination is a direct method used to solve systems of linear equations.


🧠 Core Concept

Transforms the coefficient matrix into upper triangular form using row operations.


💡 Key Idea

  • Forward elimination
  • Back substitution
  • Systematic row operations
  • Efficient for small systems
  • Deterministic solution method

⭐ Important Points to Know

  • Requires non-zero pivot elements
  • Sensitive to rounding errors
  • Simple algorithm
  • Widely used in linear algebra
  • Basis for advanced methods
  • Computational cost increases with size
  • Used in engineering analysis

⬆️ Get Back to Contents


📘 Gauss Elimination with Pivoting

📌 Overview

An improved version of Gauss Elimination that reduces numerical instability.


🧠 Core Concept

Swaps rows so that the largest available pivot element is placed on the diagonal.


💡 Key Idea

  • Improves numerical stability
  • Reduces rounding error
  • Uses partial pivoting
  • More reliable
  • Preferred for large systems

⭐ Important Points to Know

  • Prevents division by small numbers
  • Increases accuracy
  • Slightly more computation
  • Common in scientific computing
  • Essential for ill-conditioned systems
  • Enhances reliability
  • Standard practical approach

⬆️ Get Back to Contents


📘 Gauss Jordan

📌 Overview

Gauss Jordan Method solves linear systems by converting the augmented matrix into reduced row echelon form.


🧠 Core Concept

Eliminates variables both above and below pivot positions.


💡 Key Idea

  • Direct solution without back substitution
  • Converts matrix to identity form
  • Systematic row operations
  • Suitable for small systems
  • Efficient for inversion

⭐ Important Points to Know

  • More computation than Gauss elimination
  • Used for matrix inversion
  • Produces exact reduced form
  • Useful in theoretical work
  • Sensitive to rounding
  • Straightforward algorithm
  • Common academic method

⬆️ Get Back to Contents


📘 Matrix Inversion with Gauss Jordan

📌 Overview

Uses Gauss Jordan elimination to compute the inverse of a matrix.


🧠 Core Concept

Augments the matrix with the identity matrix and performs row operations until the identity appears on the left.


💡 Key Idea

  • Augmented matrix method
  • Converts left to identity
  • Right becomes inverse
  • Direct inversion technique
  • Based on row operations

⭐ Important Points to Know

  • Matrix must be non-singular
  • Computationally intensive
  • Useful in solving systems
  • Accuracy depends on pivoting
  • Common in linear algebra
  • Applied in engineering
  • Important numerical method

⬆️ Get Back to Contents


📘 Doolittle LU Decomposition

📌 Overview

Doolittle Method decomposes a matrix into Lower (L) and Upper (U) triangular matrices.


🧠 Core Concept

The diagonal elements of L are taken as 1, and the system is solved in two stages (forward substitution with L, then back substitution with U).


💡 Key Idea

  • Factorization method
  • Simplifies repeated solutions
  • Efficient for multiple right-hand sides
  • Structured approach
  • Used in linear systems

⭐ Important Points to Know

  • Requires non-singular matrix
  • Reduces computation effort
  • Useful in engineering simulations
  • Stable for well-conditioned matrices
  • Common in numerical methods
  • Saves time for repeated problems
  • Basis for advanced decompositions

⬆️ Get Back to Contents


📘 Cholesky Method

📌 Overview

Cholesky Method is used to solve linear systems where the matrix is symmetric and positive definite.


🧠 Core Concept

Decomposes the matrix into the product of a lower triangular matrix and its transpose (A = L·Lᵀ).


💡 Key Idea

  • Applicable to special matrices
  • Efficient and stable
  • Reduces computation
  • Uses square roots
  • Roughly half the work of general LU decomposition

⭐ Important Points to Know

  • Matrix must be symmetric positive definite
  • More efficient than general LU
  • Widely used in optimization
  • Less storage required
  • Numerically stable
  • Important in engineering problems
  • Common in scientific computing

⬆️ Get Back to Contents


📘 Jacobi Iteration

📌 Overview

Jacobi Method is an iterative method for solving systems of linear equations.


🧠 Core Concept

Each variable is computed using values from the previous iteration.


💡 Key Idea

  • Iterative approach
  • Requires initial guess
  • Suitable for diagonally dominant matrices
  • Simple computation
  • Convergence depends on matrix

⭐ Important Points to Know

  • Slow convergence
  • Easy to implement
  • Works well for large sparse systems
  • Parallelizable
  • Requires convergence condition
  • Used in numerical simulations
  • Important iterative technique

⬆️ Get Back to Contents


📘 Gauss Seidel

📌 Overview

Gauss Seidel Method improves upon Jacobi by using updated values immediately.


🧠 Core Concept

Uses newly calculated values within the same iteration step.


💡 Key Idea

  • Faster convergence than Jacobi
  • Iterative method
  • Requires initial approximation
  • Works for diagonally dominant matrices
  • Efficient for large systems

⭐ Important Points to Know

  • Convergence condition required
  • Sensitive to initial guess
  • Widely used iterative solver
  • Efficient memory usage
  • Common in PDE solving
  • Faster than Jacobi
  • Important numerical method

⬆️ Get Back to Contents


📘 ODE Using Taylor’s Method

📌 Overview

Taylor’s Method solves ordinary differential equations using Taylor series expansion.


🧠 Core Concept

Expands the function in series form around a point.


💡 Key Idea

  • Uses higher-order derivatives
  • Accurate method
  • Requires derivative computation
  • Series-based approach
  • Suitable for small intervals

⭐ Important Points to Know

  • Complex for higher orders
  • Accurate for smooth functions
  • Computationally intensive
  • Requires derivative expressions
  • Used in theoretical problems
  • High precision possible
  • Basis for advanced solvers

⬆️ Get Back to Contents


📘 ODE Using Picard’s Method

📌 Overview

Picard’s Method is an iterative technique for solving ODEs.


🧠 Core Concept

Uses successive approximations based on integral form.


💡 Key Idea

  • Iterative refinement
  • Converges to exact solution
  • Based on integral equation
  • Theoretical importance
  • Used for proof of existence

⭐ Important Points to Know

  • Slow convergence
  • Useful in analysis
  • Requires initial function
  • Provides approximate solutions
  • Rarely used for computation
  • Important in theory
  • Foundation for iterative methods

⬆️ Get Back to Contents


📘 ODE Using Euler’s Method

📌 Overview

Euler’s Method is the simplest numerical method for solving ODEs.


🧠 Core Concept

Uses tangent line approximation to advance step by step.


💡 Key Idea

  • First-order method
  • Simple and fast
  • Requires initial value
  • Step-by-step calculation
  • Low accuracy

⭐ Important Points to Know

  • Error proportional to step size
  • Easy implementation
  • Suitable for learning
  • Less accurate
  • Basis for improved methods
  • Used in simple models
  • Computationally inexpensive

⬆️ Get Back to Contents


📘 ODE Using Heun’s Method

📌 Overview

Heun’s Method improves on Euler’s by averaging the slopes at both ends of each step.


🧠 Core Concept

Calculates predictor and corrector slopes.


💡 Key Idea

  • Predictor-corrector method
  • More accurate than Euler
  • Uses average slope
  • Second-order accuracy
  • Simple improvement

⭐ Important Points to Know

  • Requires two slope evaluations
  • Better stability
  • Moderate accuracy
  • Used in practical problems
  • Improved convergence
  • Efficient method
  • Common in engineering

⬆️ Get Back to Contents


📘 ODE Using Runge-Kutta Method

📌 Overview

Runge-Kutta Methods provide high-accuracy solutions to ODEs without computing higher derivatives.


🧠 Core Concept

Uses a weighted average of slopes evaluated at several points within each step interval.


💡 Key Idea

  • Fourth-order method common
  • High accuracy
  • No higher derivatives needed
  • Stable and reliable
  • Widely used

⭐ Important Points to Know

  • More computation per step
  • Very accurate
  • Standard method in practice
  • Suitable for nonlinear equations
  • Efficient and stable
  • Common in simulations
  • Foundation for advanced solvers

⬆️ Get Back to Contents


📘 Shooting Method

📌 Overview

Shooting Method solves boundary value problems by converting them into initial value problems.


🧠 Core Concept

Guesses missing initial conditions and iteratively adjusts them.


💡 Key Idea

  • Converts BVP to IVP
  • Requires iteration
  • Uses ODE solver internally
  • Suitable for second-order equations
  • Trial-and-error approach

⭐ Important Points to Know

  • Requires good initial guess
  • May face convergence issues
  • Common in physics
  • Used in heat transfer problems
  • Iterative technique
  • Combines root-finding methods
  • Effective for smooth problems

⬆️ Get Back to Contents


📘 Elliptic Partial Differential Equation

📌 Overview

Elliptic PDEs describe steady-state phenomena such as heat distribution.


🧠 Core Concept

No time dependence; solution depends on boundary conditions.


💡 Key Idea

  • Represents equilibrium state
  • No time variable
  • Requires boundary conditions
  • The Laplace equation is the classic example
  • Smooth solutions

⭐ Important Points to Know

  • Used in steady heat flow
  • Boundary value problems
  • Requires numerical discretization
  • Finite difference method common
  • Important in engineering
  • Stable solutions
  • Fundamental PDE type

⬆️ Get Back to Contents


📘 Poisson Equation

📌 Overview

Poisson Equation is a type of elliptic PDE with a source term.


🧠 Core Concept

Extends Laplace equation by including a forcing function.


💡 Key Idea

  • Includes source term
  • Used in electrostatics
  • Models heat with sources
  • Boundary value problem
  • Requires numerical methods

⭐ Important Points to Know

  • Special case of elliptic PDE
  • Widely used in physics
  • Solved using finite difference or finite element
  • Requires boundary conditions
  • Important in potential theory
  • Applied in engineering
  • Fundamental PDE model

✨ From theory to computation — mastering the art of numerical analysis. ✨

⬆️ Get Back to Contents

About

This repository covers essential numerical methods, including root finding, interpolation, regression, curve fitting, and maxima–minima analysis, aimed at helping students understand both theory and practical applications in engineering and applied mathematics.
