A research prototype exploring data-driven volatility estimation for ATM European call option pricing on the AUD/HUF currency pair.
Classical option pricing models such as Black-Scholes-Merton (BSM) assume constant volatility — an assumption that consistently fails to capture real-world phenomena such as volatility clustering, fat tails, and regime changes in financial time series.
This project investigates whether a hybrid, data-driven approach to volatility estimation can produce more accurate option prices than traditional closed-form methods. The hybrid framework integrates:
- GARCH modeling to capture time-varying volatility and autocorrelation in squared returns
- LSTM forecasting to learn non-linear temporal patterns in volatility dynamics
- Monte Carlo simulation to price options under the estimated volatility regime
The central hypothesis is that a statistically and computationally richer volatility input will yield pricing estimates closer to empirically observed market behavior than constant-volatility benchmarks.
Option prices are computed under the risk-neutral measure $\mathbb{Q}$, where the underlying asset follows geometric Brownian motion:

$$dS_t = r\,S_t\,dt + \sigma\,S_t\,dW_t^{\mathbb{Q}}$$

Here, $S_t$ is the AUD/HUF spot rate, $r$ the risk-free rate, $\sigma$ the volatility input, and $W_t^{\mathbb{Q}}$ a standard Brownian motion under $\mathbb{Q}$.
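Under these dynamics a path can be simulated with the exact log-step update, $S_{t+\Delta t} = S_t \exp\left((r - \tfrac{1}{2}\sigma^2)\Delta t + \sigma\sqrt{\Delta t}\,Z\right)$. A minimal sketch (the parameter values are illustrative, not the project's AUD/HUF inputs):

```python
import numpy as np

def simulate_gbm_path(s0, r, sigma, t_total, n_steps, rng):
    """Simulate one risk-neutral GBM path via the exact log-step update."""
    dt = t_total / n_steps
    z = rng.standard_normal(n_steps)
    # log S_{t+dt} = log S_t + (r - sigma^2/2) dt + sigma sqrt(dt) Z
    log_increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    return s0 * np.exp(np.concatenate([[0.0], np.cumsum(log_increments)]))

rng = np.random.default_rng(0)
path = simulate_gbm_path(100.0, 0.05, 0.2, 1.0, 252, rng)  # 252 daily steps
```

Because the update exponentiates a Gaussian increment, the simulated path is strictly positive and has no discretization bias, unlike a naive Euler scheme on $S_t$ itself.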
This project uses historical volatility forecasting (from realized returns) rather than backing out implied volatility from market option prices. This is a deliberate methodological choice: the goal is to test whether statistical and deep learning models applied to the underlying asset's return series can approximate fair value without relying on observed derivatives markets.
Monte Carlo paths are generated under the risk-neutral measure using the hybrid GARCH-LSTM volatility forecast.
- Historical AUD/HUF exchange rate data sourced via `yfinance`
- Log returns computed and analyzed for stationarity and heteroskedasticity
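The return computation can be sketched as follows. The `yfinance` download call and the `"AUDHUF=X"` ticker are assumptions about the project's data step, so a synthetic price series stands in to keep the example self-contained:

```python
import numpy as np
import pandas as pd

# In the project, prices would come from yfinance, e.g.:
#   import yfinance as yf
#   prices = yf.download("AUDHUF=X")["Close"]   # ticker symbol is an assumption
# Here a synthetic random-walk series stands in for the download.
rng = np.random.default_rng(42)
prices = pd.Series(230.0 * np.exp(np.cumsum(0.005 * rng.standard_normal(500))))

# Log returns: r_t = ln(P_t / P_{t-1})
log_returns = np.log(prices / prices.shift(1)).dropna()

# Quick heteroskedasticity probe: autocorrelation of squared returns
# (persistent positive values suggest volatility clustering / ARCH effects)
lag1_autocorr = (log_returns**2).autocorr(lag=1)
```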
| Model | Approach |
|---|---|
| Black-Scholes-Merton | Closed-form analytical solution under constant-σ assumption |
| CRR Binomial Tree | Discrete-time lattice approximation of BSM |
| Vanilla Monte Carlo | GBM simulation with historical constant volatility |
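The two closed-form-style benchmarks can be sketched with the standard library alone; the parameters below are illustrative, not the project's fitted inputs. The CRR lattice should converge to the BSM price, consistent with the results table:

```python
import math
from statistics import NormalDist

def bsm_call(s, k, r, sigma, t):
    """Black-Scholes-Merton price of a European call."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    n = NormalDist().cdf
    return s * n(d1) - k * math.exp(-r * t) * n(d2)

def crr_call(s, k, r, sigma, t, n_steps=1000):
    """Cox-Ross-Rubinstein binomial price of a European call."""
    dt = t / n_steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
    disc = math.exp(-r * dt)
    # Terminal payoffs, then backward induction through the lattice
    values = [max(s * u**j * d**(n_steps - j) - k, 0.0) for j in range(n_steps + 1)]
    for _ in range(n_steps):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

# ATM example: with S=K=100, r=5%, sigma=20%, T=1 the BSM call is ≈ 10.45
bsm = bsm_call(100, 100, 0.05, 0.2, 1.0)
crr = crr_call(100, 100, 0.05, 0.2, 1.0)
```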
```
Raw Returns → GARCH(1,1) Volatility Estimates
        ↓
Sequence Formation (lookback window)
        ↓
LSTM Volatility Forecast (σ̂_t)
        ↓
Monte Carlo Path Simulation (risk-neutral)
        ↓
Discounted Payoff → Option Price
```
Conditional variance equation:

$$\sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2$$

Fitted via maximum likelihood to capture volatility clustering in AUD/HUF log returns.
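In the project the `arch` package fits $\omega, \alpha, \beta$ by maximum likelihood; the variance recursion itself is simple enough to sketch directly. The parameter values below are illustrative, not fitted:

```python
import numpy as np

def garch_variance(returns, omega, alpha, beta):
    """GARCH(1,1) conditional variance recursion:
    sigma2_t = omega + alpha * eps_{t-1}^2 + beta * sigma2_{t-1}."""
    sigma2 = np.empty_like(returns)
    sigma2[0] = omega / (1.0 - alpha - beta)   # start at the unconditional variance
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1]**2 + beta * sigma2[t - 1]
    return sigma2

# With the arch package, fitting would look like:
#   from arch import arch_model
#   res = arch_model(100 * rets, vol="GARCH", p=1, q=1).fit(disp="off")
# Illustrative parameters satisfying alpha + beta < 1 (covariance stationarity)
rng = np.random.default_rng(1)
rets = 0.01 * rng.standard_normal(300)
sig2 = garch_variance(rets, omega=1e-6, alpha=0.08, beta=0.90)
```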
- Input: Rolling window of GARCH-estimated volatilities
- Architecture: Stacked LSTM layers with dropout regularization
- Output: One-step-ahead volatility forecast $\hat{\sigma}_{t+1}$
- Training: Mean Squared Error loss, Adam optimizer
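The sequence-formation step can be sketched with plain NumPy (the lookback length of 10 is illustrative). Each training pair maps a rolling window of volatilities to the next value, which is the one-step-ahead target the LSTM learns:

```python
import numpy as np

def make_sequences(vol_series, lookback):
    """Slice a volatility series into (X, y) pairs for one-step-ahead training:
    X[i] = vol[i : i+lookback],  y[i] = vol[i+lookback]."""
    x = np.array([vol_series[i : i + lookback]
                  for i in range(len(vol_series) - lookback)])
    y = np.array(vol_series[lookback:])
    # Trailing feature axis matches the (samples, timesteps, features)
    # input shape expected by Keras LSTM layers.
    return x[..., np.newaxis], y

vols = np.linspace(0.10, 0.20, 50)        # stand-in for GARCH volatility estimates
X, y = make_sequences(vols, lookback=10)  # lookback length is an assumption
```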
- Simulates $N$ risk-neutral paths using the LSTM-forecast volatility schedule
- Prices the ATM European call option as the discounted average payoff across all paths
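The pricing step can be sketched as follows: each simulation step consumes one entry of a per-step volatility schedule (in the project, the LSTM forecasts), and the option price is the discounted mean payoff. As a sanity check, a flat schedule reduces to vanilla Monte Carlo and should land near the BSM price:

```python
import numpy as np

def mc_call_price(s0, k, r, t_total, vol_schedule, n_paths, seed=0):
    """Monte Carlo price of a European call under a per-step volatility schedule."""
    n_steps = len(vol_schedule)
    dt = t_total / n_steps
    rng = np.random.default_rng(seed)
    log_s = np.full(n_paths, np.log(s0))
    for sigma in vol_schedule:          # one (possibly LSTM-forecast) sigma per step
        z = rng.standard_normal(n_paths)
        log_s += (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    payoff = np.maximum(np.exp(log_s) - k, 0.0)
    return np.exp(-r * t_total) * payoff.mean()

# Flat 20% schedule over 252 steps: price should be close to the BSM value
price = mc_call_price(100.0, 100.0, 0.05, 1.0, [0.2] * 252, n_paths=100_000)
```

Feeding a time-varying schedule instead of a flat one is exactly what makes the hybrid price diverge from BSM, as noted in the results section.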
Results below are based on a representative in-sample evaluation window. Values may vary with different data periods or random seeds.
| Model | Estimated Price | MAE vs. BSM Benchmark |
|---|---|---|
| Black-Scholes-Merton | baseline | — |
| CRR Binomial Tree | ≈ BSM | < 0.001 |
| Vanilla Monte Carlo | ≈ BSM | < 0.005 |
| Hybrid (LSTM-GARCH-MC) | diverges from BSM | reflects volatility dynamics |
Note: The hybrid model is not expected to converge to BSM — it intentionally prices under a richer volatility regime. Divergence from BSM is a feature, not a bug, and reflects the model's sensitivity to time-varying volatility clustering.
Key observations:
- GARCH captures significant ARCH effects in AUD/HUF returns (confirmed via Ljung-Box test on squared residuals)
- LSTM successfully learns the temporal structure of GARCH volatility sequences
- Hybrid Monte Carlo prices reflect elevated volatility during high-uncertainty regimes relative to constant-σ models
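The Ljung-Box check on squared residuals mentioned above can be sketched directly (in practice one would use `statsmodels.stats.diagnostic.acorr_ljungbox`; the manual version below is for illustration, and the white-noise input is synthetic):

```python
import numpy as np

def ljung_box_q(x, n_lags):
    """Ljung-Box statistic: Q = n(n+2) * sum_{k=1}^{h} rho_k^2 / (n - k).
    Applied to squared residuals, a large Q indicates remaining ARCH effects."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    denom = np.sum(xc**2)
    q = 0.0
    for k in range(1, n_lags + 1):
        rho_k = np.sum(xc[k:] * xc[:-k]) / denom   # lag-k sample autocorrelation
        q += rho_k**2 / (n - k)
    return n * (n + 2) * q

rng = np.random.default_rng(7)
resid = rng.standard_normal(500)          # synthetic stand-in for model residuals
q_stat = ljung_box_q(resid**2, n_lags=10)
# Under the no-autocorrelation null, Q ~ chi-square(10); the 5% critical
# value is 18.31, so Q above that rejects the null.
```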
This project is a research prototype. The following limitations apply:
- Historical vs. implied volatility: The model uses realized return-based volatility rather than market-implied volatility, which may underperform in options markets with rich term structure information.
- Single currency pair: Results are specific to AUD/HUF. Generalizability to other underlyings has not been validated.
- No transaction costs: The pricing framework assumes frictionless markets.
- Constant risk-free rate: Interest rate is held fixed; no stochastic rate modeling (e.g., Hull-White) is applied.
- No regime detection: The model does not explicitly identify structural breaks or volatility regimes.
- LSTM overfitting risk: With limited financial time series data, deep learning models are prone to overfitting without careful cross-validation.
1. Clone the repository

   ```bash
   git clone https://github.com/your-username/hybrid-option-pricing.git
   cd hybrid-option-pricing
   ```

2. Install dependencies

   ```bash
   pip install numpy pandas matplotlib yfinance scipy arch tensorflow scikit-learn
   ```

3. Launch the notebook

   ```bash
   jupyter notebook finance_project_hybrid.ipynb
   ```

| Package | Purpose |
|---|---|
| `numpy`, `pandas` | Numerical computing and data handling |
| `matplotlib` | Visualization |
| `yfinance` | Market data acquisition |
| `scipy` | Statistical utilities |
| `arch` | GARCH model estimation |
| `tensorflow` | LSTM neural network |
| `scikit-learn` | Data preprocessing and scaling |
```
hybrid-option-pricing/
│
├── finance_project_hybrid.ipynb   # Main notebook
├── README.md                      # This file
└── requirements.txt               # (optional) dependency list
```
- Extend to implied volatility surface fitting using market option chains
- Incorporate stochastic volatility models (Heston, SABR) as additional benchmarks
- Apply to equity options (e.g., S&P 500) for broader generalizability
- Explore Transformer-based architectures for volatility forecasting
- Add Greeks estimation (Delta, Vega) via Monte Carlo finite differences
This project is released under the MIT License.
Developed as a quantitative finance research prototype exploring hybrid deep learning and statistical methods for derivative pricing.