Ergodic Insurance Limits

What if the cheapest insurance strategy is the one that costs you the most?


Traditional insurance analysis asks: "Does the expected recovery exceed the premium?" When it doesn't, the recommendation is to self-insure. This framework asks a different question: "Which strategy maximizes a single company's compound growth over time?" The answer turns out to be surprisingly different, and it explains why sophisticated buyers routinely pay premiums well above expected losses.

This is a Python simulation framework that applies ergodic economics (Ole Peters, 2019) to insurance optimization. It models a business over thousands of simulated timelines to find the insurance structure (retention, limits, layers) that maximizes long-term growth, not just minimizes short-term cost.

For the general introduction to this research and its business implications, see mostlyoptimal.com.


Why Ergodic Economics Matters for Insurance

If you're an actuary, you already understand ruin theory and geometric returns. Ergodic economics provides a unifying framework that connects these ideas to insurance purchasing decisions in a way that expected value analysis cannot.

The core issue is familiar: business wealth compounds multiplicatively. A 50% loss followed by a 50% gain doesn't bring you back to even. It leaves you at 75%. This is the volatility tax, and it means large losses destroy more long-term growth than their expected value suggests. Traditional analysis, which averages outcomes across many companies at a single point in time (the ensemble average), misses this entirely. What matters for any single company is the average outcome over time (the time average).

Insurance mitigates this volatility tax. Even when premiums exceed expected losses (sometimes significantly), the reduction in downside variance can result in higher compound growth. The framework precisely quantifies when and by how much.

Practically, this implies there exists an optimal insurance structure for a given risk profile where the growth benefit of variance reduction outweighs the cost of the premium. This framework finds it.
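As a toy illustration of that claim (all parameters hypothetical, not taken from the framework), a minimal simulation shows how a premium loaded 50% above expected losses can still raise the time-average growth rate:

```python
import numpy as np

rng = np.random.default_rng(42)
years, paths = 50, 10_000
growth = 0.08                          # baseline annual return on wealth
loss_prob, loss_frac = 0.05, 0.60      # a rare shock destroys 60% of wealth
premium = 1.5 * loss_prob * loss_frac  # 50% above the expected annual loss

def time_avg_growth(insured: bool) -> float:
    """Mean per-path log growth rate, approximating the time average."""
    if insured:
        # losses fully transferred; wealth compounds deterministically
        return float(np.log(1 + growth - premium))
    wealth = np.ones(paths)
    for _ in range(years):
        hit = rng.random(paths) < loss_prob
        wealth *= np.where(hit, (1 + growth) * (1 - loss_frac), 1 + growth)
    return float(np.mean(np.log(wealth) / years))

print(time_avg_growth(insured=False))  # ≈ 0.031: the volatility tax bites
print(time_avg_growth(insured=True))   # ≈ 0.034: higher despite the loaded premium
```

The insured path gives up expected value every year yet compounds faster, which is the ergodic argument in miniature.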

The formal relationship

For multiplicative wealth dynamics, the time-average growth rate is:

$$g = \lim_{T\to\infty}{\frac{1}{T}\ln{\frac{x(T)}{x(0)}}}$$

This is the geometric growth rate: the quantity that actually determines long-term outcomes for a single entity. Optimizing this rate, rather than the expected value $\mathbb{E}[x(T)]$, naturally balances profitability with survival and eliminates the need for arbitrary utility functions or risk preferences.
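The divergence between the two averages is easy to reproduce with Peters' multiplicative coin toss (heads multiplies wealth by 1.5, tails by 0.6): the ensemble average grows about 5% per toss while almost every individual trajectory decays. A quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 500, 20_000                    # tosses per path, number of paths

log_wealth = np.zeros(N)
for _ in range(T):
    log_wealth += np.log(rng.choice([1.5, 0.6], size=N))

ensemble_growth = np.log(0.5 * 1.5 + 0.5 * 0.6)  # log E[factor] = log 1.05
time_avg = float(log_wealth.mean() / T)          # g = (1/T) ln x(T)/x(0)

print(round(ensemble_growth, 3))  # 0.049: the "average company" grows
print(round(time_avg, 3))         # ≈ -0.053: the typical path shrinks
```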

For a deeper treatment, see the theory documentation or Peters' original paper: The ergodicity problem in economics (Nature Physics, 2019).


What This Framework Does

flowchart LR
    MODEL["<b>Financial Model</b><br/>Widget Manufacturer<br/>Double-Entry Accounting<br/>Multi-Layer Insurance<br/>Stochastic Loss Processes"]

    SIM["<b>Simulation Engine</b><br/>Parallel Monte Carlo<br/>100K+ Paths<br/>Convergence Monitoring"]

    ERGODIC["<b>Ergodic Optimization</b><br/>Time-Average vs Ensemble<br/>8 Optimization Algorithms<br/>HJB Optimal Control<br/>Pareto Frontier Analysis"]

    OUTPUT["<b>Insights & Reports</b><br/>40+ Visualization Types<br/>VaR, TVaR, Ruin Metrics<br/>Walk-Forward Validation<br/>Excel & HTML Reports"]

    MODEL ==> SIM ==> ERGODIC ==> OUTPUT
    ERGODIC -.->|"Strategy Refinement"| MODEL

    classDef default fill:#f8f9fa,stroke:#dee2e6,stroke-width:2px,color:#212529
    classDef hero fill:#e8f5e9,stroke:#2e7d32,stroke-width:3px,color:#1b5e20

    class ERGODIC hero

The framework models a widget manufacturer, a deliberately generic business entity inspired by economics textbooks, through a complete financial simulation with stochastic losses, multi-layer insurance, and double-entry accounting. (The widget manufacturer is the default; contributions to extend the model to other business types are welcome.)

Ergodic Analysis

  • Time-average vs ensemble-average growth: the core framework for evaluating insurance decisions
  • Scenario comparison with statistical significance testing (insured vs uninsured trajectories)
  • Convergence validation to ensure time-average estimates are reliable
  • Loss-integrated ergodic analysis connecting loss processes to growth rate impacts

Monte Carlo Simulation

  • Parallel Monte Carlo engine with convergence monitoring, checkpointing, and adaptive stopping
  • Bootstrap confidence intervals for ruin probability and key metrics
  • CPU-optimized parallel execution designed for budget hardware (4-8 cores, 100K+ simulations in <4GB RAM)

Financial Modeling

  • Widget manufacturer model with 75+ methods for revenue, expenses, and balance sheet management
  • Double-entry ledger with event-sourced accounting and trial balance generation
  • Full financial statements: balance sheets, income statements, cash flow statements with GAAP compliance
    • GAAP compliance is currently a sophisticated approximation and needs review by a professional corporate accountant
  • Stochastic processes including geometric Brownian motion, mean-reversion, and lognormal volatility
  • Multi-year claim liability scheduling with actuarial development patterns and collateral tracking
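A geometric Brownian motion generator along these lines can be sketched in a few lines (a standalone illustration, not the framework's stochastic_processes API):

```python
import numpy as np

def gbm_paths(x0, mu, sigma, years, steps_per_year=12, n_paths=1000, seed=0):
    """Simulate GBM with the exact log-space scheme:
    x_{t+dt} = x_t * exp((mu - sigma^2/2) dt + sigma sqrt(dt) Z)."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / steps_per_year
    z = rng.standard_normal((n_paths, years * steps_per_year))
    log_inc = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    return x0 * np.exp(np.cumsum(log_inc, axis=1))

paths = gbm_paths(10_000_000, mu=0.08, sigma=0.20, years=20)
# the median terminal value reflects the geometric (time-average) rate mu - sigma^2/2
```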

Insurance Modeling

  • Multi-layer insurance programs with attachment points, limits, and reinstatement provisions
  • Market cycle-aware pricing (soft/normal/hard markets) with cycle transition simulation
    • Significant research work is needed on modeling insurance market cycles; contributors are welcome
  • Aggregate and per-occurrence limit tracking with layer utilization monitoring
  • Actuarial claim development patterns (standard, slow, fast) with cash flow projection
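The mechanics of an excess-of-loss tower reduce to a per-layer clip. A minimal sketch with hypothetical layer values (not the library's API):

```python
def layer_recovery(loss: float, attachment: float, limit: float) -> float:
    """Recovery from one excess layer: the loss above the attachment, capped at the limit."""
    return min(limit, max(0.0, loss - attachment))

def program_recovery(loss: float, layers: list[tuple[float, float]]) -> float:
    """Total ground-up recovery across a tower of (attachment, limit) layers."""
    return sum(layer_recovery(loss, att, lim) for att, lim in layers)

# hypothetical tower: $500K xs $500K, then $4M xs $1M
tower = [(500_000, 500_000), (1_000_000, 4_000_000)]
print(program_recovery(3_000_000, tower))  # → 2500000: the insured retains the first $500K
```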

Optimization

  • 8 optimization algorithms — SLSQP, Differential Evolution, Trust Region, Penalty Method, Augmented Lagrangian, Multi-Start, and more
  • Business outcome optimizer — maximize ROE, minimize bankruptcy risk, optimize capital efficiency
  • HJB optimal control solver — stochastic control via Hamilton-Jacobi-Bellman PDE
  • Multi-objective Pareto frontier generation (weighted-sum, epsilon-constraint, evolutionary methods)

Risk Metrics & Validation

Sample Analytics: Walk-Forward Validation

  • Standard risk metrics — VaR, TVaR, Expected Shortfall, PML, maximum drawdown, economic capital
  • Ruin probability analysis with multi-horizon support and bootstrap confidence intervals
  • Walk-forward validation with out-of-sample testing across rolling windows
  • Strategy backtesting with pre-built strategies (conservative, aggressive, adaptive, optimized)
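Empirical VaR and TVaR on simulated annual aggregates take only a few lines; the sketch below uses hypothetical frequency/severity parameters and is not the package's risk_metrics module:

```python
import numpy as np

def var_tvar(losses, level=0.99):
    """Empirical VaR (quantile) and TVaR (mean of the tail at or beyond VaR)."""
    losses = np.asarray(losses, dtype=float)
    var = float(np.quantile(losses, level))
    return var, float(losses[losses >= var].mean())

rng = np.random.default_rng(1)
n_years = 20_000
counts = rng.poisson(2.5, n_years)                            # Poisson frequency
annual = np.array([rng.lognormal(13.0, 1.2, c).sum() for c in counts])
var99, tvar99 = var_tvar(annual, 0.99)
# TVaR is always at least as large as VaR at the same level
```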

Visualization & Reporting

Sample Analytics: Optimal Insurance Configuration by Company Size

  • 40+ executive and technical plots — ROE-ruin frontiers, ruin cliffs, tornado diagrams, convergence diagnostics, Pareto frontiers
  • Interactive dashboards (Plotly-based) for exploration
  • Excel report generation with cover sheets, financial statements, metrics dashboards, and pivot data
  • 45+ Jupyter notebooks organized by topic for interactive analysis

Configuration

  • 3-tier architecture — profiles, modules, and presets with inheritance and dot-notation overrides
  • Industry-specific configs (manufacturing, service, retail) and market condition presets

Reproducible Research

Pareto Frontier

There is no single “right” deductible for a business.

When you frame insurance purchasing as multi-objective optimization, balancing long-term growth rate against bankruptcy risk, something interesting emerges:

The optimal deductible shifts significantly depending on how much weight the decision-maker places on growth versus safety. Sure, most retentions are strictly dominated (worse on every dimension simultaneously), but there is usually a wide range worth considering.

I built a Pareto frontier experiment for a fictional middle-market manufacturer ($5M assets, $10M revenue) using 50,000 Monte Carlo paths. The visualization below shows a decision surface: as you shift from pure ruin minimization (left) to pure growth maximization (right), the black line indicates where the optimal deductible lies.

The “right” retention isn’t a number; it’s a conversation about risk appetite.
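The weighted-sum version of that trade-off fits in a short sketch. This is a toy model with hypothetical parameters, normalized so losses are fractions of current wealth; it is not the framework's pareto_frontier module:

```python
import numpy as np

rng = np.random.default_rng(7)
FREQ, MU, SIGMA, LOAD = 0.5, -3.0, 1.0, 1.3  # hypothetical loss model and premium load

def evaluate(ded, n_paths=2000, years=20):
    """Return (time-average growth of surviving paths, ruin probability)."""
    sev_sample = rng.lognormal(MU, SIGMA, 50_000)
    premium = LOAD * FREQ * np.maximum(sev_sample - ded, 0.0).mean()
    log_wealth = np.zeros(n_paths)
    alive = np.ones(n_paths, dtype=bool)
    for _ in range(years):
        counts = rng.poisson(FREQ, n_paths)
        retained = np.zeros(n_paths)
        for i in np.nonzero(counts)[0]:
            retained[i] = np.minimum(rng.lognormal(MU, SIGMA, counts[i]), ded).sum()
        factor = 1.10 - premium - retained           # 10% operating margin
        alive &= factor > 0
        log_wealth[alive] += np.log(factor[alive])
    growth = log_wealth[alive].mean() / years if alive.any() else -np.inf
    return growth, 1.0 - alive.mean()

points = {d: evaluate(d) for d in (0.02, 0.05, 0.10, 0.25, 0.50)}
for w in (0.1, 0.5, 0.9):                            # weight on growth vs. safety
    best = max(points, key=lambda d: w * points[d][0] - (1 - w) * points[d][1])
    print(f"growth weight {w}: optimal deductible {best}")
```

Sweeping the weight traces out the same kind of frontier: different risk appetites select different deductibles from the non-dominated set.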

Pareto Sensitivity

I then extended the experiment for the same fictional middle-market manufacturer ($5M assets, $10M revenue), this time using 5,000 Monte Carlo paths across 100 draws from a gamma-distributed large-loss variance. The visualization above shows a decision cloud: as you shift from a high-variability assumption (left) to nearly deterministic losses (right), a valley emerges:

It seems that at the two extremes of loss variability, optimal retention goes up. Why? My guess: once losses are nearly deterministic, you might as well retain them and skip the premium. At the other extreme, when losses are highly volatile, outcomes near the expected value occur too rarely to drive ruin or growth, so all you really need is tail protection.

The bottom line is the “right” retention isn’t a number; it’s a conversation about risk appetite and assumptions.

Hamilton-Jacobi-Bellman Optimization

This experiment demonstrates that HJB can be a viable mechanism for optimizing long-term retention strategies. While insurance policies are typically bound for one year, HJB facilitates the evaluation of multi-year strategies under dynamic conditions with realistic business constraints.

The main use case I see is evaluating the ROI of a given insurance program when static strategies don’t offer a nuanced long-term view. To make this evaluation more complete, framework enhancements are needed for more realistic insurance market cycles and carrier renewal negotiation strategies, which can be incorporated as additional simulation dynamics. Pull requests are welcome.

Volatility Optimization

In my next experiment, I explore optimal deductibles under two volatility assumptions: operational volatility and loss volatility. I set up a deductible optimizer with two objectives: maximize growth while minimizing growth volatility (did you think I'd run out of volatilities to analyze?).

I also added a constraint: risk of ruin cannot exceed 1%. To give us a nice fat tail, I set inverse Gaussian Bayesian priors on the volatility assumptions.

That's right: inverse Gaussian Bayesian priors on the loss Pareto alpha, plotted in Euclidean space with Lanczos interpolation. I hope that's enough name-dropping for one post.

No Hamilton-Jacobi-Bellman this time. I showed restraint.

Above is a heatmap of the results. The findings reinforce the last experiment, but the setup is more flexible, and honestly, it was just fun to build.

Blog Posts


Quick Start

Install

pip install ergodic-insurance

Requires Python 3.12+. For optional features: pip install ergodic-insurance[excel] (Excel reports).

Run Your First Analysis

from ergodic_insurance import run_analysis

results = run_analysis(
    initial_assets=10_000_000,
    loss_frequency=2.5,
    loss_severity_mean=1_000_000,
    deductible=500_000,
    coverage_limit=10_000_000,
    premium_rate=0.025,
    n_simulations=1000,
    time_horizon=20,
)
print(results.summary())   # human-readable comparison
results.plot()              # 2x2 insured-vs-uninsured chart
df = results.to_dataframe() # per-simulation metrics

Verify Installation

from ergodic_insurance import run_analysis

results = run_analysis(n_simulations=5, time_horizon=5, seed=42)
print(results.summary())
print("Installation successful!")

Explore Further

  • Setup Verification: Confirm your environment works
  • Quick Start: First simulation walkthrough
  • Ergodic Advantage: Time-average vs ensemble-average demonstration
  • Monte Carlo Simulation: Deep dive into the simulation engine
  • Risk Metrics: VaR, TVaR, ruin probability analysis
  • Retention Optimization: Finding optimal deductibles
  • HJB Optimal Control: Theoretical optimal control benchmarks

See the full documentation or the Getting Started tutorial for more.


Professional Standards and Disclaimers

This framework provides actuarial research tools subject to ASOP No. 41: Actuarial Communications and ASOP No. 56: Modeling. Full compliance disclosures are in the Actuarial Standards Compliance document.

Research Use Only. This is an early-stage research tool. It does not constitute an actuarial opinion or rate filing. Outputs are intended for qualified actuaries who can independently validate the methodology and results.

Responsible Actuary: Alex Filiakov, ACAS. Review is ongoing; the responsible actuary does not currently take responsibility for the accuracy of the methodology or results.

Key Limitations & Disclosures
  • Outputs should not be used for regulatory filings, rate opinions, or reserve opinions without independent actuarial analysis.
  • Results are illustrative and depend on input assumptions. Treat them as directional guidance, not prescriptive recommendations.
  • The framework embeds simplifying assumptions (Poisson frequency, log-normal severity, no regulatory capital, deterministic margins) documented in the compliance disclosures.
  • Development involved extensive reliance on Large Language Models for research and code generation.
  • Conflict of Interest: The responsible actuary is employed by an insurance broker. See the compliance document for full disclosure and mitigation measures.

Contributing

This project is in active development (pre-1.0) and there is meaningful work to be done. Whether you're an experienced actuary who can stress-test the methodology or a developer who can tackle implementation issues, contributions are welcome.

Where to Start

  • Open Issues — 30 open issues spanning mathematical correctness, actuarial methodology, and security hardening. Many are well-scoped and self-contained.
  • Codebase Onboarding Guide — A structured walkthrough of the key concepts, domain terms, and architecture. Start here before diving into the code.
  • DeepWiki — AI-powered Q&A over the entire codebase. Useful for navigating 74 modules without reading all of them.

Areas Where Help Is Needed

  • Mathematical correctness (variance corrections, bias adjustments, convergence estimators): good for actuaries, statisticians, and quantitative researchers
  • Actuarial methodology (claim reserve re-estimation, development pattern calibration, bootstrap CI improvements): good for practicing actuaries and CAS/SOA candidates
  • New business models (extending beyond the widget manufacturer to service, retail, or other industry types): good for domain experts in other industries
  • Optimization & theory (HJB solver improvements, new objective functions, multi-period strategies): good for applied mathematicians and operations researchers
  • Testing & validation (walk-forward validation, convergence diagnostics, edge case coverage): good for anyone comfortable with pytest

Developer Setup

git clone https://github.com/AlexFiliakov/Ergodic-Insurance-Limits.git
cd Ergodic-Insurance-Limits
python ergodic_insurance/scripts/setup_dev.py

This installs the package in editable mode with dev dependencies and configures pre-commit hooks (black, isort, mypy, pylint, conventional commits). Or manually:

pip install -e ".[dev]"
pre-commit install
pre-commit install --hook-type commit-msg

Running Tests

pytest                                              # all tests with coverage
pytest ergodic_insurance/tests/test_manufacturer.py  # specific module
pytest --cov=ergodic_insurance --cov-report=html     # HTML coverage report

Branch Strategy

  • main — stable releases only, protected
  • develop — integration branch, PRs go here
  • Use conventional commit messages (feat:, fix:, docs:, etc.) — this drives automated versioning

Project Structure

Ergodic-Insurance-Limits/
├── ergodic_insurance/              # Main Python package (74 modules)
│   ├── manufacturer.py            # Widget manufacturer financial model
│   ├── simulation.py              # Simulation orchestrator
│   ├── monte_carlo.py             # Parallel Monte Carlo engine
│   ├── ergodic_analyzer.py        # Time-average growth analysis
│   ├── insurance.py               # Insurance structures and layers
│   ├── insurance_program.py       # Multi-layer program management
│   ├── insurance_pricing.py       # Premium calculation models
│   ├── loss_distributions.py      # Statistical loss modeling (lognormal, Pareto, etc.)
│   ├── optimization.py            # Optimization algorithms and solvers
│   ├── business_optimizer.py      # Business outcome optimization
│   ├── hjb_solver.py              # Hamilton-Jacobi-Bellman optimal control
│   ├── pareto_frontier.py         # Multi-objective Pareto analysis
│   ├── risk_metrics.py            # VaR, TVaR, ruin probability
│   ├── financial_statements.py    # GAAP-compliant financial statements
│   ├── stochastic_processes.py    # GBM, mean-reversion, volatility models
│   ├── parallel_executor.py       # CPU-optimized parallel processing
│   ├── gpu_mc_engine.py           # GPU-accelerated Monte Carlo (CuPy)
│   ├── walk_forward_validator.py  # Walk-forward validation framework
│   ├── strategy_backtester.py     # Insurance strategy backtesting
│   ├── convergence.py             # Convergence diagnostics
│   ├── bootstrap_analysis.py      # Bootstrap statistical methods
│   ├── sensitivity.py             # Sensitivity analysis
│   ├── config/                    # 3-tier configuration system
│   │   ├── core.py                #   Config classes and validation
│   │   ├── presets.py             #   Market condition templates
│   │   └── ...                    #   Insurance, manufacturer, simulation configs
│   ├── reporting/                 # Report generation
│   │   ├── executive_report.py    #   Executive-level summaries
│   │   ├── technical_report.py    #   Technical analysis reports
│   │   ├── insight_extractor.py   #   Automated insight extraction
│   │   └── ...                    #   Excel, tables, scenario comparison
│   ├── visualization/             # Plotting (executive, technical, interactive)
│   ├── notebooks/                 # 45+ Jupyter notebooks
│   │   ├── getting-started/       #   Setup and first steps
│   │   ├── core/                  #   Loss distributions, insurance, ergodic advantage
│   │   ├── optimization/          #   Retention, Pareto, sensitivity, parameter sweeps
│   │   ├── advanced/              #   HJB control, walk-forward, convergence
│   │   ├── reconciliation/        #   10 validation and reconciliation notebooks
│   │   ├── visualization/         #   Dashboards, plots, scenario comparison
│   │   ├── reporting/             #   Report and table generation
│   │   └── research/              #   Exploratory research notebooks
│   ├── tests/                     # 60+ test modules
│   ├── examples/                  # Demo scripts
│   ├── data/config/               # YAML configuration profiles and presets
│   ├── docs/                      # Sphinx documentation (API, tutorials, theory)
│   └── scripts/                   # Setup and utility scripts
├── assets/                        # Images and visual resources
├── docs/                          # GitHub Pages documentation
├── .github/workflows/             # CI/CD pipelines
├── pyproject.toml                 # Project configuration and dependencies
├── CHANGELOG.md                   # Release history
└── LICENSE                        # MIT

License

MIT. See LICENSE.
