Automatically generate and update markdown documentation based on your benchmark results and solution statistics.
The markdown generation system creates beautiful, organized documentation of your Advent of Code solutions, including:
- Main README: Overview table showing all years at a glance
- Year-specific files: Detailed performance statistics for each year
- Day-specific files: Comprehensive run history for individual days
All documentation is generated from benchmark data stored in the tracking database, ensuring accuracy and consistency.
To regenerate everything in one command:

```bash
python main.py --update-markdown --markdown-all
```

This updates:
- Main README overview table
- All year-specific results files (in `docs/`)
- Links everything together
When you run benchmarks with `--benchmark-publish`, markdown files are automatically updated:

```bash
# Benchmark and auto-update documentation
python main.py 2025 1 --benchmark --benchmark-publish
```

Update only the main README.md overview table:

```bash
python main.py --update-markdown
```

Update a specific year's detailed results file:

```bash
# Update docs/2025-results.md and main README
python main.py --update-markdown --markdown-year 2025
```

Update documentation after running a specific day:

```bash
# Updates day results, year results, and main README
python main.py 2025 1 --update-markdown
```

Update everything at once:

```bash
python main.py --update-markdown --markdown-all
```

The main README contains an overview table showing:
- Year (linked to year-specific file)
- Total stars earned
- Problems solved
- Total runs and success rate
- Average execution time
- Fastest and slowest times
Example:

```markdown
## 📊 Solutions Overview

| Year | Stars ⭐ | Problems 🧩 | Runs 🏃 | Success Rate | Avg Time ⚡ | Fastest 🚀 | Slowest 🐌 |
|------|----------|-------------|---------|--------------|-------------|------------|------------|
| [2025](./docs/2025-results.md) | 6 | 6 | 18 | 83.3% | 147.7ms | 556.5μs | 771.9ms |
| [2015](./docs/2015-results.md) | 50 | 50 | 100 | 91.0% | 633.8ms | 0.2μs | 11.54s |
```

Each year file includes:
- Year summary statistics
- System/Hardware information (OS, Python version, CPU details)
- Performance by day table (with star indicators)
- Performance distribution (fast/medium/slow)
- Links back to main README
Example structure:

```markdown
# 🎄 Advent of Code 2025 Results

[← Back to Overview](../README.md)

## Year Summary
- ⭐ **Stars**: 6
- 🧩 **Problems Solved**: 6
- 🏃 **Total Runs**: 18 (83.3% success)
- ⚡ **Average Time**: 147.7ms
...

## Performance by Day
| Day | Part 1 | Part 2 | Total | Status |
|-----|--------|--------|-------|--------|
| 1 | 556.5μs | 750.7μs | 1.3ms | ⭐⭐ |
...
```

Day-specific files show:
- Summary for each part (best time, result)
- Complete run history
- Links to year results
Note: Day-specific results are currently generated separately and are not created by default.
The recommended workflow for maintaining up-to-date documentation:
```bash
# Benchmark specific day with database publishing
python main.py 2025 1 --benchmark --benchmark-publish

# Benchmark entire year with auto-update
python main.py --benchmark-year 2025 --benchmark-publish

# Benchmark everything and update all docs
python main.py --benchmark-all --benchmark-publish
```

Benefits:
- ✅ Automatic documentation updates
- ✅ Accurate performance data
- ✅ No manual maintenance needed
Update documentation separately from benchmarking:

```bash
# Run benchmarks with saving
python main.py 2025 1 --benchmark --benchmark-save

# Later, manually update markdown
python main.py --update-markdown --markdown-all
```

Update documentation after normal solution runs (uses tracked data):

```bash
# Run solutions (with tracking enabled)
python main.py 2025 1

# Update markdown when ready
python main.py --update-markdown --markdown-year 2025
```

The markdown generator pulls data from the tracking database (`aoc_tracking.db`):
- Best times: Fastest successful run for each part
- Run counts: Total number of attempts
- Success rates: Percentage of successful runs
- Results: Actual answers for completed problems
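The aggregation behind these fields can be sketched with the standard `sqlite3` module; the `runs` table and its columns below are assumed for illustration and may not match the actual schema of `aoc_tracking.db`:

```python
import sqlite3

# Hypothetical schema: runs(year, day, part, execution_time, success)
QUERY = """
SELECT
    MIN(CASE WHEN success = 1 THEN execution_time END) AS best_time,
    COUNT(*)                                           AS run_count,
    AVG(success) * 100.0                               AS success_rate
FROM runs
WHERE year = ? AND day = ? AND part = ?
"""

def day_stats(conn: sqlite3.Connection, year: int, day: int, part: int):
    """Best successful time, total attempts, and success rate for one part."""
    return conn.execute(QUERY, (year, day, part)).fetchone()
```

The `CASE WHEN` guard keeps failed runs out of the best-time calculation while still counting them toward the success rate.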
To have data for markdown generation, you need runs in the database:
- Run solutions normally (creates tracked runs):

  ```bash
  python main.py 2025 1
  ```

- Benchmark and publish (creates benchmark runs):

  ```bash
  python main.py 2025 1 --benchmark --benchmark-publish
  ```

- Sync from the AoC website (for completed problems):

  ```bash
  python main.py --sync 2025
  ```
Times are automatically formatted for readability:
- Microseconds: `556.5μs` (< 1ms)
- Milliseconds: `1.3ms`, `147.7ms` (< 1s)
- Seconds: `2.18s`, `11.54s` (≥ 1s)
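These thresholds can be sketched as a small helper (the function name is illustrative; the project's actual formatter may differ):

```python
def format_time(seconds: float) -> str:
    """Format a duration using the unit thresholds described above."""
    if seconds < 1e-3:                  # microseconds below 1ms
        return f"{seconds * 1e6:.1f}μs"
    if seconds < 1.0:                   # milliseconds below 1s
        return f"{seconds * 1e3:.1f}ms"
    return f"{seconds:.2f}s"            # seconds at or above 1s
```

For example, `format_time(0.1477)` yields `147.7ms`, matching the average time shown in the overview table.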
Solutions are categorized by speed:
- 🚀 Fast: < 10ms
- ⚡ Medium: 10ms - 1s
- 🐌 Slow: ≥ 1s
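A minimal sketch of this bucketing (the function name and emoji labels are illustrative):

```python
def speed_category(seconds: float) -> str:
    """Bucket a solution's runtime into the three speed tiers."""
    if seconds < 0.010:      # Fast: under 10ms
        return "🚀 Fast"
    if seconds < 1.0:        # Medium: 10ms up to 1s
        return "⚡ Medium"
    return "🐌 Slow"         # Slow: 1s and above
```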
All generated files include a "Last updated" timestamp showing when they were generated.
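A footer like this can be produced with the standard `datetime` module (the exact format string here is illustrative, not necessarily the project's):

```python
from datetime import datetime

def last_updated_line() -> str:
    """Render a 'Last updated' footer for generated markdown files."""
    return f"*Last updated: {datetime.now():%Y-%m-%d %H:%M:%S}*"
```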
Year-specific result files automatically include system information to provide context for benchmark results:
```markdown
## 💻 System Information
- **OS**: Windows 11
- **Python**: 3.12.10
- **Processor**: Intel64 Family 6 Model 183 Stepping 1, GenuineIntel
- **CPU Cores**: 24
```

This information is collected automatically when markdown files are generated and helps with:
- Comparing performance across different machines
- Understanding the context of benchmark results
- Documenting the environment for reproducibility
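The fields shown above can be gathered with Python's standard `platform` and `os` modules; this is a sketch, not the project's actual `utils.hardware_info` implementation:

```python
import os
import platform

def collect_system_info() -> dict:
    """Gather the OS, Python, CPU, and core-count fields shown in year files."""
    return {
        "OS": f"{platform.system()} {platform.release()}",
        "Python": platform.python_version(),
        "Processor": platform.processor() or "unknown",
        "CPU Cores": os.cpu_count(),
    }
```

Note that `platform.processor()` can return an empty string on some systems, hence the `"unknown"` fallback.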
To view your system's hardware information:
```bash
python -c "from utils.hardware_info import format_hardware_info; print(format_hardware_info())"
```

Or use the demo script:

```bash
python utils/demo_hardware_info.py
```

Update documentation after major benchmarking sessions:

```bash
python main.py --benchmark-year 2025 --benchmark-runs 25 --benchmark-publish
```

The `--benchmark-publish` flag ensures automatic markdown updates.
Set up a routine for keeping docs updated:
```bash
# Weekly: Re-benchmark everything with more runs
python main.py --benchmark-all --benchmark-runs 10 --benchmark-publish
```

For quick documentation updates without re-running benchmarks:

```bash
python main.py --update-markdown --markdown-all
```

Commit generated markdown files to track your progress:

```bash
git add README.md docs/*-results.md
git commit -m "Update performance documentation"
```

If markdown shows "No tracked data available":
- Verify the tracking database exists: `aoc_tracking.db`
- Run some solutions to populate data
- Or use `--benchmark-publish` to create benchmark entries
If running update commands but seeing no changes:
- Check that tracking is enabled (don't use `--no-tracking`)
- Verify there's actual data in the database
- Check file permissions for README.md and the `docs/` directory
Ensure the directory structure matches expectations:
- Main README.md at the project root
- Year results in `docs/{year}-results.md`
- Day results in `{year}/day{day}-results.md`
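This layout can be expressed with `pathlib` (a sketch under the assumptions above; a helper like this is illustrative, not taken from the project):

```python
from pathlib import Path

def expected_paths(root: Path, year: int, day: int) -> dict:
    """Where the generator expects each markdown file to live."""
    return {
        "readme": root / "README.md",
        "year": root / "docs" / f"{year}-results.md",
        "day": root / str(year) / f"day{day}-results.md",
    }
```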
Complete workflow from solving to documentation:
```bash
# 1. Solve the problem
python main.py 2025 1

# 2. Benchmark for accurate performance data
python main.py 2025 1 --benchmark --benchmark-runs 25 --benchmark-publish

# 3. Documentation is auto-updated!
# View the results:
cat README.md
cat docs/2025-results.md

# 4. Commit to version control
git add README.md docs/2025-results.md
git commit -m "Add day 1 solution with benchmarks"
```

- CLI Reference - Complete command documentation
- Benchmarking - Performance testing guide
- Tracking - Database and tracking system
- Statistics - Statistics generation
The markdown generation system provides:
- ✅ Automatic documentation from benchmark results
- ✅ Clean, organized structure with overview and detailed views
- ✅ Easy maintenance with simple CLI commands
- ✅ Integration with benchmarking for automatic updates
- ✅ Beautiful formatting with emoji indicators and tables
Keep your Advent of Code documentation up-to-date effortlessly!