Understanding the organization and architecture of the Advent of Code Python solution runner.
```
AdventOfCodePython/
├── main.py                    # Main entry point and CLI
├── input.py                   # Input fetching and caching
├── tracking.py                # Performance tracking system
├── submitter.py               # AOC answer submission
├── benchmark.py               # Comprehensive benchmarking
├── benchmark_quick.py         # Quick benchmark presets
├── requirements.txt           # Python dependencies
├── session_cookie.txt         # Your AOC session cookie (private)
├── aoc_tracking.db            # SQLite tracking database
├── README.md                  # Project documentation
├── docs/                      # Detailed documentation
│   ├── README.md              # Documentation index
│   ├── setup.md               # Installation guide
│   ├── cli-reference.md       # CLI documentation
│   ├── solution-guide.md      # How to write solutions
│   ├── tracking.md            # Performance tracking docs
│   ├── benchmarking.md        # Benchmarking documentation
│   ├── database-publishing.md # Database integration
│   ├── statistics.md          # Statistics generation
│   └── project-structure.md   # This file
├── YYYY/                      # Year directories (e.g., 2015/, 2025/)
│   ├── day1.py                # Solution files
│   ├── day2.py
│   └── ...
├── input/                     # Input file storage
│   └── YYYY/                  # Year subdirectories
│       ├── day1.txt           # Actual problem input
│       ├── day1_sample.txt    # Sample input (optional)
│       └── ...
└── __pycache__/               # Python bytecode cache
```
### main.py

Purpose: Main command-line interface and orchestration

Key Functions:
- `main()` - Entry point and argument parsing
- `run_solution()` - Execute solutions with timing and tracking
- `handle_stats()` - Statistics generation
- `handle_benchmarking()` - Benchmark execution

Dependencies: All other modules
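A minimal sketch of how the entry point might wire these together. The argument names and the stubbed `run_solution` are illustrative assumptions, not the actual implementation:

```python
import argparse


def run_solution(year: int, day: int, use_sample: bool = False) -> str:
    """Stub standing in for the real runner described above."""
    return f"{year} day {day} (sample={use_sample})"


def main(argv=None) -> str:
    # Parse year/day plus one of the flags documented elsewhere in these docs
    parser = argparse.ArgumentParser(description="Advent of Code runner")
    parser.add_argument("year", type=int)
    parser.add_argument("day", type=int)
    parser.add_argument("--sample", action="store_true")
    args = parser.parse_args(argv)
    return run_solution(args.year, args.day, use_sample=args.sample)
```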
### input.py

Purpose: Handle input fetching, caching, and processing

Key Functions:
- `get_input()` - Fetch input from cache or AOC
- `get_sample_input()` - Load sample input files
- `fetch_input_from_aoc()` - Download input from the AOC website

Features:
- Automatic caching to avoid re-downloading
- Sample input file support
- Inline sample input handling
- Session cookie validation
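The caching behavior can be sketched like this (paths follow the `input/YYYY/dayN.txt` layout shown above; the network fetch is stubbed out, since the real one needs the session cookie):

```python
from pathlib import Path


def get_input(year: int, day: int, base_dir: Path = Path("input")) -> str:
    """Return cached input, downloading it on a cache miss."""
    cache_file = base_dir / str(year) / f"day{day}.txt"
    if cache_file.exists():
        return cache_file.read_text()
    data = fetch_input_from_aoc(year, day)  # authenticated HTTP request in the real module
    cache_file.parent.mkdir(parents=True, exist_ok=True)
    cache_file.write_text(data)  # cache to disk for next time
    return data


def fetch_input_from_aoc(year: int, day: int) -> str:
    # Stub standing in for the real download
    return f"fake input for {year} day {day}\n"
```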
### tracking.py

Purpose: Record and analyze solution performance over time

Key Classes:
- `PerformanceTracker` - Main tracking interface; handles database schema management and performance comparison logic

Key Functions:
- `record_run()` - Store execution results
- `get_performance_comparison()` - Compare with previous runs
- `get_recent_history()` - Retrieve run history
- `generate_statistics()` - Create performance summaries

Database Tables:
- `runs` - Individual execution records
- `submissions` - AOC submission attempts
- `problems` - Problem metadata and best times
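A toy illustration of recording runs and querying best times with `sqlite3` (the real `runs` schema, shown later in this document, has more columns):

```python
import sqlite3


def record_run(conn: sqlite3.Connection, year: int, day: int,
               part: int, ms: float) -> None:
    """Insert one execution record."""
    conn.execute(
        "INSERT INTO runs (year, day, part, execution_time_ms) VALUES (?, ?, ?, ?)",
        (year, day, part, ms),
    )


conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE runs (id INTEGER PRIMARY KEY, year INTEGER, day INTEGER,"
    " part INTEGER, execution_time_ms REAL)"
)
record_run(conn, 2025, 1, 1, 12.5)
record_run(conn, 2025, 1, 1, 9.8)
# Best time so far for 2025 day 1 part 1
best = conn.execute(
    "SELECT MIN(execution_time_ms) FROM runs WHERE year=2025 AND day=1 AND part=1"
).fetchone()[0]
```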
### submitter.py

Purpose: Submit solutions to the Advent of Code website

Key Functions:
- `submit_answer()` - Submit an answer with rate limiting
- `parse_response()` - Parse AOC response messages
- `store_submission()` - Record submission attempts

Features:
- Rate limiting compliance
- Duplicate answer prevention
- Response parsing and storage
- Correct answer caching
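The rate-limiting check can reduce to something like this. The one-minute cooldown is an assumption for illustration; AOC simply asks you not to submit too frequently:

```python
MIN_INTERVAL_S = 60.0  # assumed cooldown between submissions


def seconds_until_allowed(last_submission: float, now: float) -> float:
    """Seconds to wait before the next submission (0.0 if allowed now)."""
    return max(0.0, MIN_INTERVAL_S - (now - last_submission))
```

`submit_answer()` would sleep for this long (or refuse the submission) before talking to the website.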
### benchmark.py

Purpose: Comprehensive solution performance analysis

Key Functions:
- `benchmark_solution()` - Detailed single-solution benchmarking
- `benchmark_day()` - Both parts of a day
- `benchmark_year()` - All solutions for a year
- `benchmark_all()` - Everything, with timeout protection

Features:
- Statistical analysis (mean, median, standard deviation)
- Warm-up runs for accuracy
- Timeout protection for slow solutions
- Database publishing integration
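The warm-up plus statistics loop can be sketched with the standard library (the `warmup` and `runs` counts here are illustrative defaults, not the project's):

```python
import statistics
import time


def benchmark(func, *args, warmup: int = 2, runs: int = 10) -> dict:
    """Time func over several runs and summarize; warm-up runs are discarded."""
    for _ in range(warmup):
        func(*args)
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        func(*args)
        times.append((time.perf_counter() - start) * 1000)  # milliseconds
    return {
        "mean_ms": statistics.mean(times),
        "median_ms": statistics.median(times),
        "stdev_ms": statistics.stdev(times) if len(times) > 1 else 0.0,
    }
```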
### benchmark_quick.py

Purpose: Convenient benchmark presets

Presets:
- `fast` - 3 runs, quick feedback
- `normal` - 10 runs, standard measurement
- `thorough` - 25 runs, detailed analysis
- Year-level presets for batch benchmarking
### Solution File Format

Each solution file follows this pattern:

```python
from typing import Any


def solve_part_1(input_data: str) -> Any:
    """Solve part 1 of the challenge."""
    # Your solution code here
    pass


def solve_part_2(input_data: str) -> Any:
    """Solve part 2 of the challenge."""
    # Your solution code here
    pass
```

Requirements:
- Exact function names: `solve_part_1` and `solve_part_2`
- Single parameter: `input_data: str`
- Return any type (converted to string for display)
Solutions are loaded dynamically:

```python
# main.py logic (simplified)
import importlib


def load_solution(year: int, day: int):
    module_name = f"{year}.day{day}"
    module = importlib.import_module(module_name)
    return {
        1: getattr(module, 'solve_part_1', None),
        2: getattr(module, 'solve_part_2', None),
    }
```

This allows:
- No registration required - just create the file
- Template auto-generation for missing solutions
- Flexible solution organization
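Template auto-generation could look roughly like this (the template text mirrors the solution pattern above; the file layout follows the `YYYY/dayN.py` convention — the function name here is a hypothetical sketch, not the project's actual helper):

```python
from pathlib import Path

TEMPLATE = '''from typing import Any


def solve_part_1(input_data: str) -> Any:
    """Solve part 1 of the challenge."""
    pass


def solve_part_2(input_data: str) -> Any:
    """Solve part 2 of the challenge."""
    pass
'''


def ensure_solution_file(year: int, day: int, root: Path = Path(".")) -> Path:
    """Create YYYY/dayN.py from the template if it does not exist yet."""
    path = root / str(year) / f"day{day}.py"
    if not path.exists():
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(TEMPLATE)
    return path
```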
### Execution Flow

1. CLI Parsing (`main.py`)
   - Parse command line arguments
   - Validate year/day parameters

2. Input Handling (`input.py`)
   - Check for cached input files
   - Fetch from AOC if needed
   - Handle sample input if requested

3. Solution Loading (`main.py`)
   - Import the solution module dynamically
   - Extract part functions
   - Generate a template if missing

4. Execution & Timing (`main.py`)
   - Execute solution functions
   - Measure execution time
   - Capture results and errors

5. Performance Tracking (`tracking.py`)
   - Record run details in the database
   - Compare with previous runs
   - Generate performance insights

6. Optional: Submission (`submitter.py`)
   - Submit answers to AOC
   - Handle response and rate limits
   - Store submission results
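Step 4 (execution and timing) can be sketched as a small wrapper that captures both the result and any error, so one failing solution does not crash the runner:

```python
import time


def timed_call(func, *args):
    """Run func, returning (result, elapsed_ms, error_message)."""
    start = time.perf_counter()
    try:
        result = func(*args)
        error = None
    except Exception as exc:  # capture rather than propagate
        result, error = None, str(exc)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return result, elapsed_ms, error
```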
### Database Schema

```sql
-- Individual run records
CREATE TABLE runs (
    id INTEGER PRIMARY KEY,
    year INTEGER NOT NULL,
    day INTEGER NOT NULL,
    part INTEGER NOT NULL,
    execution_time_ms REAL NOT NULL,
    success BOOLEAN NOT NULL,
    error_message TEXT,
    timestamp TEXT NOT NULL,
    code_hash TEXT,
    input_hash TEXT,
    is_sample BOOLEAN DEFAULT FALSE,
    result TEXT
);

-- Submission tracking
CREATE TABLE submissions (
    id INTEGER PRIMARY KEY,
    year INTEGER NOT NULL,
    day INTEGER NOT NULL,
    part INTEGER NOT NULL,
    answer TEXT NOT NULL,
    response TEXT,
    success BOOLEAN,
    timestamp TEXT NOT NULL
);

-- Problem metadata
CREATE TABLE problems (
    year INTEGER NOT NULL,
    day INTEGER NOT NULL,
    part1_best_time REAL,
    part2_best_time REAL,
    PRIMARY KEY (year, day)
);
```

New CLI Commands:
- Add argument parsing in `main.py`
- Implement a handler function
- Add it to the main dispatch logic
New Tracking Metrics:
- Extend the database schema
- Update the `PerformanceTracker` class
- Modify statistics generation

New Benchmarking Options:
- Add to `benchmark.py`
- Create quick presets in `benchmark_quick.py`
- Update the CLI integration
You can create shared utilities:

```python
# 2025/helpers.py
def parse_grid(input_data: str) -> list[list[str]]:
    """Common grid parsing logic."""
    return [list(line) for line in input_data.splitlines()]
```

```python
# 2025/day5.py
from .helpers import parse_grid


def solve_part_1(input_data: str) -> int:
    grid = parse_grid(input_data)
    # ... rest of solution
    ...
```
1. Environment Setup:

   ```bash
   python -m venv venv
   source venv/bin/activate  # or venv\Scripts\activate on Windows
   pip install -r requirements.txt
   ```

2. Database Initialization:

   ```bash
   # Database is created automatically on first run
   python main.py 2025 1  # This will create the schema
   ```

3. Test with sample input:

   ```bash
   python main.py 2025 1 --sample
   ```

4. Test tracking:

   ```bash
   python main.py 2025 1 --history
   ```

5. Test benchmarking:

   ```bash
   python benchmark_quick.py fast 2025 1
   ```
The project follows standard Python conventions:
- PEP 8 style guidelines
- Type hints where appropriate
- Docstrings for public functions
- Error handling with meaningful messages
External dependencies (`requirements.txt`):

```
requests>=2.25.0   # AOC communication
colorama>=0.4.0    # Terminal colors (optional)
```

Standard library modules used:
- `sqlite3` - Database operations
- `importlib` - Dynamic module loading
- `argparse` - Command line parsing
- `hashlib` - Code change detection
- `time` - Performance measurement
- `json` - Data serialization
- `pathlib` - File system operations
Memory:
- Input caching: Input files are cached to disk, not kept in memory
- Database: SQLite provides efficient storage with minimal memory overhead
- Solution isolation: Each solution runs in isolation without persistent state

I/O:
- Input caching: Avoids repeated network requests
- Database batching: Multiple operations can be batched for efficiency
- File system: Uses pathlib for cross-platform compatibility

Benchmarking accuracy:
- Warm-up runs: Account for Python startup and JIT effects
- Multiple samples: Statistical analysis for reliable measurements
- Timeout protection: Prevents hanging on inefficient solutions
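Timeout protection can be approximated with `concurrent.futures`: run the solution in a worker and abandon it after a deadline. The 30-second default is illustrative; note that a truly hung thread still blocks executor shutdown, so a real implementation may prefer a subprocess that can be killed:

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout


def run_with_timeout(func, *args, timeout_s: float = 30.0):
    """Return (result, timed_out); result is None when the deadline passes."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(func, *args)
        try:
            return future.result(timeout=timeout_s), False
        except FutureTimeout:
            future.cancel()  # best effort; a running thread cannot be killed
            return None, True
```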
- Read Setup Guide to get started
- Check Solution Writing Guide for best practices
- Explore CLI Reference for all available commands