A prototype traffic simulator implementing parallel processing techniques for improved performance.
- Basic parallel implementation using threads
- Grid-based spatial partitioning (simple version; sketched after this list)
- Performance benchmarking system
- Visualization of results
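The "simple version" of grid partitioning amounts to bucketing vehicles by cell and dealing cells to workers. A minimal sketch of that idea (the `grid_partition` name and signature are illustrative, not the actual LCSim API):

```python
from collections import defaultdict

def grid_partition(vehicles, cell_size, num_workers):
    """Bucket vehicles into square grid cells, then deal cells to workers."""
    cells = defaultdict(list)
    for vid, (x, y) in enumerate(vehicles):
        cells[(int(x // cell_size), int(y // cell_size))].append(vid)
    # Round-robin cells over workers: the "simple version" does no
    # load balancing beyond the cell assignment itself.
    partitions = [[] for _ in range(num_workers)]
    for i, cell in enumerate(sorted(cells)):
        partitions[i % num_workers].extend(cells[cell])
    return partitions

if __name__ == "__main__":
    import random
    vehicles = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(20)]
    print(grid_partition(vehicles, cell_size=25.0, num_workers=4))
```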
- The prototype only implements grid-based partitioning
- Need to add a quadtree implementation for comparison (a minimal sketch follows this list)
- Need to add metrics for comparing partitioning methods (load imbalance, cross-partition movements)
- The diffusion model parallelization is not implemented yet
- Need to implement both synchronous and asynchronous versions
- Need to add accuracy metrics for comparing denoising methods
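For the quadtree comparison flagged above, a minimal sketch of the kind of point quadtree that could back it (class and method names are hypothetical, not existing LCSim code):

```python
class Quadtree:
    """Point quadtree: a node splits into four children once it holds
    more than `capacity` points, so dense regions get finer cells."""

    def __init__(self, x, y, size, capacity=4):
        self.x, self.y, self.size = x, y, size  # top-left corner, side length
        self.capacity = capacity
        self.points = []
        self.children = None  # NW, NE, SW, SE after a split

    def insert(self, px, py):
        if self.children is not None:
            self._child_for(px, py).insert(px, py)
            return
        self.points.append((px, py))
        if len(self.points) > self.capacity and self.size > 1e-6:
            self._split()

    def _split(self):
        h = self.size / 2
        self.children = [Quadtree(self.x + dx * h, self.y + dy * h, h, self.capacity)
                         for dy in (0, 1) for dx in (0, 1)]
        points, self.points = self.points, []
        for px, py in points:
            self._child_for(px, py).insert(px, py)

    def _child_for(self, px, py):
        h = self.size / 2
        col = 1 if px >= self.x + h else 0
        row = 1 if py >= self.y + h else 0
        return self.children[row * 2 + col]

    def leaves(self):
        """Leaf nodes are the partitions that would go to worker threads."""
        if self.children is None:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

if __name__ == "__main__":
    import random
    qt = Quadtree(0.0, 0.0, 100.0)
    for _ in range(50):
        qt.insert(random.uniform(0, 100), random.uniform(0, 100))
    print([len(leaf.points) for leaf in qt.leaves()])
```

Unlike the fixed grid, leaf size adapts to vehicle density, which is what makes the load-imbalance comparison worthwhile.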
- Current benchmarks focus on execution time and CPU usage
- Need to add (a metrics sketch follows this list):
  - Memory overhead measurements
  - Load balancing metrics
  - Cross-partition vehicle movement counting
  - Accuracy comparisons with ground truth
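A minimal sketch of the two partitioning metrics, assuming vehicles are identified by id and partitions by index (the function names and owner-map representation are assumptions, not existing code):

```python
def load_imbalance(partition_sizes):
    """Max-over-mean load: 1.0 means perfectly balanced partitions."""
    mean = sum(partition_sizes) / len(partition_sizes)
    return max(partition_sizes) / mean if mean else 1.0

def cross_partition_moves(owner_before, owner_after):
    """Count vehicles whose owning partition changed during a step.
    Both arguments map vehicle id -> partition index."""
    return sum(1 for vid, p in owner_before.items() if owner_after.get(vid) != p)

print(load_imbalance([10, 5, 3, 2]))                                   # 2.0
print(cross_partition_moves({1: 0, 2: 0, 3: 1}, {1: 0, 2: 1, 3: 1}))   # 1
```

Memory overhead could be captured separately, e.g. with the standard-library tracemalloc module.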
- Current tests only go up to 1000 vehicles
- Proposal mentions testing with 10K, 50K, and 100K vehicles
- Need larger-scale test scenarios (a generation sketch follows this list)
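As a starting point, a hedged sketch of generating synthetic scenarios at the proposal's scales; the (x, y, speed) tuple format is an assumption, not the simulator's internal vehicle representation:

```python
import random

def make_scenario(n_vehicles, world_size=10_000.0, seed=42):
    """Generate n_vehicles as (x, y, speed) tuples, uniformly placed."""
    rng = random.Random(seed)
    return [(rng.uniform(0, world_size), rng.uniform(0, world_size),
             rng.uniform(5.0, 30.0)) for _ in range(n_vehicles)]

for n in (10_000, 50_000, 100_000):
    print(n, len(make_scenario(n)))
```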
- Clone the repository:

```bash
git clone https://gitlab.maastrichtuniversity.nl/project2-2-cs-hpc-2425/team02.git
cd LCSim
```

- Install dependencies:

```bash
pip install -r requirements.txt
```

Run the benchmark to test different parallelization strategies:

```bash
python examples/benchmark_parallel_sim.py
```

This will:
- Test with different vehicle counts (10-1000)
- Compare sequential vs parallel execution
- Generate performance visualizations
- Save results as 'benchmark_results.png'
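Under the hood, such a comparison is wall-clock timing of the same scenario under each strategy. A minimal sketch of the pattern (`seq_step` and `par_step` are hypothetical stand-ins, not the script's actual functions):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def time_run(step_fn, vehicles, steps=100):
    """Return wall-clock seconds for `steps` simulation steps."""
    start = time.perf_counter()
    for _ in range(steps):
        step_fn(vehicles)
    return time.perf_counter() - start

# Hypothetical stand-ins for the simulator's step functions,
# included only to make the sketch runnable.
def seq_step(vehicles):
    for i in range(len(vehicles)):
        vehicles[i] += 1.0

def par_step(vehicles, workers=4):
    def work(indices):
        for i in indices:
            vehicles[i] += 1.0
    chunks = [range(k, len(vehicles), workers) for k in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(work, chunks))

vehicles = [0.0] * 1000
print(f"speedup: {time_run(seq_step, vehicles) / time_run(par_step, vehicles):.2f}x")
```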
- `lcsim/` - Main package directory
  - `parallel/` - Parallel implementation modules
  - `sequential/` - Sequential implementation modules
- `examples/` - Example scripts and benchmarks
- `tests/` - Test cases
The benchmark results show:
- Sequential processing is faster for small simulations (10-50 vehicles)
- Parallel processing with 4 threads gives optimal performance for larger simulations
- Maximum speedup of ~7% achieved with 1000 vehicles (i.e., sequential time / parallel time ≈ 1.07)
The project includes a benchmarking tool for comparing the performance of the different algorithms and configurations.
The `unified_benchmark.py` script provides a flexible way to benchmark all available simulation algorithms:
- Grid-based partitioning
- Quadtree-based partitioning
- Denoising methods (Kalman, Average; see the sketch below)
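For context on the two denoising families, a minimal sketch assuming scalar measurement streams (the `q`/`r` noise values are illustrative, not the project's tuned settings):

```python
def moving_average(xs, window=5):
    """Average denoiser: mean of the trailing `window` samples."""
    return [sum(xs[max(0, i - window + 1):i + 1]) / (i + 1 - max(0, i - window + 1))
            for i in range(len(xs))]

def kalman_1d(xs, q=1e-3, r=0.25):
    """Scalar Kalman filter with a constant-value model.
    q = process noise, r = measurement noise."""
    est, p = xs[0], 1.0
    out = [est]
    for z in xs[1:]:
        p += q                 # predict: uncertainty grows
        k = p / (p + r)        # Kalman gain
        est += k * (z - est)   # update toward measurement z
        p *= 1 - k
        out.append(est)
    return out

noisy = [1.0, 1.2, 0.8, 1.1, 0.9, 1.05]
print(moving_average(noisy))
print(kalman_1d(noisy))
```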
```bash
# Run the default benchmark suite
python unified_benchmark.py

# Run a small benchmark (faster, fewer configurations)
python unified_benchmark.py --small

# Run a medium benchmark
python unified_benchmark.py --medium

# Run a large benchmark (more vehicles, more thread counts)
python unified_benchmark.py --large

# Only test the best configuration of each method
python unified_benchmark.py --baseline-only

# Customize the benchmark parameters
python unified_benchmark.py --custom --sim-types parallel denoising --methods quadtree grid --threads 1 4 8 --vehicles 1000 5000 10000

# Debug mode with verbose output
python unified_benchmark.py --debug
```

The benchmark results are saved to the `benchmark_results` directory:
- `benchmark_results_final.csv`: Raw benchmark data
- `benchmark_results_parallel.png`: Performance plots for parallel algorithms
- `benchmark_results_denoising.png`: Performance plots for denoising algorithms
- `benchmark_results_comparison.png`: Comparison between all algorithms
- `benchmark_results_thread_scaling.png`: Analysis of performance scaling with thread count
Midway evaluation presentation demo:

- Benchmark:

```bash
python unified_benchmark.py --custom --sim-types parallel --methods serial quadtree grid denoising --threads 1 4 --vehicles 10000 50000 --steps 10 --runs 1
```

- Visualization:

```bash
python examples/animation.py
```