This project demonstrates a Smart Traffic Management System powered by Reinforcement Learning (RL). The system simulates traffic patterns in a custom environment and trains an RL agent to optimize traffic light timings, reducing congestion and improving traffic flow.
- Goal: Minimize traffic congestion by dynamically controlling traffic lights using an RL-based approach.
- Environment: A custom traffic environment simulates road intersections with varying levels of congestion.
- Model: The RL agent is based on Proximal Policy Optimization (PPO), implemented using Stable-Baselines3.
- Visualization: Real-time traffic data is visualized to evaluate the agent's performance.
Key features:
- Custom Gym Environment: Simulates traffic flow and represents congestion at four intersections (a minimal environment sketch follows this feature list).
- Dynamic Traffic Patterns: Includes simulated rush hours with varying vehicle inflows.
- RL Optimization: PPO agent learns to manage traffic lights for optimal flow.
- Performance Metrics: Tracks cumulative rewards and congestion levels over time.
- Visualization: Plots traffic trends to illustrate system improvements.
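A minimal sketch of such an environment is shown below. The class and attribute names (`TrafficEnv`, `_inflow`, the rush-hour window, the fixed reward and queue limits) are illustrative assumptions rather than the repository's exact implementation, and the sketch is written against the Gymnasium API that current Stable-Baselines3 releases expect; the project's environment may use the older `gym` package instead.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class TrafficEnv(gym.Env):
    """Toy traffic environment: four intersections, one light choice per step.

    Observation: queue length (congestion) at each of the four intersections.
    Action: index of the intersection whose light turns green this step.
    Reward: negative total congestion, so less congestion means a higher reward.
    """

    def __init__(self, max_steps=200):
        super().__init__()
        self.max_steps = max_steps
        self.action_space = spaces.Discrete(4)
        self.observation_space = spaces.Box(low=0.0, high=100.0, shape=(4,), dtype=np.float32)
        self.queues = np.zeros(4, dtype=np.float32)
        self.t = 0

    def _inflow(self):
        # Simulated rush hour: heavier vehicle inflow in the middle of the episode.
        base = 1.0
        rush = 3.0 if self.max_steps * 0.3 < self.t < self.max_steps * 0.6 else 0.0
        return self.np_random.poisson(base + rush, size=4).astype(np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.queues = self.np_random.uniform(0, 10, size=4).astype(np.float32)
        self.t = 0
        return self.queues.copy(), {}

    def step(self, action):
        self.t += 1
        self.queues += self._inflow()                               # vehicles arrive everywhere
        self.queues[action] = max(0.0, self.queues[action] - 8.0)   # green light clears vehicles
        self.queues = np.clip(self.queues, 0.0, 100.0)
        reward = -float(self.queues.sum())                          # penalize total congestion
        terminated = False
        truncated = self.t >= self.max_steps
        return self.queues.copy(), reward, terminated, truncated, {}
```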
Key insights:
- Reinforcement Learning: The PPO agent effectively learns policies that reduce congestion during high-traffic hours (a training sketch follows this list).
- Scalability: The environment can be extended to simulate larger road networks.
- Real-Time Application: Can serve as a foundation for deploying AI-driven traffic systems in smart cities.
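Training follows the standard Stable-Baselines3 workflow. The sketch below assumes the `TrafficEnv` class from the earlier snippet is in scope; the timestep budget and other hyperparameters are illustrative defaults, not the project's tuned values.

```python
from stable_baselines3 import PPO
from stable_baselines3.common.env_checker import check_env

# Assumes TrafficEnv from the environment sketch above is defined in scope.
env = TrafficEnv()
check_env(env)  # sanity-check that the custom environment follows the expected API

model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=100_000)
model.save("ppo_traffic")

# Quick rollout with the trained policy to inspect its behaviour.
obs, _ = env.reset()
for _ in range(200):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```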
The system generates plots for the following (a plotting sketch follows this list):
- Cumulative Rewards: Showing how the agent improves over time.
- Traffic Flow Trends: Demonstrating the reduction of congestion across intersections.
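These plots can be produced with matplotlib from values logged during training and evaluation. A minimal sketch is shown below; the variable names (`episode_rewards`, `congestion_history`) are illustrative and assume those series have already been collected.

```python
import matplotlib.pyplot as plt


def plot_results(episode_rewards, congestion_history):
    """Plot cumulative reward per episode and total congestion over time."""
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))

    ax1.plot(episode_rewards)
    ax1.set_title("Cumulative Reward per Episode")
    ax1.set_xlabel("Episode")
    ax1.set_ylabel("Cumulative reward")

    ax2.plot(congestion_history)
    ax2.set_title("Total Congestion over Time")
    ax2.set_xlabel("Step")
    ax2.set_ylabel("Vehicles queued")

    fig.tight_layout()
    plt.show()
```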
Future work:
- Integrate real-world traffic data for more realistic simulations.
- Expand the model to handle multi-lane and multi-direction intersections.
- Add support for SUMO (Simulation of Urban MObility) for more detailed traffic simulations.