This project tackles the critical issue of reduced visibility in adverse weather, focusing on image deraining for autonomous driving. Rain obscures road markings and other features, creating safety hazards for the perception and decision-making systems of autonomous vehicles. The project employs non-learning-based techniques to enhance visibility in real time without relying on extensive training datasets, ensuring adaptability and efficiency across diverse and challenging rain scenarios.
Autonomous driving systems depend heavily on clear and accurate visual data. Rain introduces challenges that include:
- Visibility Obstruction: Rain streaks, water droplets, and splashes distort the camera view, obscuring road markings and obstacles.
- Dynamic Weather Conditions: Variations in rain intensity, angles, and environments make it difficult to generalize solutions.
- Latency Requirements: Autonomous systems require low-latency processing to make split-second decisions. Many learning-based approaches fail to meet these real-time demands.
Deep learning models have been widely used for image enhancement, but they face limitations:
- Dependence on Large Training Datasets:
- Collecting diverse datasets for all possible rain scenarios is impractical.
- Generalization to unseen rain conditions remains a challenge.
- High Computational Overhead:
- Learning-based models are computationally intensive, leading to increased latency.
- Real-time applications, such as autonomous driving, demand faster processing.
- Limited Adaptability:
- Learning-based models typically require retraining to handle rain conditions outside their training distribution.
- Non-learning-based approaches, by contrast, adapt without retraining, making them suitable for diverse and dynamic environments.
This project proposes a non-learning-based deraining system using classical image processing techniques and frequency-domain transformations to:
- Improve Visibility: Remove rain streaks and enhance image clarity for autonomous driving perception systems.
- Ensure Real-Time Performance: Provide low-latency processing suitable for real-world applications.
- Enhance Generalization: Adapt to diverse and unseen rain conditions without the need for training datasets.
- Rain streaks and obstructions are identified and removed using:
- Edge detection
- Frequency filtering
- Contrast enhancement
- Transforming images into the frequency domain allows precise separation of rain artifacts from useful image features.
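As an illustration of this frequency-domain idea, the sketch below (assuming only a NumPy dependency; the function name and `keep_fraction` parameter are hypothetical, not taken from the repository) applies a circular low-pass mask to the 2-D FFT of a grayscale image, suppressing the high-frequency band where thin rain streaks tend to concentrate:

```python
import numpy as np

def frequency_filter(gray, keep_fraction=0.3):
    """Suppress high-frequency components of a grayscale image
    via a circular low-pass mask in the 2-D Fourier domain."""
    # Shift the zero-frequency component to the center of the spectrum.
    spectrum = np.fft.fftshift(np.fft.fft2(gray))
    rows, cols = gray.shape
    crow, ccol = rows // 2, cols // 2
    # Build a circular mask that keeps only low frequencies.
    radius = int(min(rows, cols) * keep_fraction)
    y, x = np.ogrid[:rows, :cols]
    mask = ((y - crow) ** 2 + (x - ccol) ** 2 <= radius ** 2).astype(float)
    # Invert the transform and discard the negligible imaginary part.
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
    return np.real(filtered)
```

A production pipeline would likely use a more selective mask (for example a directional one, since rain streaks are roughly vertical), because a plain low-pass also blurs legitimate fine detail such as road markings.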
- The processing pipeline includes:
- Rain Detection: Identifies rain streaks using edge-enhanced filtering techniques.
- Artifact Removal: Applies frequency-based methods to selectively remove rain artifacts.
- Image Restoration: Enhances contrast and restores image clarity while preserving essential details.
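A minimal, self-contained sketch of those three stages using only NumPy follows; all function names, thresholds, and the local-average fill-in are illustrative assumptions, not the repository's actual implementation in filters.py or derain_filter.py:

```python
import numpy as np

def detect_rain(gray, thresh=0.15):
    """Flag likely rain pixels via strong vertical intensity
    gradients (rain streaks are predominantly vertical)."""
    grad_y = np.abs(np.diff(gray, axis=0, prepend=gray[:1]))
    return grad_y > thresh

def box_blur(gray, k=5):
    """Simple k-by-k mean filter used as a local fill-in."""
    pad = k // 2
    padded = np.pad(gray, pad, mode="edge")
    acc = np.zeros_like(gray)
    for dy in range(k):
        for dx in range(k):
            acc += padded[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
    return acc / (k * k)

def remove_artifacts(gray, mask):
    """Replace flagged pixels with a local average (a simplified
    stand-in for the frequency-based removal step)."""
    out = gray.copy()
    out[mask] = box_blur(gray)[mask]
    return out

def restore(gray):
    """Linear contrast stretch back to the [0, 1] range."""
    lo, hi = gray.min(), gray.max()
    return (gray - lo) / (hi - lo + 1e-8)

def derain(gray):
    """Detection -> artifact removal -> restoration."""
    return restore(remove_artifacts(gray, detect_rain(gray)))
```

Each stage stays a pure array-to-array function, so individual steps can be swapped out (for instance, replacing the mean-filter fill-in with a frequency-domain method) without changing the rest of the pipeline.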
- Python Implementation:
- All algorithms are implemented in Python for simplicity and flexibility.
- Real-Time Focus:
- Designed to meet the low-latency demands of autonomous driving systems.
- No Dependency on Training Datasets:
- Completely independent of deep learning models or pre-trained networks.
.
├── data/ # Example input images
├── derain_filter.py # Comprehensive deraining pipeline
├── filters.py # Core filter implementations
├── main.py # Main script for processing
├── output/ # Intermediate output images
├── README.md # Project overview and instructions
├── requirements.txt # Required libraries
├── results/ # Example output images
├── sols.py # Solution functions
└── utils.py # Utility functions
- Clone the repository:

  git clone https://github.com/bob020416/Image_Deraining_For_Autonomous_Driving.git

- Navigate to the project directory:

  cd Image_Deraining_For_Autonomous_Driving

- Install required dependencies:

  pip install -r requirements.txt

- Run the main script with a test image:

  python main.py --input data/test_image.jpg --output results/derained_image.jpg
- Extend the pipeline to handle other adverse weather conditions like snow and fog.
- Optimize the algorithms for deployment on edge devices used in autonomous vehicles.
- Explore hybrid approaches combining classical and lightweight learning-based techniques for enhanced performance.

