A Universal Plant Disease Detection System powered by Deep Learning (MobileNetV2).
LeafLens AI is an end-to-end machine learning solution that detects plant diseases across multiple species using a single, universal deep learning model. Built with a modern tech stack, this project demonstrates a complete ML pipeline from data preparation and model training to deployment as a production-ready web application.
- Universal Model Architecture: A single MobileNetV2-based model detects diseases across multiple plant species (Apple, Cherry, Corn, Grape, Peach, Pepper, Potato, Tomato, and more), eliminating the need for species-specific models.
- Confidence Threshold Safety: A smart confidence threshold (70% by default) filters out non-plant images and low-confidence predictions, addressing the "Open World" problem in production ML systems.
- Dynamic Model Loading: The backend dynamically loads model files and class names from JSON configuration, making it easy to update models without code changes.
- Dockerized: One command runs the entire stack (backend + frontend).
- Real-Time Inference: Fast predictions using the efficient MobileNetV2 architecture.
- Modern UI: Drag-and-drop interface with instant feedback and dark mode support.
- Production-Ready: Comprehensive error handling, logging, and health checks.
- RESTful API: Well-documented FastAPI service with automatic OpenAPI/Swagger documentation.
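The confidence-threshold behavior can be sketched as a small helper. This is a hypothetical illustration based on the example API responses later in this README (the `format_prediction` name, the 70% default, and the exact wording are assumptions, not the backend's actual code):

```python
# Hypothetical sketch of the confidence-threshold check described above.
# The response keys mirror this README's example API responses; the real
# service layer may differ.

CONFIDENCE_THRESHOLD = 0.70  # default threshold from the README


def format_prediction(class_name: str, confidence: float,
                      threshold: float = CONFIDENCE_THRESHOLD) -> dict:
    """Return a high- or low-confidence response payload."""
    if confidence >= threshold:
        return {"class_name": class_name, "confidence": round(confidence, 2)}
    return {
        "class_name": "Unidentified",
        "confidence": round(confidence, 2),
        "low_confidence": True,
        "message": (
            f"Model confidence ({confidence:.0%}) was too low. "
            "The image may not be of a known plant."
        ),
    }


print(format_prediction("Tomato___Bacterial_spot", 0.95))
print(format_prediction("Tomato___Bacterial_spot", 0.45))
```

Predictions at or above the threshold pass through unchanged; anything below is mapped to an explicit `Unidentified` payload rather than a misleading guess.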
- Docker and Docker Compose installed
- (Optional) Python 3.13+ and Node.js 18+ for local development
1. Clone the repository

   ```bash
   git clone https://github.com/cauegrassi7/leaflens-ai.git
   cd leaflens-ai
   ```

2. Start the application

   ```bash
   docker-compose up --build
   ```

3. Access the application

   - Frontend: http://localhost:3000
   - API Documentation: http://localhost:8000/docs
   - API Health Check: http://localhost:8000/health
The Docker setup automatically:
- Builds both backend and frontend containers
- Mounts the ML models directory for the backend
- Sets up proper networking between services
- Includes health checks and auto-restart policies
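For orientation, a minimal `docker-compose.yml` consistent with the behavior listed above might look like the sketch below. The service names, build contexts, mount path, and health-check command are assumptions; the repository's actual `docker-compose.yml` is authoritative.

```yaml
# Sketch only -- see the repository's docker-compose.yml for the real config.
services:
  backend:
    build: ./backend
    ports:
      - "8000:8000"
    volumes:
      - ./ml/models:/app/ml/models:ro   # mount trained models read-only
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
    restart: unless-stopped

  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    depends_on:
      - backend
    restart: unless-stopped
```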
```bash
# Create and activate a virtual environment
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Run the backend server
uvicorn backend.app.main:app --reload --host 0.0.0.0 --port 8000
```

```bash
cd frontend

# Install dependencies
npm install

# Run development server
npm run dev
```

```
leaflens-ai/
├── backend/                  # FastAPI backend application
│   ├── app/
│   │   ├── api/              # API route handlers
│   │   ├── core/             # Configuration and settings
│   │   ├── schemas/          # Pydantic response models
│   │   ├── services/         # Business logic layer
│   │   └── main.py           # FastAPI application entry point
│   ├── Dockerfile            # Backend container definition
│   └── run.py                # Backend startup script
│
├── frontend/                 # Next.js frontend application
│   ├── src/
│   │   └── app/              # Next.js app directory
│   │       ├── page.tsx      # Main application page
│   │       └── layout.tsx    # Root layout component
│   ├── Dockerfile            # Frontend container definition
│   └── package.json          # Node.js dependencies
│
├── ml/                       # Machine learning pipeline
│   ├── data/                 # Training and validation datasets
│   │   ├── raw/
│   │   ├── train/            # Training images (organized by class)
│   │   └── val/              # Validation images (organized by class)
│   ├── models/               # Trained model files
│   │   ├── plant_disease_model_vuniversal_v1.keras
│   │   └── classes_vuniversal_v1.json
│   ├── notebooks/            # Jupyter notebooks for exploration
│   │   ├── 1_data_exploration.ipynb
│   │   └── 2_model_training.ipynb
│   └── scripts/
│       └── train.py          # Model training script
│
├── docker-compose.yml        # Docker Compose configuration
├── requirements.txt          # Python dependencies
└── README.md                 # This file
```
- ml/: Contains the complete ML pipeline, including data preprocessing, model training scripts, and trained model artifacts. The training pipeline uses TensorFlow/Keras to build a MobileNetV2-based classifier.
- backend/: FastAPI REST API that serves model predictions. Handles image preprocessing, model inference, and response formatting with proper error handling.
- frontend/: Next.js 16 application with TypeScript and Tailwind CSS. Provides an intuitive drag-and-drop interface for uploading plant images and displaying prediction results.
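As a rough illustration of how the backend could load the versioned artifacts under `ml/models/` by name, consider the sketch below. The file names come from the project tree above; the 224x224 input size and [-1, 1] pixel scaling are assumptions based on common MobileNetV2 conventions, not confirmed details of the service layer:

```python
# Sketch of dynamic model/class loading, assuming the artifact naming
# shown under ml/models/. Preprocessing details are assumptions.
import json
from pathlib import Path

import numpy as np
from PIL import Image

MODELS_DIR = Path("ml/models")


def load_artifacts(version: str = "vuniversal_v1"):
    """Load the Keras model and its class-name list for a given version."""
    from tensorflow import keras  # imported lazily; preprocessing stays TF-free

    model = keras.models.load_model(
        MODELS_DIR / f"plant_disease_model_{version}.keras"
    )
    class_names = json.loads(
        (MODELS_DIR / f"classes_{version}.json").read_text()
    )
    return model, class_names


def preprocess(img: Image.Image) -> np.ndarray:
    """Resize to 224x224 and scale pixels to [-1, 1] (MobileNetV2 convention)."""
    img = img.convert("RGB").resize((224, 224))
    return np.asarray(img, dtype=np.float32)[None, ...] / 127.5 - 1.0


def predict(model, class_names, img: Image.Image):
    """Return (class_name, confidence) for a single PIL image."""
    probs = model.predict(preprocess(img), verbose=0)[0]
    idx = int(np.argmax(probs))
    return class_names[idx], float(probs[idx])
```

Keeping the version string in one place is what makes swapping in a retrained model a configuration change rather than a code change.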
The intuitive drag-and-drop interface makes it easy to upload plant images for analysis.
Real-time disease detection with confidence scores and detailed information.
Root endpoint providing API metadata, version information, and available endpoints.
Health check endpoint. Returns system status and model loading state.
Response:

```json
{
  "status": "healthy",
  "classes_count": 38
}
```

Upload an image file to get a plant disease prediction.
Request: multipart/form-data with file field containing an image
Response (High Confidence):

```json
{
  "class_name": "Tomato___Bacterial_spot",
  "confidence": 0.95
}
```

Response (Low Confidence):

```json
{
  "class_name": "Unidentified",
  "confidence": 0.45,
  "low_confidence": true,
  "message": "Model confidence (45%) was too low. The image may not be of a known plant."
}
```

Interactive API documentation (Swagger UI)
Alternative API documentation (ReDoc)
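A minimal Python client for the prediction endpoint might look like the sketch below. The `file` form field comes from the request description above, but the `/predict` path is an assumption; check the Swagger UI at `/docs` for the actual route:

```python
# Hypothetical client sketch -- the "/predict" path is assumed, not
# confirmed by this README; the multipart "file" field is documented above.
import requests

API_URL = "http://localhost:8000"


def classify_leaf(image_path: str, api_url: str = API_URL) -> dict:
    """POST an image as multipart/form-data and return the JSON prediction."""
    with open(image_path, "rb") as f:
        resp = requests.post(f"{api_url}/predict", files={"file": f}, timeout=30)
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    result = classify_leaf("leaf.jpg")
    if result.get("low_confidence"):
        print(result["message"])
    else:
        print(f"{result['class_name']} ({result['confidence']:.0%})")
```

The `low_confidence` flag lets a client distinguish a genuine diagnosis from an "Unidentified" fallback without parsing the message text.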
- FastAPI: Modern, fast web framework for building APIs
- TensorFlow/Keras: Deep learning framework for model inference
- Pillow: Image processing library
- Uvicorn: ASGI server for FastAPI
- Next.js 16: React framework with App Router
- TypeScript: Type-safe JavaScript
- Tailwind CSS 4: Utility-first CSS framework
- Axios: HTTP client for API requests
- Lucide React: Modern icon library
- TensorFlow 2.18: Deep learning framework
- Keras 3.3+: High-level neural networks API
- MobileNetV2: Efficient CNN architecture for mobile/edge devices
- NumPy: Numerical computing
- Pandas: Data manipulation and analysis
- Docker: Containerization
- Docker Compose: Multi-container orchestration
This project is licensed under the terms specified in the LICENSE file.
Contributions are welcome! Please feel free to submit a Pull Request.
For questions or inquiries, please open an issue on GitHub.
Built with ❤️ using Python, TensorFlow, FastAPI, and Next.js

