A production-ready, cloud-deployed web application for brain tumor detection using deep learning. Features a medical-grade scanning interface, real-time predictions, and comprehensive analytics dashboard.
- Frontend: https://brain-tumor-mcug.onrender.com/
- API Docs: https://brain-tumor-api-utwn.onrender.com
- Health Check: https://brain-tumor-api-utwn.onrender.com/health
- Medical-Grade Scanning Interface: 6-stage scanning visualization with real-time progress
- 97.9% Accuracy: State-of-the-art CNN model trained on a brain MRI dataset
- Instant Results: ~300ms prediction time with confidence scores
- Detailed Analysis: Probability breakdown for Tumor/Healthy classification
- Interactive Visualizations: Training history, accuracy, loss, AUC metrics
- Performance Analytics: Confusion matrix, precision, recall, F1-score
- Real-time Charts: Built with Chart.js and Plotly for dynamic data exploration
- Downloadable Reports: Export metrics and predictions as CSV
- Paginated Image Browser: View 678+ test images with smooth navigation
- Advanced Filtering: Filter by prediction result (correct/incorrect)
- Search Capability: Find images by filename or prediction
- Prediction Overlays: Each image shows its prediction and confidence
- API Key System: Secure endpoints with authentication
- Rate Limiting: Prevent abuse with configurable limits
- Live Testing: Built-in API tester with/without authentication
- Code Examples: Python, JavaScript, Node.js, cURL examples in 8 formats
- Cloud Deployed: Auto-deploy to Render with GitHub Actions
- Docker Support: Containerized for easy local/cloud deployment
- CORS Configured: Secure cross-origin resource sharing
- Health Monitoring: Built-in health checks and logging
- Responsive Design: Works seamlessly on desktop, tablet, and mobile
| Metric | Value |
|---|---|
| Validation Accuracy | 97.9% |
| AUC Score | 0.997 |
| Precision | 98.7% |
| Recall | 97.4% |
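The F1-score reported on the dashboard follows directly from the precision and recall above; a quick check:

```python
# F1 is the harmonic mean of precision and recall
precision = 0.987
recall = 0.974
f1 = 2 * precision * recall / (precision + recall)
print(f"F1-score: {f1:.3f}")  # ~0.980
```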
```
brain_tumor/
├── backend/                  # FastAPI backend application
│   ├── routers/              # API route handlers
│   │   ├── predict.py        # Image prediction endpoints
│   │   ├── metrics.py        # Training metrics endpoints
│   │   └── gallery.py        # Image gallery endpoints
│   ├── services/             # Business logic services
│   │   ├── model_service.py  # ML model handling
│   │   └── data_service.py   # Data processing
│   ├── main.py               # FastAPI application entry point
│   ├── config.py             # Configuration management
│   ├── requirements.txt      # Python dependencies
│   ├── .env.example          # Environment variables template
│   └── .gitignore
│
├── frontend/                 # React frontend application
│   ├── src/
│   │   ├── components/       # React components
│   │   │   ├── Layout/       # Main layout component
│   │   │   ├── ImageUpload/  # Image upload component
│   │   │   ├── Charts/       # Chart components
│   │   │   └── CodeBlock/    # Code display component
│   │   ├── pages/            # Page components
│   │   │   ├── HomePage.jsx
│   │   │   ├── PredictPage.jsx
│   │   │   ├── DashboardPage.jsx
│   │   │   ├── GalleryPage.jsx
│   │   │   └── AboutPage.jsx
│   │   ├── services/         # API service layer
│   │   │   └── api.js
│   │   ├── App.jsx           # Main App component
│   │   └── main.jsx          # Application entry point
│   ├── package.json          # Node.js dependencies
│   ├── vite.config.js        # Vite configuration
│   └── index.html
│
├── model/                    # Trained model files
│   └── final_brain_tumor_model_97.keras
│
├── model_training_phase/     # Training data and logs
│   ├── training_history.csv
│   ├── training_history_2.csv
│   └── model_predictions.csv
│
├── image/                    # Test images
│   └── test_image.zip
│
├── colab_code/               # Jupyter notebooks
│   └── brain_tumor.ipynb
│
├── setup_backend.sh          # Backend setup script
├── setup_frontend.sh         # Frontend setup script
├── docker-compose.yml        # Docker orchestration
└── README.md                 # This file
```
- FastAPI: Modern, fast web framework for building APIs
- TensorFlow/Keras: Deep learning framework
- Uvicorn: ASGI server
- Pydantic: Data validation
- Pillow: Image processing
- Pandas: Data manipulation
- React 18: UI library with hooks
- Material-UI: Component library
- Chart.js: Data visualization
- Axios: HTTP client
- React Router: Navigation
- Vite: Build tool
- Python: 3.11 or higher
- Node.js: 16 or higher
- npm: 7+ or yarn
- RAM: 4GB minimum (8GB recommended for model loading)
- Browser: Chrome, Firefox, Safari, or Edge (latest versions)
- Git: For version control
- GitHub Account: To host your repository
- Render Account: Free tier available at render.com
- Git: To push code to GitHub
Deploy to the cloud in 10 minutes with automatic GitHub deployments:
Follow the Complete Deployment Guide in DEPLOYMENT.md.
Quick summary:
- Push code to GitHub
- Connect repository to Render
- Configure environment variables
- Deploy backend and frontend
- Set up GitHub Actions for auto-deploy
Benefits:
- ✅ Always live (24/7 with paid tier)
- ✅ Auto-deploy on git push
- ✅ Free SSL certificates
- ✅ Global CDN for frontend
- ✅ Automatic scaling
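A minimal GitHub Actions workflow for the auto-deploy step might look like the following sketch. The secret name `RENDER_DEPLOY_HOOK_URL` is an assumption; the actual deploy-hook URL comes from your Render service's settings (Render can also auto-deploy on push without any workflow):

```yaml
name: Deploy to Render
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      # Trigger Render's deploy hook for this service
      - name: Trigger deploy hook
        run: curl -fsS -X POST "${{ secrets.RENDER_DEPLOY_HOOK_URL }}"
```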
Perfect for testing and development:
Backend:

```bash
cd backend
chmod +x ../setup_backend.sh
../setup_backend.sh
```

Frontend:

```bash
cd frontend
chmod +x ../setup_frontend.sh
../setup_frontend.sh
```

Backend (manual setup):

1. Create and activate a virtual environment:

```bash
cd backend
python -m venv venv
source venv/bin/activate   # macOS/Linux
venv\Scripts\activate      # Windows
```

2. Install dependencies:

```bash
pip install --upgrade pip
pip install -r requirements.txt
```

3. Configure environment:

```bash
cp .env.example .env   # edit .env if needed
```

4. Run the backend:

```bash
python main.py
```
The API will be available at:
- API: http://localhost:8000
- Interactive docs: http://localhost:8000/api/docs
- Alternative docs: http://localhost:8000/api/redoc
Frontend (manual setup):

1. Install dependencies:

```bash
cd frontend
npm install
```

2. Start the development server:

```bash
npm run dev
```

The application will be available at http://localhost:3000.

3. Build for production:

```bash
npm run build
npm run preview
```
Run everything with one command:

```bash
# Build and start containers
docker-compose up --build

# Or run in background
docker-compose up -d --build

# Stop containers
docker-compose down
```

Access the application:
- Frontend: http://localhost:3000
- Backend API: http://localhost:8000
- API Docs: http://localhost:8000/api/docs
Docker benefits:
- ✅ Consistent environment
- ✅ Easy dependency management
- ✅ One-command startup
- ✅ Isolated from system
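The repository ships its own `docker-compose.yml`; purely as an illustration, a minimal two-service setup along these lines (ports and build contexts assumed from the project layout above) would be:

```yaml
services:
  backend:
    build: ./backend        # FastAPI app
    ports:
      - "8000:8000"
    env_file:
      - ./backend/.env
  frontend:
    build: ./frontend       # React + Vite app
    ports:
      - "3000:3000"
    depends_on:
      - backend
```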
Once deployed or running locally:
- Swagger UI: https://brain-tumor-api-utwn.onrender.com/api/docs (or http://localhost:8000/api/docs)
- ReDoc: https://brain-tumor-api-utwn.onrender.com/api/redoc (or http://localhost:8000/api/redoc)
| Endpoint | Method | Description | Auth |
|---|---|---|---|
| `/api/predict/` | POST | Predict single image | Optional |
| `/api/predict/batch` | POST | Predict multiple images | Optional |
| `/api/predict/model-info` | GET | Get model information | No |
| Endpoint | Method | Description |
|---|---|---|
| `/api/metrics/training-history` | GET | Get training metrics |
| `/api/metrics/performance-summary` | GET | Get performance summary |
| `/api/metrics/confusion-matrix` | GET | Get confusion matrix |
| `/api/metrics/download/training-history` | GET | Download training CSV |
| `/api/metrics/download/predictions` | GET | Download predictions CSV |
| Endpoint | Method | Description |
|---|---|---|
| `/api/gallery/images` | GET | Get paginated images |
| `/api/gallery/image/{path}` | GET | Get specific image |
| `/api/gallery/stats` | GET | Get gallery statistics |
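With 678+ test images, the gallery's page count is simple arithmetic. A sketch (the `page_size` name is an assumption here; check `/api/docs` for the endpoint's real query parameters):

```python
import math

def total_pages(total_images: int, page_size: int) -> int:
    """Number of pages needed to browse all gallery images."""
    return math.ceil(total_images / page_size)

print(total_pages(678, 24))  # 29 pages at 24 images per page
```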
| Endpoint | Method | Description |
|---|---|---|
| `/health` | GET | Health check |
| `/api/info` | GET | API configuration |
| `/api/keys/verify` | POST | Verify API key |
Python:

```python
import requests

# Predict single image
url = "https://your-api.onrender.com/api/predict/"  # or http://localhost:8000/api/predict/

with open('brain_mri.jpg', 'rb') as f:
    files = {'file': ('brain_mri.jpg', f, 'image/jpeg')}
    response = requests.post(url, files=files)

result = response.json()
print(f"Prediction: {result['data']['prediction']}")
print(f"Confidence: {result['data']['confidence']:.2f}%")
print(f"Probabilities: {result['data']['all_predictions']}")

# With API key (if required) -- reopen the file for the second request
headers = {'X-API-Key': 'your-api-key-here'}
with open('brain_mri.jpg', 'rb') as f:
    files = {'file': ('brain_mri.jpg', f, 'image/jpeg')}
    response = requests.post(url, files=files, headers=headers)
```

JavaScript:

```javascript
import axios from 'axios';

const predictImage = async (file) => {
  const formData = new FormData();
  formData.append('file', file);
  try {
    const response = await axios.post(
      'https://your-api.onrender.com/api/predict/',
      formData,
      {
        headers: {
          'Content-Type': 'multipart/form-data',
          // 'X-API-Key': 'your-api-key-here' // Optional
        }
      }
    );
    console.log('Prediction:', response.data.data.prediction);
    console.log('Confidence:', response.data.data.confidence);
    console.log('Probabilities:', response.data.data.all_predictions);
  } catch (error) {
    console.error('Error:', error.response?.data || error.message);
  }
};
```

cURL:

```bash
# Without API key
curl -X POST "https://your-api.onrender.com/api/predict/" \
  -H "accept: application/json" \
  -H "Content-Type: multipart/form-data" \
  -F "file=@brain_mri.jpg"

# With API key
curl -X POST "https://your-api.onrender.com/api/predict/" \
  -H "accept: application/json" \
  -H "Content-Type: multipart/form-data" \
  -H "X-API-Key: your-api-key-here" \
  -F "file=@brain_mri.jpg"
```

Node.js:

```javascript
const axios = require('axios');
const FormData = require('form-data');
const fs = require('fs');

async function predictImage(imagePath) {
  const formData = new FormData();
  formData.append('file', fs.createReadStream(imagePath));
  const response = await axios.post(
    'https://your-api.onrender.com/api/predict/',
    formData,
    {
      headers: {
        ...formData.getHeaders(),
        // 'X-API-Key': 'your-api-key-here' // Optional
      }
    }
  );
  return response.data;
}

// Usage
predictImage('./brain_mri.jpg')
  .then(result => {
    console.log('Prediction:', result.data.prediction);
    console.log('Confidence:', result.data.confidence);
  })
  .catch(error => console.error('Error:', error.message));
```

Backend `.env` (development):

```bash
# Application
APP_NAME="Brain Tumor Detection API"
APP_VERSION="1.0.0"
DEBUG=True

# Server
HOST=0.0.0.0
PORT=8000

# CORS (add your frontend URL)
ALLOWED_ORIGINS=["http://localhost:3000", "http://localhost:5173"]

# Model
MODEL_PATH=../model/final_brain_tumor_model_97.keras
MODEL_INPUT_SIZE=224
CONFIDENCE_THRESHOLD=0.5

# Data
TRAINING_HISTORY_PATH=../model_training_phase/training_history.csv
TRAINING_HISTORY_2_PATH=../model_training_phase/training_history_2.csv
MODEL_PREDICTIONS_PATH=../model_training_phase/model_predictions.csv
TEST_IMAGES_PATH=../image/test_image

# Upload
MAX_UPLOAD_SIZE=10485760  # 10MB
ALLOWED_EXTENSIONS=[".jpg", ".jpeg", ".png"]

# Logging
LOG_LEVEL=INFO
```

Backend `.env` (production overrides):

```bash
DEBUG=False
PORT=10000
ALLOWED_ORIGINS=["https://your-frontend.onrender.com"]
MODEL_PATH=./model/final_brain_tumor_model_97.keras
```

Frontend `.env` (development):

```bash
VITE_API_URL=http://localhost:8000
```

Frontend `.env` (production):

```bash
VITE_API_URL=https://your-backend.onrender.com
```

Note: Update URLs after deployment. See DEPLOYMENT.md for details.
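The `CONFIDENCE_THRESHOLD` setting suggests how the model's sigmoid output is mapped to a label. A hedged sketch of that logic (an illustration, not the actual `model_service.py` code):

```python
def classify(tumor_probability: float, threshold: float = 0.5):
    """Map a sigmoid output in [0, 1] to a label and a confidence percentage."""
    if tumor_probability >= threshold:
        return "Tumor", round(tumor_probability * 100, 2)
    return "Healthy", round((1 - tumor_probability) * 100, 2)

print(classify(0.92))  # ('Tumor', 92.0)
print(classify(0.08))  # ('Healthy', 92.0)
```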
The model was trained on a brain MRI dataset with the following configuration:
- Architecture: Convolutional Neural Network (CNN)
- Input Size: 224x224x3 (RGB images)
- Optimizer: Adam with learning rate scheduling
- Loss Function: Binary Crossentropy
- Metrics: Accuracy, AUC, Precision, Recall
- Training Epochs: 25 (with early stopping)
- Batch Size: 32
- Data Augmentation: Rotation, flip, zoom, shift
The training notebook is available at `colab_code/brain_tumor.ipynb`.
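For reference, the binary crossentropy loss listed above, for a single label y ∈ {0, 1} and predicted probability p:

```python
import math

def binary_crossentropy(y: int, p: float) -> float:
    """BCE for one sample: -(y*log(p) + (1-y)*log(1-p))."""
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

# Confident correct prediction -> small loss; confident wrong -> large loss
print(round(binary_crossentropy(1, 0.9), 4))  # 0.1054
print(round(binary_crossentropy(1, 0.1), 4))  # 2.3026
```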
Model not loading:
- Ensure the model file exists at the path specified in `.env`
- Check that you have enough RAM (the model requires ~2GB)
- Verify TensorFlow is installed correctly

Port already in use:

```bash
# Change port in .env or use:
PORT=8001 python main.py
```

API connection errors:
- Verify the backend is running on http://localhost:8000
- Check CORS settings in the backend `.env`
- Ensure proxy settings in `vite.config.js` are correct

Build errors:

```bash
# Clear cache and reinstall
rm -rf node_modules package-lock.json
npm install
```

Contributions are welcome! Please follow these steps:

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Brain MRI dataset providers
- TensorFlow and Keras teams for the ML framework
- FastAPI and React communities for excellent documentation
- Material-UI contributors for beautiful components
- Render for easy cloud deployment
For issues, questions, or suggestions:
- Open an issue on GitHub
- Email: your.email@example.com
- Issues: GitHub Issues
- Documentation: Full Docs
- Quick Start: Quick Start Guide
This is a research and educational project.
This application is intended for:
- Educational purposes
- Research demonstrations
- Machine learning portfolio projects
- Technical proof of concept
NOT intended for:
- Clinical diagnosis
- Medical decision making
- Patient care
- Production medical use
For any clinical application, this system would require:
- Regulatory approval (FDA, CE marking, etc.)
- Clinical validation studies
- HIPAA compliance implementation
- Professional medical oversight
- Proper medical device classification
Always consult qualified healthcare professionals for medical diagnosis and treatment.