## Supported Versions

We release patches for security vulnerabilities in the following versions:
| Version | Supported |
|---|---|
| 1.0.x | ✅ |
| < 1.0 | ❌ |
## Reporting a Vulnerability

We take the security of Custom Image Classifier seriously. If you believe you have found a security vulnerability, please report it to us as described below.
- Do not open a public GitHub issue for security vulnerabilities
- Do not disclose the vulnerability publicly until it has been addressed
- Email the maintainers directly at: [your-security-email@example.com]
- Provide detailed information, including:
  - Type of vulnerability
  - Full paths of source file(s) related to the vulnerability
  - Location of the affected source code (tag/branch/commit or direct URL)
  - Step-by-step instructions to reproduce the issue
  - Proof-of-concept or exploit code (if possible)
  - Impact of the issue, including how an attacker might exploit it
## What to Expect

- Acknowledgment: We will acknowledge receipt of your vulnerability report within 48 hours
- Assessment: We will assess the vulnerability and determine its impact and severity
- Timeline: We aim to provide an initial response within 5 business days
- Resolution: We will work to address confirmed vulnerabilities as quickly as possible
- Disclosure: Once the vulnerability is fixed, we will:
  - Release a security advisory
  - Credit you in the advisory (if desired)
  - Notify users to update
## Disclosure Process

1. Security vulnerability is reported privately
2. Maintainers investigate and confirm the issue
3. A fix is developed and tested
4. A new version is released with the security fix
5. A security advisory is published
6. Users are notified to update
## Security Considerations

### File Uploads

- Maximum file size: 500MB (configurable in `app.py`)
- Allowed file types: PNG, JPG, JPEG, GIF, BMP, ZIP
- File validation: Basic extension checking
- Recommendation: Deploy behind a reverse proxy with additional file validation
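As one illustration, the basic extension check can be tightened into an explicit allowlist with a size cap. This is a minimal sketch, not the app's actual implementation; the helper name and `MAX_BYTES` constant are illustrative:

```python
from pathlib import Path

# Allowlist mirroring the documented upload types
ALLOWED_EXTENSIONS = {'.png', '.jpg', '.jpeg', '.gif', '.bmp', '.zip'}
MAX_BYTES = 500 * 1024 * 1024  # 500MB, mirroring the configurable default

def is_upload_allowed(filename: str, size_bytes: int) -> bool:
    """Reject files with non-allowlisted extensions or sizes over the cap."""
    ext = Path(filename).suffix.lower()
    return ext in ALLOWED_EXTENSIONS and 0 < size_bytes <= MAX_BYTES

print(is_upload_allowed("cat.png", 1024))          # True
print(is_upload_allowed("shell.php", 1024))        # False
print(is_upload_allowed("big.zip", MAX_BYTES + 1)) # False
```

Extension checks alone are still weak; combine them with content verification (see Input Validation below is not required — any magic-byte or image-library check works).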
### Model Security

- Untrusted models: Do not load model files from untrusted sources
- Model poisoning: Trained models can be maliciously crafted
- Recommendation: Only use models you have trained yourself or from trusted sources
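One lightweight, framework-independent safeguard is to pin a SHA-256 checksum for each model file you distribute and verify it before loading. A sketch (file names and the demo bytes are illustrative):

```python
import hashlib
from pathlib import Path

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, streaming to handle large models."""
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(8192), b''):
            h.update(chunk)
    return h.hexdigest()

def verify_model(path: str, expected_digest: str) -> None:
    """Raise before loading if the model file does not match its pinned checksum."""
    actual = sha256_of(path)
    if actual != expected_digest:
        raise ValueError(f"Checksum mismatch for {path}: {actual}")

# Pin the digest of a known-good model, then verify before any torch.load call
Path('model.pt').write_bytes(b'demo weights')
pinned = sha256_of('model.pt')
verify_model('model.pt', pinned)  # passes silently when the file is intact
```

A checksum does not make a malicious model safe — it only guarantees the file is the one you vetted.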
### API Security

- Authentication: Currently not implemented
- Rate limiting: Currently not implemented
- Recommendation: Deploy behind an API gateway with proper authentication and rate limiting for production use
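For Flask apps, Flask-Limiter is the usual choice; the underlying idea is a token bucket, sketched framework-free below (class and parameter names are illustrative):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: refills `rate` tokens per second,
    allowing bursts of up to `capacity` requests."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; deny the request otherwise."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=3)
print([bucket.allow() for _ in range(5)])  # burst of 3 allowed, then denied
```

In production you would keep one bucket per client key (IP, API token) and return HTTP 429 on denial.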
### Web Security

- CSRF protection: Currently limited
- XSS protection: Basic sanitization in place
- Recommendation: Use proper authentication and HTTPS in production
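Whatever the framework, the core XSS defense is escaping untrusted data before it reaches HTML. Python's standard library shows the idea:

```python
import html

user_input = '<script>alert("xss")</script>'
safe = html.escape(user_input)  # escapes <, >, &, and quotes
print(safe)  # &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```

Flask's Jinja2 templates autoescape HTML by default; explicit escaping matters mainly when building responses by hand.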
## Production Deployment

The app currently starts the Flask development server:

```python
# In app.py
app.run(debug=True, host='localhost', port=5000)
```

Before deploying to production:

1. Disable debug mode:

   ```python
   DEBUG = False  # In config.py
   ```

2. Use a production WSGI server:

   ```bash
   pip install gunicorn
   gunicorn -w 4 -b 0.0.0.0:5000 app:app
   ```

3. Use HTTPS: run nginx or Apache as a reverse proxy with SSL/TLS.

4. Add authentication: implement Flask-Login or similar.

5. Add rate limiting: use Flask-Limiter or similar.

6. Validate all inputs: add comprehensive input validation.

7. Set secure headers: use Flask-Talisman or similar.
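The gunicorn invocation above can also live in a config file. A sketch with illustrative values (bind address, worker count, and timeout are assumptions, not project settings):

```python
# gunicorn.conf.py (illustrative values)
bind = "127.0.0.1:5000"  # bind locally; TLS terminates at the reverse proxy
workers = 4              # common rule of thumb: 2 * CPU cores + 1
timeout = 120            # allow slower model-inference requests
accesslog = "-"          # write access logs to stdout for monitoring
```

Start with `gunicorn -c gunicorn.conf.py app:app`.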
## Security Checklist

- Debug mode disabled
- Using production WSGI server (gunicorn, uWSGI)
- HTTPS enabled
- Authentication implemented
- Rate limiting enabled
- Input validation comprehensive
- Secure headers set
- File upload validation robust
- Logs monitored regularly
- Dependencies up to date
- Security updates applied
## Dependency Security

We use the following tools to keep dependencies secure:
- Dependabot: Automated dependency updates
- Safety: Python dependency security scanner
- pip-audit: Audit Python packages for known vulnerabilities
Run a security audit:

```bash
# Install safety
pip install safety

# Run audit
safety check

# Or use pip-audit
pip install pip-audit
pip-audit
```

## Input Validation

All user inputs should be validated:
```python
import re

from werkzeug.utils import secure_filename

# Sanitize filenames
filename = secure_filename(user_provided_filename)

# Validate project names
if not re.match(r'^[a-zA-Z0-9_-]+$', project_name):
    raise ValueError("Invalid project name")
```

### Path Traversal

```python
from pathlib import Path

from werkzeug.utils import secure_filename

# Prevent directory traversal
project_dir = Path('projects') / secure_filename(project_name)
if not project_dir.resolve().is_relative_to(Path('projects').resolve()):
    raise ValueError("Invalid path")
```

### SQL Injection

- Use parameterized queries
- Don't use string concatenation for SQL
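Parameterized queries pass values separately from the SQL text, so user input can never change the query's structure. A sketch using the stdlib `sqlite3` driver (the table and data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE projects (name TEXT)')
conn.execute('INSERT INTO projects VALUES (?)', ('demo',))

user_input = "demo' OR '1'='1"  # hostile input stays inert as a plain value

# Parameterized: the driver binds user_input as data, not SQL
rows = conn.execute(
    'SELECT name FROM projects WHERE name = ?', (user_input,)
).fetchall()
print(rows)  # [] -- the injection attempt matches nothing

# Never build queries by concatenation:
# conn.execute("SELECT name FROM projects WHERE name = '" + user_input + "'")
```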
### Model Loading

- Be careful loading untrusted models: `torch.load` uses pickle, which can execute arbitrary code
- Only load models you trust
- Safer alternative (when available): use `torch.jit.load` for TorchScript models

### Image Validation

- Validate image files; don't trust file extensions
- Use libraries to verify file content

## Reporting Non-Security Issues

For non-security issues, please use:
- Bugs: GitHub Issues
- Questions: GitHub Discussions
Thank you for helping keep Custom Image Classifier secure! 🔒