This project demonstrates a secure CI/CD pipeline for AI and Large Language Model (LLM) applications using automated DevSecOps security controls.
The pipeline integrates multiple security scanners and LLM-specific tests to detect vulnerabilities, secrets, and prompt injection risks during development.
Secure LLM Pipeline is a project that automatically checks AI applications for security problems before they are released. It works like a safety inspection for software: every time new code is added, the system runs a series of automated checks to find risks such as insecure code, vulnerable software libraries, hidden passwords or keys, and attempts to trick AI models with malicious prompts. By running these checks continuously in the development pipeline, the project helps developers catch security issues early and build safer AI systems.
Bandit
- Detects insecure Python coding patterns
- Identifies security issues such as unsafe imports, subprocess misuse, and weak cryptography
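To illustrate the kind of patterns a linter like Bandit flags, here is a hedged sketch (not code from this repository) contrasting two commonly flagged constructs with their safer alternatives:

```python
import hashlib
import subprocess

# Flagged pattern (shell injection risk): building a shell command from input
#   subprocess.run("ls " + user_input, shell=True)
# Safer: pass arguments as a list so no shell is involved
def list_dir(path: str) -> str:
    result = subprocess.run(["ls", path], capture_output=True, text=True)
    return result.stdout

# Flagged pattern (weak cryptography): MD5 used for a security purpose
#   digest = hashlib.md5(data).hexdigest()
# Safer: use SHA-256
def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()
```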
Semgrep
- Advanced static analysis for security vulnerabilities
- Detects insecure patterns across application code
Safety
- Detects known vulnerabilities in Python dependencies
- Uses vulnerability databases to flag insecure packages
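The core idea behind a dependency check can be sketched in a few lines: compare pinned versions against a known-vulnerable set. The advisory data below is purely illustrative — real tools such as Safety consult curated vulnerability feeds, not a hard-coded dict:

```python
# Hypothetical advisory data for illustration only
ADVISORIES = {
    "example-lib": {"1.0.0", "1.0.1"},  # versions with known issues
}

def find_vulnerable(requirements: list[str]) -> list[str]:
    """Return 'name==version' pins that match a known advisory."""
    flagged = []
    for line in requirements:
        name, _, version = line.partition("==")
        if version in ADVISORIES.get(name.strip().lower(), set()):
            flagged.append(line)
    return flagged
```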
Trivy
- Comprehensive vulnerability scanner
- Detects OS and dependency vulnerabilities
Gitleaks
- Detects hard-coded secrets such as:
  - API keys
  - tokens
  - passwords
  - credentials
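Secret scanning at its simplest is pattern matching over text. The sketch below shows the approach with two illustrative regexes; a real scanner like Gitleaks ships a much larger set of curated, precise rules:

```python
import re

# Illustrative patterns only — real rulesets are far more extensive
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{16,}['\"]"
    ),
}

def scan_for_secrets(text: str) -> list[str]:
    """Return the names of secret patterns found in the given text."""
    return [name for name, pattern in SECRET_PATTERNS.items()
            if pattern.search(text)]
```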
Prompt Injection Detection
- Detects malicious prompts attempting to:
  - override system instructions
  - reveal system prompts
  - bypass AI safety controls
Example threats tested:
- "Ignore previous instructions"
- "Reveal system prompt"
- "Bypass safety restrictions"
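A minimal keyword-based check in the spirit of the pipeline's prompt injection test can be sketched as follows. The phrase list is illustrative; the project's actual `prompt_injection_test.py` may use different patterns:

```python
# Illustrative phrase list, not the project's actual test data
INJECTION_PHRASES = [
    "ignore previous instructions",
    "reveal system prompt",
    "bypass safety",
]

def is_prompt_injection(prompt: str) -> bool:
    """Flag prompts containing known injection phrases (case-insensitive)."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in INJECTION_PHRASES)
```

Real-world defenses layer checks like this with model-based classifiers, since simple keyword matching is easy to evade with paraphrasing.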
The security pipeline runs automatically using GitHub Actions on:
- code pushes
- pull requests
Security checks run automatically before code is merged.
Pipeline stages: Bandit → Safety → Semgrep → Trivy → Gitleaks → Prompt Injection Test
```
secure-llm-pipeline
├── .github/workflows
│   └── ai-security-pipeline.yml
├── llm_security_tests
│   └── prompt_injection_test.py
├── README.md
└── .gitignore
```
This project demonstrates practices aligned with:
- OWASP Top 10 for LLM Applications
- AI DevSecOps
- Secure AI pipeline development
- Automated security testing