A secure chatbot built with LangChain + OpenAI that detects and blocks prompt injection attempts using rule-based guardrails.
- LLM chatbot with memory
- Simple keyword-based red team filter
- Logging of blocked attempts
- CLI interface (Streamlit version coming soon)
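The keyword-based filter and blocked-attempt logging above can be sketched as follows. This is a minimal illustration, not the project's actual code: the keyword list, function names (`is_injection`, `guarded_reply`), and the `llm_call` callback are hypothetical stand-ins for whatever the repository implements around its LangChain chain.

```python
import logging

# Hypothetical keyword list; the real project may match a different set of phrases.
INJECTION_KEYWORDS = [
    "ignore previous instructions",
    "disregard your system prompt",
    "reveal your system prompt",
]

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("guardrails")


def is_injection(user_input: str) -> bool:
    """Return True if the input contains any blocked keyword (case-insensitive)."""
    lowered = user_input.lower()
    return any(kw in lowered for kw in INJECTION_KEYWORDS)


def guarded_reply(user_input: str, llm_call) -> str:
    """Log and block flagged inputs; otherwise forward the prompt to the LLM."""
    if is_injection(user_input):
        logger.warning("Blocked prompt injection attempt: %r", user_input)
        return "Request blocked by guardrails."
    return llm_call(user_input)
```

In a LangChain chatbot, `llm_call` would wrap the chain invocation, so every user turn passes through the filter before reaching the model.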
- Planned: output validation
- Planned: Guardrails.ai or LLM-based detectors
- Planned: Streamlit UI
```shell
pip install -r requirements.txt
python chatbot_with_guardrails.py
```