Tueri is a comprehensive tool designed to fortify the security of Large Language Models (LLMs).
By offering sanitization, detection of harmful language, prevention of data leakage, and resistance against prompt injection attacks, Tueri ensures that your interactions with LLMs remain safe and secure.
Important Notes:
- Tueri is designed for easy integration and deployment in production environments. While it's ready to use out-of-the-box, please be informed that we're constantly improving and updating the repository.
- Base functionality requires a limited number of libraries. As you explore more advanced features, necessary libraries will be automatically installed.
- Ensure you're using Python version 3.9 or higher. Confirm with:
python --version. - Library installation issues? Consider upgrading pip:
python -m pip install --upgrade pip.
Examples:
- Deploy Tueri as API
- Anonymize
- BanCompetitors
- BanSubstrings
- BanTopics
- InvisibleText
- Language
- MaskCode
- PromptInjection
- Regex
- Secrets
- Sentiment
- TokenLimit
- BadURL
- BanCompetitors
- BanSubstrings
- BanTopics
- Bias
- Deanonymize
- FactualConsistency
- JSON
- Language
- LanguageSame
- MaskCode
- NoRefusal
- Regex
- Relevance
- Sensitive
- Sentiment
We learned the design and reused code from https://github.com/protectai/llm-guard