Open-source, enterprise-grade AI guardrails and a lightweight AI security gateway, with support for user-defined scanners and custom model training.
NudeDetect is a Python-based tool for detecting nudity and adult content in images. This project combines the capabilities of the NudeNet library, EasyOCR for text detection, and the Better Profanity library for identifying offensive language in text.
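A minimal sketch of how such a pipeline could be wired together, assuming the nudenet, easyocr, and better_profanity packages; the function name and structure here are illustrative, not NudeDetect's actual code.

```python
# Illustrative only: image nudity detection + OCR + profanity check in one pass.
# Assumes `pip install nudenet easyocr better_profanity`; not the repo's real code.
from nudenet import NudeDetector
import easyocr
from better_profanity import profanity


def scan_image(path: str) -> dict:
    # Detect nudity-related regions in the image.
    detections = NudeDetector().detect(path)

    # Extract any text embedded in the image and check it for profanity.
    ocr_results = easyocr.Reader(["en"]).readtext(path)
    extracted_text = " ".join(text for _, text, _ in ocr_results)
    profanity.load_censor_words()

    return {
        "nudity_detections": detections,
        "extracted_text": extracted_text,
        "has_profanity": profanity.contains_profanity(extracted_text),
    }


if __name__ == "__main__":
    print(scan_image("sample.jpg"))
```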
An intelligent task management assistant built with .NET, Next.js, Microsoft Agent Framework, AG-UI protocol, and Azure OpenAI, demonstrating Clean Architecture and autonomous AI agent capabilities
Step-by-step tutorial that teaches you how to use Azure AI Content Safety, the prebuilt AI service that helps ensure content sent to users is filtered to safeguard them from risky or undesirable outcomes.
🔍 Benchmark jailbreak resilience in LLMs with JailBench, yielding clear insights and stronger model defenses against jailbreak attempts.
Benchmark LLM jailbreak resilience across providers with standardized tests, adversarial mode, rich analytics, and a clean Web UI.
Production-Grade LLM Alignment Engine (TruthProbe + ADT)
Content moderation (text and image) in a social network demo
A Chrome extension that uses Claude AI to protect users under 18 from inappropriate content by analyzing webpage content in real-time.
Study Buddy is a user-friendly AI-powered web app that helps students generate safe, factual study notes and Q&A on any topic. It features user accounts, study history, and strong content safety filters—making learning interactive and secure.
Profanity checker for text moderation.
SentinelShield: Advanced AI content moderation combining Llama Prompt Guard 2, rule-based filtering, and real-time analysis. Protect your applications from harmful content, prompt injection attacks, and inappropriate material with sub-second response times.
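The general shape of this kind of layered filter, sketched with the Hugging Face transformers library; the Prompt Guard checkpoint name, label handling, and threshold below are assumptions, not SentinelShield's actual implementation.

```python
# Illustrative layering of a cheap rule-based filter with a prompt-injection
# classifier. Model ID and label names are assumptions; the Llama Prompt Guard
# checkpoint is gated and requires accepting Meta's license on Hugging Face.
import re
from transformers import pipeline

BLOCKLIST = [r"ignore (all )?previous instructions", r"reveal the system prompt"]

classifier = pipeline(
    "text-classification",
    model="meta-llama/Llama-Prompt-Guard-2-86M",  # assumed checkpoint name
)


def is_allowed(user_input: str) -> bool:
    # Layer 1: rule-based checks catch obvious injection phrasing instantly.
    if any(re.search(p, user_input, re.IGNORECASE) for p in BLOCKLIST):
        return False

    # Layer 2: the classifier scores whatever the rules let through.
    result = classifier(user_input)[0]
    return result["label"] == "BENIGN" or result["score"] < 0.5
```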
Context hygiene & risk adjudication for LLM pipelines: secrets, PII, prompt-injection, policy redaction & tokenization.
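A toy illustration of the redaction idea only (not this project's implementation): regex-based detection of a couple of common PII and secret patterns before text reaches the model. Real context-hygiene tooling uses far broader detectors.

```python
# Toy regex-based redaction; patterns and placeholder labels are illustrative.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "AWS_KEY": re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID shape
}


def redact(text: str) -> str:
    # Replace each detected span with a labeled placeholder.
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label}]", text)
    return text


print(redact("Contact ops@example.com, key AKIAABCDEFGHIJKLMNOP"))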
Impact Analyzer is a web app that helps you detect toxicity and analyze nuance in your writing before publishing, ensuring your content is respectful, clear, and aligned with your intent.
Azure AI Content Safety example using Python for text analysis.
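A minimal sketch of the kind of call such an example makes, assuming the azure-ai-contentsafety SDK; the environment variable names are placeholders you supply.

```python
# Illustrative Azure AI Content Safety text analysis.
# Assumes `pip install azure-ai-contentsafety`; endpoint/key names are placeholders.
import os
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)

response = client.analyze_text(AnalyzeTextOptions(text="Text to screen goes here."))

# Each category (Hate, SelfHarm, Sexual, Violence) comes back with a severity score.
for item in response.categories_analysis:
    print(item.category, item.severity)
```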
Technical presentations with hands-on demos
This tutorial demonstrates how to use the Google Cloud Natural Language API for text moderation. It provides a step-by-step guide to detecting and managing harmful content while promoting responsible AI practices.
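A short sketch of the moderation call the tutorial centers on, assuming the google-cloud-language client library and default application credentials; the input string is a placeholder.

```python
# Illustrative text moderation with the Google Cloud Natural Language API.
# Assumes `pip install google-cloud-language` and configured default credentials.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()
document = language_v1.Document(
    content="Text to moderate goes here.",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)

response = client.moderate_text(document=document)

# Each moderation category is returned with a confidence score between 0 and 1.
for category in response.moderation_categories:
    print(category.name, category.confidence)
```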
promptshield 🛡️: content safety for tech & social media.
Real-time speech-to-text system with toxic content detection and filtering. Transcribes live audio using multiple ASR options while automatically detecting and masking harmful language.
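As an illustration only, here is an offline version of the transcribe-then-mask idea, sketched with the openai-whisper and better_profanity packages; the repo's real pipeline is streaming and supports multiple ASR backends, so this is not its actual code.

```python
# Illustrative batch transcribe-then-mask pipeline, assuming
# `pip install openai-whisper better_profanity`.
import whisper
from better_profanity import profanity


def transcribe_and_mask(audio_path: str) -> str:
    model = whisper.load_model("base")           # small general-purpose ASR model
    text = model.transcribe(audio_path)["text"]  # full transcript as a string

    profanity.load_censor_words()
    return profanity.censor(text)                # replaces flagged words with ****


if __name__ == "__main__":
    print(transcribe_and_mask("meeting.wav"))
```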
A 3-tier diagnostic application designed for hands-on learning about securing AI systems across identity, network, application, and content safety domains.