An LLM-based pipeline to detect toxic speech.
- Highly configurable via YAML configuration files
- Multi-stage analysis with preparatory questions and configurable indicators
- Multiple interfaces: CLI, Python API, and interactive Gradio web UI
- Built-in result serialization for auditing and analysis
- Flexible model support: Compatible with OpenAI, Hugging Face, and other LangChain-supported providers
- Transparent reasoning: Get detailed explanations alongside toxicity verdicts
The Toxicity Detector is a configurable pipeline that uses a Large Language Model (LLM) to analyze a text and decide whether it contains toxic speech.
It supports two toxicity types out of the box:
- `personalized_toxicity`: toxic speech directed at a specific individual (insults, threats, harassment, …)
- `hatespeech`: group-based toxicity / hate speech (targeting groups or individuals because of group membership)
Both toxicity types are defined in the pipeline configuration file under the toxicities: section.
At a high level the pipeline works as follows:
- Preprocessing / preparatory analysis: the model answers "general questions" that help it interpret the input (e.g., who is targeted, irony/quotes/context).
- Indicator analysis: the model evaluates a set of configurable indicators (tasks) that represent typical forms of toxicity (e.g., threats, insults, victim shaming).
- Final decision: the pipeline aggregates these intermediate results and returns:
  - `contains_toxicity`: one of `true`, `false`, `unclear`
  - `analysis_result`: a human-readable explanation
The indicators and the phrasing of the model prompts are configurable via YAML.
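To make the output concrete, here is an illustrative sketch of the answer's shape; the field names and the three possible verdicts come from the description above, while the example values are invented:

```python
# Illustrative shape of the final answer (field names and allowed verdicts
# as documented above; the concrete values are invented).
example_answer = {
    'contains_toxicity': 'unclear',  # one of: true, false, unclear
    'analysis_result': (
        'The post mocks a named user, but neither the threat nor the '
        'insult indicators were clearly met.'
    ),
}
```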
The package requires Python 3.12 or higher. Install it from PyPI (e.g., with pip):

```bash
pip install toxicity-detector
```

You need a pipeline configuration (YAML) to run toxicity detection. This repo ships example configs in config/:
- `config/pipeline_config.yaml`: pipeline configuration used by the CLI and Python API
- `config/app_config.yaml`: configuration for the Gradio demo app (optional)
Start by copying the example files and adjusting them to your environment (models, API keys, storage paths).
API keys are referenced by name in the pipeline config (e.g., API_KEY_NAME) and are expected to be present as environment variables.
Create a .env file in the project root with the following variables:
```
# API Keys (using the names specified in the model config files)
API_KEY_NAME=your_api_key_value
```

Alternatively, you can set the environment variables in your shell/session (instead of using .env).
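If you are unsure whether the key is actually visible to your process, you can check from Python; here `API_KEY_NAME` is just the placeholder from the example above (use the name referenced via `api_key_name` in your model config):

```python
import os

# The variable name must match the api_key_name referenced in the pipeline
# config; 'API_KEY_NAME' is the placeholder used in this README.
if 'API_KEY_NAME' not in os.environ:
    raise RuntimeError('API key not set; export it or add it to your .env file')
```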
The simplest way to run toxicity detection from the command line (within the environment you installed the toxicity-detector package into):
```bash
# Basic usage
toxicity-detector detect \
  --text "Your text to analyze" \
  --pipeline-config ./config/pipeline_config.yaml

# With all options
toxicity-detector detect \
  --text "Your text to analyze" \
  --pipeline-config ./config/pipeline_config.yaml \
  --toxicity-type personalized_toxicity \
  --source "chat" \
  --context "Additional context here" \
  --save \
  --verbose
```

You can also call the detection pipeline directly from Python:

```python
from toxicity_detector import detect_toxicity, PipelineConfig
# Load pipeline configuration from YAML file
pipeline_config = PipelineConfig.from_file('./config/pipeline_config.yaml')
# The text to analyze for toxicity
input_text = 'Peter is dumb.'
# Run toxicity detection
result = detect_toxicity(
    input_text=input_text,                  # The text to be analyzed
    user_input_source=None,                 # Optional: identifier for the source of the input (e.g., 'chat', 'comment')
    toxicity_type='personalized_toxicity',  # Type of toxicity analysis to perform ('personalized_toxicity' or 'hatespeech')
    context_info=None,                      # Optional: additional context about the conversation or situation
    pipeline_config=pipeline_config,        # Configuration specifying model, paths, and behavior
    serialize_result=True,                  # If True, saves the result to disk as YAML
)
# Display the analysis result and toxicity verdict
print(result.answer['contains_toxicity'])
```

We also provide an example notebook that demonstrates how to run the toxicity detection pipeline with a Hugging Face API key.
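A minimal sketch of acting on the verdict; the field names follow the output description above, but whether the verdict arrives as a boolean or as one of the strings 'true'/'false'/'unclear' may depend on your setup, so adapt the comparisons accordingly:

```python
# Hypothetical post-processing of the result returned above.
verdict = result.answer['contains_toxicity']
explanation = result.answer['analysis_result']

if verdict == 'unclear':
    # Borderline cases can be routed to human review.
    print('Needs review:', explanation)
elif verdict in (True, 'true'):
    print('Toxic:', explanation)
else:
    print('Not toxic:', explanation)
```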
The project includes a Gradio web interface for interactive toxicity detection.
Run the app with one of the following commands:
```bash
# With app configuration file
toxicity-detector app --app-config ./config/app_config.yaml

# With pipeline configuration file (uses default app settings)
toxicity-detector app --pipeline-config ./config/pipeline_config.yaml

# With custom server settings
toxicity-detector app \
  --app-config ./config/app_config.yaml \
  --server-port 8080 \
  --share
```

The app will start and be accessible at http://localhost:7860 by default (or your specified port).
To enable developer mode with additional configuration options, update your config/app_config.yaml:
```yaml
developer_mode: true
```

Note: the configuration tab is only shown when `developer_mode: true`. If `force_agreement: true`, you must accept the agreement first.
Additional information about the different settings can be found in config/app_config.yaml.
The pipeline is configured via a YAML file that is loaded into the Pydantic model PipelineConfig.
- Config schema/model: `src/toxicity_detector/config.py` (class `PipelineConfig`)
- Main entry point: `src/toxicity_detector/backend.py` (`detect_toxicity(...)`)

Key sections in `config/pipeline_config.yaml`:

- Model selection: `used_chat_model` and the `models:` dictionary (provider/model/base_url + `api_key_name`)
- Storage: `local_serialization`, `local_base_path`, `result_data_path`, `log_path`, `subdirectory_construction`
- Toxicity definitions: `toxicities:` (currently `personalized_toxicity` and `hatespeech`)
  - Each toxicity type contains `tasks:`, which includes `prepatory_analysis.general_questions` and `indicator_analysis.*` (your indicator list)
- Prompts: prompt templates are configurable (see `prompt_templates` in the default pipeline config)
If you want to start from a known-good baseline, the package contains a default pipeline config with all default prompts here:
src/toxicity_detector/package_data/default_pipeline_config.yaml.
Additional information about the different settings can be found in config/pipeline_config.yaml.
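As a quick sanity check you can load the config in Python and inspect the active model and the configured toxicity types. This sketch assumes the YAML keys listed above map one-to-one onto `PipelineConfig` fields; adjust the attribute names to your actual schema:

```python
from toxicity_detector import PipelineConfig

config = PipelineConfig.from_file('./config/pipeline_config.yaml')

# Assumed field names, mirroring the YAML sections listed above.
print(config.used_chat_model)           # which entry of models: is active
print(list(config.toxicities.keys()))   # e.g. ['personalized_toxicity', 'hatespeech']
```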
High-level overview of the repository layout:
```
toxicity-detector/
├── config/                      # Configuration template files
│   ├── app_config.yaml          # Gradio app configuration (AppConfig)
│   └── pipeline_config.yaml     # Pipeline configuration (PipelineConfig)
├── src/
│   └── toxicity_detector/
│       ├── __init__.py
│       ├── app/                 # Gradio web interface (modularized)
│       │   ├── app.py
│       │   ├── app_config_loader.py
│       │   ├── agreement_tab.py
│       │   ├── config_tab.py
│       │   └── detection_tab.py
│       ├── backend.py           # Core detection logic (detect_toxicity)
│       ├── chains.py            # LangChain pipelines
│       ├── cli.py               # CLI entry point (toxicity-detector)
│       ├── config.py            # Pydantic config models
│       └── managers/            # Config and persistence utilities
├── pyproject.toml               # Project dependencies
└── README.md                    # This file
```
This project uses uv for dependency management.
- Python 3.12 or higher
- uv package manager
- Install uv (if not already installed).

- Clone the repository:

  ```bash
  git clone https://github.com/debatelab/toxicity-detector.git
  cd toxicity-detector
  ```

- Install dependencies:

  ```bash
  uv sync
  ```

  This will create a virtual environment and install all dependencies specified in `pyproject.toml`. If a `uv.lock` is present, `uv` will reproduce the environment specified in that file. If you want to start with a fresh environment and/or use other package versions, remove or update the `uv.lock` accordingly.

- Install development dependencies (optional):

  ```bash
  uv sync --group dev
  ```
Run all tests:

```bash
uv run pytest
```

Run tests with verbose output:

```bash
uv run pytest -v
```

Run a specific test file:

```bash
uv run pytest tests/test_config.py
```

Run tests with coverage report:

```bash
uv run pytest --cov=src/toxicity_detector
```

Alternative: using the activated virtual environment:
```bash
# Activate the virtual environment first
source .venv/bin/activate   # On Linux/Mac
# or
.venv\Scripts\activate      # On Windows

# Then run pytest directly
pytest tests/
pytest tests/test_config.py -v
```

To use Jupyter notebooks for development:
```bash
# Install dev dependencies if not already done
uv sync --group dev

# Start Jupyter
uv run jupyter notebook notebooks/
```

- LangChain: Workflow orchestration
- Gradio: Interactive web interface
- Pydantic: Data validation and configuration management
- Hugging Face: Model hosting and deployment
The Toxicity Detector was implemented as part of the project "Opportunities of AI to Strengthen Our Deliberative Culture" (KIdeKu), which was funded by the Federal Ministry of Education, Family Affairs, Senior Citizens, Women and Youth (BMBFSFJ).
This project is licensed under the MIT License. See LICENSE.
