Production-grade explainability for trustworthy AI systems
Features • Installation • Quick Start • Documentation • Examples • Contributing
XPLIA (eXplainable AI Library) is a comprehensive, production-ready explainability framework for AI/ML systems. Built for enterprise deployments, XPLIA provides:
- 8+ Explanation Methods: SHAP, LIME, Counterfactuals, Gradients, Anchors, and more
- Regulatory Compliance: GDPR, EU AI Act, HIPAA compliance tools built-in
- Trust Evaluation: Uncertainty quantification, fairwashing detection, confidence reports
- Framework Agnostic: Works with scikit-learn, TensorFlow, PyTorch, XGBoost, LightGBM, CatBoost
- Production Ready: REST API, Docker, Kubernetes, MLflow/W&B integration
- Enterprise Grade: Audit trails, compliance reports, multi-audience explanations
Perfect for: Financial services, Healthcare, Legal tech, Government, Any regulated industry
| Local Explanations | Global Explanations |
|---|---|
```python
from xplia.compliance import GDPRCompliance, AIActCompliance

# GDPR Right to Explanation
gdpr = GDPRCompliance(model)
dpia_report = gdpr.generate_dpia()  # PDF report ready for auditors

# EU AI Act Risk Assessment
ai_act = AIActCompliance(model, usage='credit_scoring')
risk = ai_act.assess_risk_category()  # Returns 'HIGH', 'MEDIUM', etc.
compliance_report = ai_act.generate_report()  # Full compliance documentation
```

Supported Regulations:
- ✅ GDPR - Right to explanation, DPIA generation
- ✅ EU AI Act - Risk assessment, documentation
- ✅ HIPAA - Healthcare compliance
- 🔜 SOC 2, ISO 27001 (Coming in v1.1)
Measure prediction confidence with 6 types of uncertainty:

```python
from xplia.explainers.trust import UncertaintyQuantifier

uq = UncertaintyQuantifier(model, explainer)
uncertainty = uq.quantify(X_test)
print(f"Epistemic uncertainty: {uncertainty.epistemic_uncertainty}")
print(f"Aleatoric uncertainty: {uncertainty.aleatoric_uncertainty}")
```

Detect deceptive explanations that hide bias:
```python
from xplia.explainers.trust import FairwashingDetector

detector = FairwashingDetector(model, explainer)
result = detector.detect(X_test, y_test)
if result.detected:
    print(f"⚠️ Fairwashing detected! Types: {result.fairwashing_types}")
    print(f"Severity: {result.severity}")
```

Detection Types:
- Feature masking
- Importance shift
- Bias hiding
- Cherry picking
- Threshold manipulation
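As a toy illustration of the "importance shift" idea above (a sketch, not XPLIA's internal algorithm), one can compare the normalized feature-importance shares a model reports on the full population against those reported on a protected subgroup, and flag features whose share moves sharply between the two:

```python
def importance_shift(imp_a, imp_b):
    """Per-feature absolute change in normalized importance share."""
    sa, sb = sum(imp_a), sum(imp_b)
    return [abs(a / sa - b / sb) for a, b in zip(imp_a, imp_b)]

overall  = [0.50, 0.30, 0.20]   # importances reported on the whole test set
subgroup = [0.10, 0.30, 0.60]   # importances reported on a protected subgroup

shift = importance_shift(overall, subgroup)
flagged = [i for i, s in enumerate(shift) if s > 0.25]  # simple fixed threshold
print(flagged)  # -> [0, 2]
```

A real detector would also account for sampling noise in the subgroup estimates; the fixed 0.25 threshold here is purely illustrative.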
```python
from xplia.visualizations import ChartGenerator

generator = ChartGenerator()

# 12+ chart types
generator.create_chart(
    chart_type='waterfall',       # bar, line, pie, heatmap, radar, sankey, etc.
    data=explanation.feature_importance,
    title='Feature Importance',
    theme='dark',                 # light, dark, corporate
    export='report.html'          # html, png, pdf, svg
)
```

Automatic explanation adaptation for different audiences:
```python
from xplia.explainers.calibration import AudienceAdapter

adapter = AudienceAdapter()

# Technical explanation for data scientists
tech_exp = adapter.adapt(explanation, audience='expert')

# Business-friendly explanation for executives
business_exp = adapter.adapt(explanation, audience='basic')

# Public explanation for end users
public_exp = adapter.adapt(explanation, audience='novice')
```

Audience Levels:
- 👨‍💼 Novice - General public
- 📊 Basic - Business stakeholders
- 🔬 Intermediate - Analysts
- 📈 Advanced - Data scientists
- 👨‍🔬 Expert - ML researchers
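The adaptation idea can be sketched in a few lines (a hypothetical stand-in for `AudienceAdapter`, not its actual implementation): the same attribution is rendered at different reading levels by swapping vocabulary and numeric detail.

```python
def render(feature, weight, audience):
    """Render one feature attribution for a given audience level."""
    if audience == "expert":
        # Full numeric detail for ML researchers
        return f"SHAP value for '{feature}': {weight:+.3f}"
    if audience == "basic":
        # Rounded, directional phrasing for business stakeholders
        direction = "raised" if weight > 0 else "lowered"
        return f"'{feature}' {direction} the score by {abs(weight):.0%}"
    # Novice: plain language, no numbers
    direction = "helped" if weight > 0 else "hurt"
    return f"Your {feature} {direction} this decision."

print(render("income", 0.12, "expert"))   # SHAP value for 'income': +0.120
print(render("income", 0.12, "basic"))    # 'income' raised the score by 12%
print(render("income", 0.12, "novice"))   # Your income helped this decision.
```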
```bash
pip install xplia
```

Includes: Core framework, basic visualizations, scikit-learn support

```bash
pip install xplia[full]
```

Includes everything: All XAI methods, deep learning, boosting, visualizations, APIs, ML Ops
```bash
# XAI methods only
pip install xplia[xai]

# Deep learning support
pip install xplia[pytorch]   # or tensorflow

# Gradient boosting
pip install xplia[boosting]

# Advanced visualizations
pip install xplia[viz]

# API integrations
pip install xplia[api]

# ML Ops (MLflow, W&B)
pip install xplia[mlops]

# Development tools
pip install xplia[dev]

# Combine multiple
pip install xplia[xai,pytorch,viz,mlops]
```

Install from source:

```bash
git clone https://github.com/nicolasseverino/xplia.git
cd xplia
pip install -e ".[full]"
```

Quick start:

```python
from xplia import create_explainer
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_iris

# Train a model
X, y = load_iris(return_X_y=True)
model = RandomForestClassifier()
model.fit(X, y)

# Create explainer (auto-detects the best method for your model)
explainer = create_explainer(model, method='shap')

# Generate explanations
explanation = explainer.explain(X[:5])

# Access results
print(explanation.feature_importance)
print(explanation.quality_metrics)
```

Advanced usage:

```python
from xplia import create_explainer, set_config
from xplia.compliance import GDPRCompliance, AIActCompliance
from xplia.explainers.trust import UncertaintyQuantifier, FairwashingDetector

# Configure XPLIA
set_config('verbosity', 'INFO')
set_config('n_jobs', -1)         # Use all CPU cores
set_config('cache_enabled', True)

# Create unified explainer (combines SHAP + LIME + Counterfactuals)
explainer = create_explainer(
    model,
    method='unified',
    methods=['shap', 'lime', 'counterfactual'],
    background_data=X_train
)

# Generate explanations
explanation = explainer.explain(X_test[:10])

# Check regulatory compliance
gdpr = GDPRCompliance(model, model_metadata={
    'name': 'Credit Scoring Model',
    'purpose': 'Loan approval',
    'legal_basis': 'legitimate_interest'
})
gdpr_report = gdpr.generate_dpia()
gdpr_report.export('gdpr_report.pdf')

ai_act = AIActCompliance(model, usage_intent='credit_scoring')
ai_act_report = ai_act.generate_compliance_report()

# Evaluate trust
uq = UncertaintyQuantifier(model, explainer)
uncertainty = uq.quantify(X_test)

detector = FairwashingDetector(model, explainer)
fairwashing = detector.detect(X_test, y_test)

# Generate comprehensive report
from xplia.visualizations import ChartGenerator
chart_gen = ChartGenerator()
chart_gen.create_dashboard(
    explanation,
    uncertainty=uncertainty,
    fairwashing=fairwashing,
    output='complete_report.html'
)
```

- Installation Guide - Detailed setup for all platforms
- Architecture - System design and patterns
- Plugin Development - Create custom explainers
- FAQ - Common questions and troubleshooting
- API Reference - Complete API documentation
- Explaining scikit-learn Models
- Explaining Deep Learning Models
- GDPR Compliance Workflow
- Production Deployment
```bash
python examples/loan_approval_system.py
```

Features:
- Model training and evaluation
- Multiple explanation methods
- GDPR and AI Act compliance
- Trust evaluation
- Audit trails
- Production-ready code
```bash
# See examples/healthcare_diagnosis.py
# See examples/fraud_detection.py
```

```python
from xplia.api import create_api_app

app = create_api_app(models={'my_model': model})
# Run with: uvicorn main:app --host 0.0.0.0 --port 8000
```

Endpoints:

- `POST /explain` - Generate explanations
- `POST /compliance` - Check compliance
- `POST /trust/evaluate` - Evaluate trust
- `GET /health` - Health check
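Once the server is running, calling the `/explain` endpoint is an ordinary JSON POST. The payload shape below (`model`, `instances`) is an assumption for illustration; check the API reference for the actual request/response schema.

```python
import json
import urllib.request

# Hypothetical request body for POST /explain
payload = {"model": "my_model", "instances": [[5.1, 3.5, 1.4, 0.2]]}

# Build the request without sending it; urllib.request.urlopen(req)
# would perform the call once the server is up on port 8000.
req = urllib.request.Request(
    "http://localhost:8000/explain",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(req.get_method(), req.full_url)  # POST http://localhost:8000/explain
```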
```python
from xplia.integrations.mlflow import XPLIAMLflowLogger

with XPLIAMLflowLogger(experiment_name="my_experiment") as logger:
    # Train model
    model.fit(X, y)

    # Log with explanations
    explainer = create_explainer(model)
    explanation = explainer.explain(X_test)
    logger.log_explanation(explanation)
```

```python
from xplia.integrations.wandb import XPLIAWandBContext

with XPLIAWandBContext(project="my-project") as logger:
    # Train and log
    model.fit(X, y)
    explainer = create_explainer(model)
    explanation = explainer.explain(X_test)
    logger.log_explanation(explanation)
```

```bash
# Build image
docker build -t xplia:latest .

# Run API server
docker run -p 8000:8000 xplia:latest

# Or use docker-compose
docker-compose up
```

```bash
# Deploy to Kubernetes
kubectl apply -f kubernetes/deployment.yaml

# Check status
kubectl get pods -l app=xplia

# Access API
kubectl port-forward svc/xplia-api-service 8000:80
```

Features:
- Horizontal Pod Autoscaling
- Health checks
- Persistent volumes
- Load balancing
```
User Interface (CLI, API, Notebooks)
                ↓
        Public API Layer
                ↓
         Core Framework
   (Factory, Registry, Config)
                ↓
┌───────────────────────────────────┐
│         Explanation Layer         │
│    (SHAP, LIME, Unified, etc.)    │
├───────────────────────────────────┤
│        Model Adapter Layer        │
│  (sklearn, TF, PyTorch, XGBoost)  │
├───────────────────────────────────┤
│     Compliance & Trust Layer      │
│    (GDPR, AI Act, Uncertainty)    │
├───────────────────────────────────┤
│        Visualization Layer        │
│   (Charts, Reports, Dashboards)   │
└───────────────────────────────────┘
```
Key Design Patterns:
- Adapter Pattern - Unified interface for all ML frameworks
- Factory Pattern - Dynamic explainer creation
- Registry Pattern - Component discovery and versioning
- Strategy Pattern - Runtime algorithm selection
See ARCHITECTURE.md for complete details.
```bash
# Run all tests
pytest tests/ -v

# Run with coverage
pytest tests/ --cov=xplia --cov-report=html

# Run specific test suite
pytest tests/explainers/ -v

# Run benchmarks
pytest tests/benchmarks/ -m benchmark
```

Test Coverage: 50%+ (6000+ lines of test code)
We welcome contributions! Please see:
- Contributing Guide - How to contribute
- Code of Conduct - Community standards
- Development Setup - Get started
```bash
# Fork and clone
git clone https://github.com/YOUR-USERNAME/xplia.git

# Create branch
git checkout -b feature/amazing-feature

# Make changes, add tests, update docs

# Run tests
pytest tests/ -v

# Commit and push
git commit -m "feat: Add amazing feature"
git push origin feature/amazing-feature

# Open Pull Request on GitHub
```

| Feature | XPLIA | SHAP | LIME | Alibi | InterpretML |
|---|---|---|---|---|---|
| Methods | 8+ | 1 | 1 | 5 | 2 |
| GDPR Compliance | ✅ | ❌ | ❌ | ❌ | ❌ |
| Fairwashing Detection | ✅ | ❌ | ❌ | ❌ | ❌ |
| Multi-Audience | ✅ | ❌ | ❌ | ❌ | ❌ |
| REST API | ✅ | ❌ | ❌ | ❌ | ❌ |
| Uncertainty | ✅ | ❌ | ❌ | ❌ | ❌ |
| Production Ready | ✅ | ❌ | ❌ | ❌ | ❌ |
| Test Coverage | 50%+ | 60%+ | 50%+ | 70%+ | 55%+ |
XPLIA Advantage: Only library with built-in compliance, fairwashing detection, and production deployment tools.
- Interactive web dashboard (React)
- Additional compliance (SOC 2, ISO 27001)
- Automated model monitoring
- Advanced fairness metrics
- Distributed computing (Spark, Dask)
- Quantum ML explainability
- Federated learning support
- AutoML integration
- Causal inference integration
- Time series explainability
- Graph neural network support
- Multi-language support (R, Julia)
XPLIA is licensed under the MIT License. See LICENSE for details.
- Documentation: https://xplia.readthedocs.io
- GitHub Issues: https://github.com/nicolasseverino/xplia/issues
- Discussions: https://github.com/nicolasseverino/xplia/discussions
- Email: contact@xplia.com
- Twitter: @XPLIALib
XPLIA is built on the shoulders of giants:
- SHAP by Scott Lundberg
- LIME by Marco Tulio Ribeiro
- Alibi by Seldon
- InterpretML by Microsoft
Special thanks to all contributors and the open-source community.
If you use XPLIA in your research, please cite:
```bibtex
@software{xplia2025,
  author = {Severino, Nicolas and contributors},
  title = {XPLIA: The Ultimate State-of-the-Art AI Explainability Library},
  url = {https://github.com/nicolasseverino/xplia},
  version = {1.0.0},
  year = {2025},
  doi = {10.5281/zenodo.XXXXXXX}
}
```

⭐ Star us on GitHub • 📦 Install Now • 📖 Read the Docs

Made with ❤️ by the XPLIA Team