# XPLIA: The Ultimate State-of-the-Art AI Explainability Library


Python 3.8+ License: MIT PyPI version Documentation CI/CD codecov Code style: black Downloads

Production-grade explainability for trustworthy AI systems

Features • Installation • Quick Start • Documentation • Examples • Contributing


## 🎯 What is XPLIA?

XPLIA (eXplainable AI Library) is a comprehensive, production-ready explainability framework for AI/ML systems. Built for enterprise deployments, XPLIA provides:

- **8+ explanation methods**: SHAP, LIME, counterfactuals, gradients, anchors, and more
- **Regulatory compliance**: built-in GDPR, EU AI Act, and HIPAA compliance tools
- **Trust evaluation**: uncertainty quantification, fairwashing detection, confidence reports
- **Framework agnostic**: works with scikit-learn, TensorFlow, PyTorch, XGBoost, LightGBM, CatBoost
- **Production ready**: REST API, Docker, Kubernetes, MLflow/W&B integration
- **Enterprise grade**: audit trails, compliance reports, multi-audience explanations

**Perfect for:** financial services, healthcare, legal tech, government, and any regulated industry


## ✨ Features

### 🔍 Comprehensive Explainability Methods

#### Local Explanations

- **SHAP** - best for tree models
- **LIME** - model-agnostic, fast
- **Gradients** - for neural networks
- **Counterfactuals** - "what-if" scenarios
- **Anchors** - rule-based explanations

#### Global Explanations

- **Feature Importance** - model-wide patterns
- **Partial Dependence** - feature effects
- **Unified Explainer** - combines multiple methods
- **Attention Maps** - for transformers
- **Model Summary** - complete overview
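All of these methods plug into a single `create_explainer` entry point (shown in the Quick Start below). As a hedged illustration of the kind of method auto-detection such an entry point can perform — the mapping here is a sketch of the idea, not XPLIA's actual dispatch logic:

```python
# Illustrative sketch only: pick a plausible default explanation method
# from the model's class name, mirroring the README's guidance
# (SHAP for trees, gradients for neural nets, LIME as fallback).
def pick_default_method(model) -> str:
    name = type(model).__name__.lower()
    if "forest" in name or "boost" in name or "tree" in name:
        return "shap"        # SHAP - best for tree models
    if "net" in name or "torch" in name or "keras" in name:
        return "gradients"   # gradients - for neural networks
    return "lime"            # LIME - model-agnostic fallback


class RandomForestClassifier:  # stand-in class so the sketch runs anywhere
    pass


print(pick_default_method(RandomForestClassifier()))  # shap
```

A real dispatcher would also inspect the fitted model's framework (sklearn, PyTorch, etc.), not just its class name.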

πŸ›οΈ Regulatory Compliance (Industry-First!)

from xplia.compliance import GDPRCompliance, AIActCompliance

# GDPR Right to Explanation
gdpr = GDPRCompliance(model)
dpia_report = gdpr.generate_dpia()  # PDF report ready for auditors

# EU AI Act Risk Assessment
ai_act = AIActCompliance(model, usage='credit_scoring')
risk = ai_act.assess_risk_category()  # Returns 'HIGH', 'MEDIUM', etc.
compliance_report = ai_act.generate_report()  # Full compliance documentation

Supported regulations:

- ✅ **GDPR** - right to explanation, DPIA generation
- ✅ **EU AI Act** - risk assessment, documentation
- ✅ **HIPAA** - healthcare compliance
- 🔜 SOC 2, ISO 27001 (coming in v1.1)

πŸ›‘οΈ Trust & Confidence Evaluation

Uncertainty Quantification

Measure prediction confidence with 6 types of uncertainty:

from xplia.explainers.trust import UncertaintyQuantifier

uq = UncertaintyQuantifier(model, explainer)
uncertainty = uq.quantify(X_test)

print(f"Epistemic uncertainty: {uncertainty.epistemic_uncertainty}")
print(f"Aleatoric uncertainty: {uncertainty.aleatoric_uncertainty}")
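For intuition, here is a toy, stdlib-only sketch of one common way epistemic uncertainty is estimated — as disagreement across an ensemble's predictions for the same input. This illustrates the concept only; it is not XPLIA's `UncertaintyQuantifier` implementation:

```python
import statistics

def epistemic_from_ensemble(member_predictions):
    """Toy epistemic uncertainty: population variance across the
    predictions that different ensemble members make for one input.
    High variance = the model family disagrees with itself."""
    return statistics.pvariance(member_predictions)

# Three ensemble members predicting a probability for the same input:
preds = [0.70, 0.75, 0.65]
print(round(epistemic_from_ensemble(preds), 6))
```

Aleatoric uncertainty, by contrast, comes from noise in the data itself and does not shrink as the ensemble grows.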

#### Fairwashing Detection (Unique to XPLIA!)

Detect deceptive explanations that hide bias:

```python
from xplia.explainers.trust import FairwashingDetector

detector = FairwashingDetector(model, explainer)
result = detector.detect(X_test, y_test)

if result.detected:
    print(f"⚠️ Fairwashing detected! Types: {result.fairwashing_types}")
    print(f"Severity: {result.severity}")
```

Detection types:

- Feature masking
- Importance shift
- Bias hiding
- Cherry picking
- Threshold manipulation
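To make one of these signals concrete, the sketch below flags an "importance shift" when the feature importances reported for a protected subgroup diverge sharply (in L1 distance) from the global importances. The distance metric and threshold here are illustrative assumptions, not XPLIA's actual detector:

```python
def importance_shift(global_imp, subgroup_imp, threshold=0.2):
    """Toy importance-shift check: L1 distance between two feature
    importance dicts, flagged when it exceeds a chosen threshold.
    Missing features count as importance 0.0."""
    features = set(global_imp) | set(subgroup_imp)
    l1 = sum(abs(global_imp.get(f, 0.0) - subgroup_imp.get(f, 0.0))
             for f in features)
    return l1, l1 > threshold

shift, flagged = importance_shift(
    {"income": 0.5, "age": 0.3, "zip": 0.2},   # importances shown globally
    {"income": 0.2, "age": 0.3, "zip": 0.5},   # importances for a subgroup
)
print(round(shift, 2), flagged)  # 0.6 True
```

A large shift toward a proxy feature like `zip` is exactly the pattern a fairwashed explanation would try to hide.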

### 🎨 Advanced Visualizations

```python
from xplia.visualizations import ChartGenerator

generator = ChartGenerator()

# 12+ chart types
generator.create_chart(
    chart_type='waterfall',  # bar, line, pie, heatmap, radar, sankey, etc.
    data=explanation.feature_importance,
    title='Feature Importance',
    theme='dark',  # light, dark, corporate
    export='report.html'  # html, png, pdf, svg
)
```

### 🌐 Multi-Audience Adaptation

Automatically adapt explanations to different audiences:

```python
from xplia.explainers.calibration import AudienceAdapter

adapter = AudienceAdapter()

# Technical explanation for data scientists
tech_exp = adapter.adapt(explanation, audience='expert')

# Business-friendly explanation for executives
business_exp = adapter.adapt(explanation, audience='basic')

# Public explanation for end users
public_exp = adapter.adapt(explanation, audience='novice')
```

Audience levels:

- 👨‍💼 **Novice** - general public
- 📊 **Basic** - business stakeholders
- 🔬 **Intermediate** - analysts
- 🎓 **Advanced** - data scientists
- 👨‍🔬 **Expert** - ML researchers

## 🚀 Installation

### Basic Installation (lightweight, ~200 MB)

```bash
pip install xplia
```

Includes: core framework, basic visualizations, scikit-learn support

### Full Installation (recommended, ~2 GB)

```bash
pip install xplia[full]
```

Includes everything: all XAI methods, deep learning, boosting, visualizations, APIs, MLOps

### Custom Installation (choose what you need)

```bash
# XAI methods only
pip install xplia[xai]

# Deep learning support
pip install xplia[pytorch]  # or tensorflow

# Gradient boosting
pip install xplia[boosting]

# Advanced visualizations
pip install xplia[viz]

# API integrations
pip install xplia[api]

# MLOps (MLflow, W&B)
pip install xplia[mlops]

# Development tools
pip install xplia[dev]

# Combine multiple extras
pip install xplia[xai,pytorch,viz,mlops]
```

### From Source (latest development)

```bash
git clone https://github.com/nicolasseverino/xplia.git
cd xplia
pip install -e ".[full]"
```

## ⚡ Quick Start

### 30-Second Example

```python
from xplia import create_explainer
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_iris

# Train a model
X, y = load_iris(return_X_y=True)
model = RandomForestClassifier()
model.fit(X, y)

# Create an explainer (auto-detects the best method for your model)
explainer = create_explainer(model, method='shap')

# Generate explanations
explanation = explainer.explain(X[:5])

# Access results
print(explanation.feature_importance)
print(explanation.quality_metrics)
```

### Complete Production Example

```python
# Assumes model, X_train, X_test, and y_test come from your own training pipeline.
from xplia import create_explainer, set_config
from xplia.compliance import GDPRCompliance, AIActCompliance
from xplia.explainers.trust import UncertaintyQuantifier, FairwashingDetector
from xplia.visualizations import ChartGenerator

# Configure XPLIA
set_config('verbosity', 'INFO')
set_config('n_jobs', -1)  # Use all CPU cores
set_config('cache_enabled', True)

# Create a unified explainer (combines SHAP + LIME + counterfactuals)
explainer = create_explainer(
    model,
    method='unified',
    methods=['shap', 'lime', 'counterfactual'],
    background_data=X_train
)

# Generate explanations
explanation = explainer.explain(X_test[:10])

# Check regulatory compliance
gdpr = GDPRCompliance(model, model_metadata={
    'name': 'Credit Scoring Model',
    'purpose': 'Loan approval',
    'legal_basis': 'legitimate_interest'
})
gdpr_report = gdpr.generate_dpia()
gdpr_report.export('gdpr_report.pdf')

ai_act = AIActCompliance(model, usage_intent='credit_scoring')
ai_act_report = ai_act.generate_compliance_report()

# Evaluate trust
uq = UncertaintyQuantifier(model, explainer)
uncertainty = uq.quantify(X_test)

detector = FairwashingDetector(model, explainer)
fairwashing = detector.detect(X_test, y_test)

# Generate a comprehensive report
chart_gen = ChartGenerator()
chart_gen.create_dashboard(
    explanation,
    uncertainty=uncertainty,
    fairwashing=fairwashing,
    output='complete_report.html'
)
```

## 📚 Documentation

### Comprehensive Guides

### Tutorials


## 💡 Examples

### Real-World Use Cases

#### 1. Loan Approval System (Complete Example)

```bash
python examples/loan_approval_system.py
```

Features:

- Model training and evaluation
- Multiple explanation methods
- GDPR and AI Act compliance
- Trust evaluation
- Audit trails
- Production-ready code

#### 2. Healthcare Diagnosis Explainability

See `examples/healthcare_diagnosis.py`.

#### 3. Fraud Detection with Counterfactuals

See `examples/fraud_detection.py`.

### API Integration

#### REST API with FastAPI

```python
from xplia.api import create_api_app

app = create_api_app(models={'my_model': model})

# Run with: uvicorn main:app --host 0.0.0.0 --port 8000
```

Endpoints:

- `POST /explain` - generate explanations
- `POST /compliance` - check compliance
- `POST /trust/evaluate` - evaluate trust
- `GET /health` - health check
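A client could call `POST /explain` along the lines of the stdlib-only sketch below. The JSON field names (`model`, `method`, `instances`) are assumptions for illustration — the authoritative request schema is the one in the generated FastAPI docs (`/docs` on the running server):

```python
import json
from urllib import request

# Hypothetical request body; field names are illustrative assumptions.
payload = json.dumps({
    "model": "my_model",
    "method": "shap",
    "instances": [[5.1, 3.5, 1.4, 0.2]],
}).encode("utf-8")

req = request.Request(
    "http://localhost:8000/explain",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# With the API server running, request.urlopen(req) would send it
# and return the JSON explanation.
print(json.loads(payload)["method"])  # shap
```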

#### MLflow Integration

```python
from xplia.integrations.mlflow import XPLIAMLflowLogger

with XPLIAMLflowLogger(experiment_name="my_experiment") as logger:
    # Train the model
    model.fit(X, y)

    # Log it along with its explanations
    explainer = create_explainer(model)
    explanation = explainer.explain(X_test)
    logger.log_explanation(explanation)
```

#### Weights & Biases Integration

```python
from xplia.integrations.wandb import XPLIAWandBContext

with XPLIAWandBContext(project="my-project") as logger:
    # Train and log
    model.fit(X, y)

    explainer = create_explainer(model)
    explanation = explainer.explain(X_test)
    logger.log_explanation(explanation)
```

## 🐳 Deployment

### Docker

```bash
# Build the image
docker build -t xplia:latest .

# Run the API server
docker run -p 8000:8000 xplia:latest

# Or use docker-compose
docker-compose up
```

### Kubernetes

```bash
# Deploy to Kubernetes
kubectl apply -f kubernetes/deployment.yaml

# Check status
kubectl get pods -l app=xplia

# Access the API
kubectl port-forward svc/xplia-api-service 8000:80
```

Features:

- Horizontal Pod Autoscaling
- Health checks
- Persistent volumes
- Load balancing

πŸ—οΈ Architecture

High-Level Overview

User Interface (CLI, API, Notebooks)
            ↓
    Public API Layer
            ↓
     Core Framework
    (Factory, Registry, Config)
            ↓
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  Explanation Layer                β”‚
β”‚  (SHAP, LIME, Unified, etc.)     β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚  Model Adapter Layer              β”‚
β”‚  (sklearn, TF, PyTorch, XGBoost) β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚  Compliance & Trust Layer         β”‚
β”‚  (GDPR, AI Act, Uncertainty)     β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚  Visualization Layer              β”‚
β”‚  (Charts, Reports, Dashboards)    β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Key design patterns:

- **Adapter pattern** - unified interface for all ML frameworks
- **Factory pattern** - dynamic explainer creation
- **Registry pattern** - component discovery and versioning
- **Strategy pattern** - runtime algorithm selection
See ARCHITECTURE.md for complete details.


## 🧪 Testing

```bash
# Run all tests
pytest tests/ -v

# Run with coverage
pytest tests/ --cov=xplia --cov-report=html

# Run a specific test suite
pytest tests/explainers/ -v

# Run benchmarks
pytest tests/benchmarks/ -m benchmark
```

Test coverage: 50%+ (6,000+ lines of test code)


## 🤝 Contributing

We welcome contributions!

### Quick Contribution Workflow

```bash
# Fork and clone
git clone https://github.com/YOUR-USERNAME/xplia.git

# Create a branch
git checkout -b feature/amazing-feature

# Make changes, add tests, update docs

# Run the tests
pytest tests/ -v

# Commit and push
git commit -m "feat: Add amazing feature"
git push origin feature/amazing-feature

# Open a Pull Request on GitHub
```

## 📊 Comparison with Other Libraries

| Feature | XPLIA | SHAP | LIME | Alibi | InterpretML |
|---|---|---|---|---|---|
| Methods | 8+ | 1 | 1 | 5 | 2 |
| GDPR compliance | ✅ | ❌ | ❌ | ❌ | ❌ |
| Fairwashing detection | ✅ | ❌ | ❌ | ❌ | ❌ |
| Multi-audience | ✅ | ❌ | ❌ | ❌ | ❌ |
| REST API | ✅ | ❌ | ❌ | ❌ | ❌ |
| Uncertainty | ✅ | ❌ | ❌ | ✅ | ❌ |
| Production ready | ✅ | ⚠️ | ⚠️ | ✅ | ⚠️ |
| Test coverage | 50%+ | 60%+ | 50%+ | 70%+ | 55%+ |

XPLIA advantage: the only library with built-in compliance, fairwashing detection, and production deployment tools.


πŸ—ΊοΈ Roadmap

v1.1.0 (Q1 2026)

  • Interactive web dashboard (React)
  • Additional compliance (SOC 2, ISO 27001)
  • Automated model monitoring
  • Advanced fairness metrics

v1.2.0 (Q2 2026)

  • Distributed computing (Spark, Dask)
  • Quantum ML explainability
  • Federated learning support
  • AutoML integration

v2.0.0 (Q3 2026)

  • Causal inference integration
  • Time series explainability
  • Graph neural network support
  • Multi-language support (R, Julia)

## 📜 License

XPLIA is licensed under the MIT License. See LICENSE for details.


## 📞 Support & Community


πŸ™ Acknowledgments

XPLIA is built on the shoulders of giants:

  • SHAP by Scott Lundberg
  • LIME by Marco Tulio Ribeiro
  • Alibi by Seldon
  • InterpretML by Microsoft

Special thanks to all contributors and the open-source community.


## 📖 Citation

If you use XPLIA in your research, please cite:

```bibtex
@software{xplia2025,
  author = {Severino, Nicolas and contributors},
  title = {XPLIA: The Ultimate State-of-the-Art AI Explainability Library},
  url = {https://github.com/nicolasseverino/xplia},
  version = {1.0.0},
  year = {2025},
  doi = {10.5281/zenodo.XXXXXXX}
}
```

⭐ Star us on GitHub • 📦 Install Now • 📖 Read the Docs

Made with ❤️ by the XPLIA Team
