
Native XAI Support with SHAP Values and LIME Explanations in PMML #465

@olowonosmall

Description:
Integrate explainable AI techniques directly into PMML exports by embedding SHAP
values, LIME explanations, and other interpretability artifacts. This would make
models self-documenting and enable downstream systems to provide explanations
without requiring the original Python environment.

Technical Scope:

  • SHAP value computation and embedding in PMML for each feature
  • LIME local explanation generation and export
  • Feature importance scores with confidence intervals
  • Partial dependence plots (PDP) and Individual Conditional Expectation (ICE) data
  • Counterfactual explanation generation and export
  • Anchors and rule-based explanations for complex models
  • Attention weights for neural network models
  • Global and local explanation metadata in PMML MiningSchema
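As a concrete sketch of the first and last bullets, the snippet below computes exact SHAP values for a linear model (where φᵢ = wᵢ·(xᵢ − E[xᵢ])) and stores them as PMML `Extension` elements under each `MiningField`. The `extender`/`name` attributes ("xai", "SHAPValue") are hypothetical placeholders, since PMML currently has no standard slot for explanation data; the general-purpose `Extension` element is just one plausible carrier.

```python
# Sketch: exact SHAP values for a linear model (phi_i = w_i * (x_i - mean_i))
# embedded as PMML <Extension> elements. The Extension attributes ("xai",
# "SHAPValue") are hypothetical -- the PMML schema has no standard slot
# for explanation data yet.
import xml.etree.ElementTree as ET

def linear_shap(weights, x, background_means):
    """Exact SHAP values for a linear model with independent features."""
    return {f: w * (x[f] - background_means[f]) for f, w in weights.items()}

def embed_shap_in_pmml(shap_values):
    """Build a minimal PMML skeleton with a per-feature SHAP Extension."""
    pmml = ET.Element("PMML", version="4.4")
    model = ET.SubElement(pmml, "RegressionModel", functionName="regression")
    schema = ET.SubElement(model, "MiningSchema")
    for feature, phi in shap_values.items():
        field = ET.SubElement(schema, "MiningField", name=feature)
        ET.SubElement(field, "Extension",
                      extender="xai", name="SHAPValue", value=f"{phi:.6f}")
    return ET.tostring(pmml, encoding="unicode")

weights = {"age": 0.5, "income": 0.002}
means = {"age": 40.0, "income": 50000.0}
x = {"age": 50.0, "income": 60000.0}

phi = linear_shap(weights, x, means)
print(phi)                       # {'age': 5.0, 'income': 20.0}
print(embed_shap_in_pmml(phi))
```

A real implementation would compute the values with a library such as `shap` and would need the schema extension discussed under Implementation Challenges, but the XML shape above illustrates how a downstream PMML consumer could read explanations without the original Python environment.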

Implementation Challenges:

  • Extending PMML schema to accommodate explanation data
  • Balancing model size with explanation detail
  • Ensuring explanation accuracy across different PMML consumers
  • Computing explanations efficiently for large models
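On the size-versus-detail trade-off, one plausible scheme (an assumption, not part of any PMML spec) is to embed only the top-k features by absolute SHAP value and fold the remainder into a single residual term, which keeps the values additive while bounding the per-prediction payload:

```python
# Sketch of trading explanation detail for PMML size: keep only the top-k
# features by |SHAP value| and collapse the rest into one residual entry.
# The "top-k plus residual" scheme is an assumption, not part of any PMML spec.
def compress_explanation(shap_values, k=2, precision=3):
    ranked = sorted(shap_values.items(), key=lambda kv: abs(kv[1]), reverse=True)
    kept = {f: round(v, precision) for f, v in ranked[:k]}
    residual = round(sum(v for _, v in ranked[k:]), precision)
    if residual:
        kept["_residual"] = residual  # preserves additivity of the SHAP values
    return kept

full = {"age": 5.0, "income": 20.0, "tenure": 0.3, "region": -0.1}
print(compress_explanation(full, k=2))
# {'income': 20.0, 'age': 5.0, '_residual': 0.2}
```

Rounding precision and k could be exporter options, letting regulated deployments choose full detail while size-sensitive ones embed only the dominant features.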

Use Cases:

  • Regulated industries requiring model explainability (finance, healthcare)
  • Customer-facing applications needing prediction justifications
  • Model debugging and validation workflows
  • Compliance with AI transparency regulations (EU AI Act, GDPR)
