Evaluates Unity Catalog governance best practices across 24 checks in 7 categories.
- Import this project to your Databricks workspace
- Run the `governance_analysis.ipynb` notebook with optional parameters:
  - `catalog_name` (default: `main`) - Target catalog for the results table
  - `schema_name` (default: `default`) - Target schema for the results table
When running the notebook interactively, widgets will appear at the top:
- Set `catalog_name` to your target catalog (default: `main`)
- Set `schema_name` to your target schema (default: `default`)
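As a rough sketch, the widget values might be read like this; `get_params` and its fallback behavior are illustrative helpers, not the project's actual code (in a Databricks notebook, `dbutils.widgets.get` returns the values set above):

```python
# Defaults mirror the widget defaults documented above.
DEFAULTS = {"catalog_name": "main", "schema_name": "default"}

def get_params(dbutils=None):
    """Return notebook parameters, falling back to defaults outside Databricks."""
    if dbutils is None:
        # Not running in a notebook (or widgets unavailable): use defaults.
        return dict(DEFAULTS)
    return {name: dbutils.widgets.get(name) or default
            for name, default in DEFAULTS.items()}

params = get_params()
results_table = f"{params['catalog_name']}.{params['schema_name']}.governance_results"
# results_table -> "main.default.governance_results" when defaults apply
```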
Pass parameters when running as a job:

```python
dbutils.notebook.run(
    "governance_analysis",
    timeout_seconds=600,
    arguments={"catalog_name": "my_catalog", "schema_name": "my_schema"}
)
```

- Results saved to the `{catalog}.{schema}.governance_results` Delta table
- Summary by category with pass rates and scores
- Overall governance score calculation
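To illustrate how the summary and overall score could be derived, here is a minimal sketch over rows shaped like the results table; the exact column set and scoring formula are assumptions, not the project's actual implementation:

```python
from collections import defaultdict

# Example rows with the assumed category/status/score columns.
sample_results = [
    {"category": "Identity",   "status": "PASS", "score": 100},
    {"category": "Identity",   "status": "FAIL", "score": 0},
    {"category": "Privileges", "status": "PASS", "score": 100},
]

def summarize(rows):
    """Group results by category, then compute pass rates and scores."""
    by_cat = defaultdict(list)
    for row in rows:
        by_cat[row["category"]].append(row)
    summary = {
        cat: {
            "pass_rate": sum(r["status"] == "PASS" for r in rs) / len(rs),
            "avg_score": sum(r["score"] for r in rs) / len(rs),
        }
        for cat, rs in by_cat.items()
    }
    overall = sum(r["score"] for r in rows) / len(rows)
    return summary, overall

summary, overall_score = summarize(sample_results)
```

Here `summary["Identity"]["pass_rate"]` is 0.5 (one of two checks passed) and the overall score is the mean score across all checks.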
The notebook automatically deploys a Lakeview Dashboard to /Shared/Governance/ with:
- Total Checks Counter: Number of governance checks performed
- Total Score Counter: Aggregated governance score
- Pass Rate Counter: Percentage of checks that passed
- Average Score by Category: Bar chart showing performance across categories
- Status Distribution: Pie chart showing pass/fail distribution
- Detailed Results Table: Full governance check results
If automatic dashboard creation fails, you can manually import:
- Go to SQL Workspace → Dashboards → Import Dashboard
- Upload `dashboard_template.lvdash.json`
- Update the dataset query to point to your table: `{catalog}.{schema}.governance_results`
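When updating the dataset query by hand, the placeholders are substituted with your catalog and schema names; a small sketch (the catalog/schema values here are examples):

```python
# Example values; replace with the catalog and schema you ran the notebook against.
catalog, schema = "my_catalog", "my_schema"
dataset_query = f"SELECT * FROM {catalog}.{schema}.governance_results"
```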
The checks cover the following categories:
- Metastore setup (2 checks)
- Identity (7 checks)
- Managed Storage (5 checks)
- Compute/Cluster Policy (1 check)
- Migration Completeness (3 checks)
- Audit & Lineage Coverage (3 checks)
- Privileges (3 checks)
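A single check in one of these categories might produce one result row; the function name, input shape, and fields below are assumptions for illustration, not the actual interface of `governance_analyzer.py`:

```python
def check_metastore_assigned(workspace_info):
    """Hypothetical check: verify the workspace has a metastore assigned."""
    passed = workspace_info.get("metastore_id") is not None
    return {
        "category": "Metastore setup",
        "check": "Metastore assigned to workspace",
        "status": "PASS" if passed else "FAIL",
        "score": 100 if passed else 0,
    }

row = check_metastore_assigned({"metastore_id": "abc-123"})
```

Structuring each check as a pure function returning a uniform row makes the results easy to union into the Delta table and aggregate by category.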
Project files:
- `governance_analyzer.py`: Core governance check functions and dashboard deployment
- `governance_analysis.ipynb`: Main notebook to run checks and deploy the dashboard
- `dashboard_template.lvdash.json`: Lakeview dashboard definition
- `README.md`: This documentation