monitoring/fabric-unified-admin-monitoring/README.md (2 additions & 0 deletions)

@@ -38,6 +38,7 @@ FUAM extracts the following data from the tenant:
- Tenant meta data (Scanner API)
- Capacity Refreshables
- Git Connections
- **Feature Releases & Preview Tracking** (NEW)
- Engine level insights (coming soon in optimization module)


@@ -104,6 +105,7 @@ The FUAM solution accelerator template **is not an official Microsoft service**.
- [Documentation - FUAM Architecture](/monitoring/fabric-unified-admin-monitoring/media/documentation/FUAM_Architecture.md)
- [Documentation - FUAM Lakehouse table lineage](/monitoring/fabric-unified-admin-monitoring/media/documentation/FUAM_Documentation_Lakehouse_table_lineage.pdf)
- [Documentation - FUAM Engine level analyzer reports](/monitoring/fabric-unified-admin-monitoring/media/documentation/FUAM_Engine_Level_Analyzer_Reports.md)
- [Documentation - FUAM Feature Release Tracking](/monitoring/fabric-unified-admin-monitoring/media/documentation/Feature_Release_Tracking_Documentation.md)

##### Some other Fabric Toolbox assets
- [Overview - Fabric Cost Analysis](/monitoring/fabric-cost-analysis/README.md)

@@ -190,5 +190,17 @@
{
"name": "FUAM_Gateway_Monitoring_From_Files_Report.Report",
"fuam_id": "3695cd0e-da7e-3b40-ad28-5bd1bcd33eb6"
},
{
"name": "01_Setup_Feature_Tracking.Notebook",
"fuam_id": "f8a2b1c3-4d5e-6f7a-8b9c-0d1e2f3a4b5c"
},
{
"name": "02_Load_Feature_Tracking.Notebook",
"fuam_id": "a1b2c3d4-5e6f-7a8b-9c0d-1e2f3a4b5c6d"
},
{
"name": "Load_Feature_Tracking_E2E.DataPipeline",
"fuam_id": "b2c3d4e5-6f7a-8b9c-0d1e-2f3a4b5c6d7e"
}
]
@@ -0,0 +1,12 @@
{
"$schema": "https://developer.microsoft.com/json-schemas/fabric/gitIntegration/platformProperties/2.0.0/schema.json",
"metadata": {
"type": "Notebook",
"displayName": "Feature_Tracking_Setup",
"description": "One-time setup for feature tracking tables and views"
},
"config": {
"version": "2.0",
"logicalId": "00000000-0000-0000-0000-000000000000"
}
}
@@ -0,0 +1,323 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "1a0fcaaa",
"metadata": {},
"source": [
"# Feature Tracking - Setup Tables and Views\n",
"\n",
"**Purpose**: One-time setup for feature tracking tables and views\n",
"\n",
"**What this creates**:\n",
"- βœ… `feature_releases_roadmap` - Feature releases from Fabric GPS API (with roadmap)\n",
"- βœ… `preview_features_active` - Detected activated preview features\n",
"- βœ… `feature_alerts` - Alerts for new/risky preview features\n",
"- βœ… Helper SQL views for easy querying"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c5c91aa2",
"metadata": {},
"outputs": [],
"source": [
"from datetime import datetime\n",
"import pandas as pd"
Comment on lines +26 to +27 (Copilot AI, Nov 22, 2025)

[nitpick] Missing import statement. The notebook uses spark.sql() and spark.read.format() extensively but never imports or initializes the Spark session. While this may work in the Fabric notebook environment, where spark is pre-initialized, it is best practice to include a comment indicating this dependency on the pre-configured environment or to explicitly show that the Spark session is available.

Suggested change:
-  "from datetime import datetime\n",
-  "import pandas as pd"
+  "# NOTE: The Spark session (`spark`) is pre-initialized by the Fabric notebook environment.\n",
+  "from datetime import datetime\n",
+  "import pandas as pd"
Comment on lines +26 to +27 (Copilot AI, Nov 22, 2025)

Unused import. The pandas library is imported on line 27 but never used in the notebook. This import should be removed to keep the code clean and avoid unnecessary dependencies.

Suggested change:
-  "from datetime import datetime\n",
-  "import pandas as pd"
+  "from datetime import datetime\n"
]
},
{
"cell_type": "markdown",
"id": "505200d2",
"metadata": {},
"source": [
"## Step 1: Create `feature_releases_roadmap` Table"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6122dce7",
"metadata": {},
"outputs": [],
"source": [
"print(\"πŸ”„ Creating table: feature_releases_roadmap\")\n",
"print(\"=\" * 70)\n",
"\n",
"spark.sql(\"\"\"\n",
" CREATE TABLE IF NOT EXISTS feature_releases_roadmap (\n",
" feature_id STRING NOT NULL,\n",
" feature_name STRING NOT NULL,\n",
" feature_description STRING,\n",
" workload STRING,\n",
" product_name STRING,\n",
" release_date TIMESTAMP,\n",
" release_type STRING,\n",
" release_status STRING,\n",
" is_preview BOOLEAN NOT NULL,\n",
" is_planned BOOLEAN NOT NULL,\n",
" is_shipped BOOLEAN NOT NULL,\n",
" last_modified TIMESTAMP NOT NULL,\n",
" source_url STRING,\n",
" source STRING,\n",
" extracted_date TIMESTAMP NOT NULL\n",
" )\n",
" USING DELTA\n",
"\"\"\")\n",
"\n",
"print(\"βœ… Table created: feature_releases_roadmap\")\n",
"print(\" Schema: 15 columns\")\n",
"print(\" πŸ’‘ Includes planned/future features and historical tracking\")"
]
},
{
"cell_type": "markdown",
"id": "49385cd0",
"metadata": {},
"source": [
"## Step 2: Create `preview_features_active` Table"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e3dcfc16",
"metadata": {},
"outputs": [],
"source": [
"print(\"\\nπŸ”„ Creating table: preview_features_active\")\n",
"print(\"=\" * 70)\n",
"\n",
"spark.sql(\"\"\"\n",
" CREATE TABLE IF NOT EXISTS preview_features_active (\n",
" setting_name STRING NOT NULL,\n",
" feature_id STRING NOT NULL,\n",
" feature_name STRING NOT NULL,\n",
" workload STRING,\n",
" similarity_score DOUBLE NOT NULL,\n",
" is_enabled BOOLEAN NOT NULL,\n",
" delegate_to_tenant BOOLEAN,\n",
" detected_date TIMESTAMP NOT NULL,\n",
" release_date TIMESTAMP,\n",
" release_status STRING,\n",
" source_url STRING,\n",
" days_since_release INT\n",
" )\n",
" USING DELTA\n",
"\"\"\")\n",
"\n",
"print(\"βœ… Table created: preview_features_active\")\n",
"print(\" Schema: 12 columns\")"
]
},
{
"cell_type": "markdown",
"id": "b8337a78",
"metadata": {},
"source": [
"## Step 3: Create `feature_alerts` Table"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "66383382",
"metadata": {},
"outputs": [],
"source": [
"print(\"\\nπŸ”„ Creating table: feature_alerts\")\n",
"print(\"=\" * 70)\n",
"\n",
"spark.sql(\"\"\"\n",
" CREATE TABLE IF NOT EXISTS feature_alerts (\n",
" alert_id STRING NOT NULL,\n",
" feature_id STRING NOT NULL,\n",
" feature_name STRING NOT NULL,\n",
" workload STRING,\n",
" alert_type STRING NOT NULL,\n",
" severity STRING NOT NULL,\n",
" message STRING NOT NULL,\n",
" setting_name STRING,\n",
" similarity_score DOUBLE,\n",
" days_since_release INT,\n",
" alert_date TIMESTAMP NOT NULL,\n",
" acknowledged BOOLEAN NOT NULL,\n",
" acknowledged_date TIMESTAMP,\n",
" acknowledged_by STRING\n",
" )\n",
" USING DELTA\n",
"\"\"\")\n",
"\n",
"print(\"βœ… Table created: feature_alerts\")\n",
"print(\" Schema: 14 columns\")"
]
},
{
"cell_type": "markdown",
"id": "21110f70",
"metadata": {},
"source": [
"## Step 4: Create Helper SQL Views"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3189b548",
"metadata": {},
"outputs": [],
"source": [
"print(\"\\nπŸ”„ Creating helper SQL views...\")\n",
"print(\"=\" * 70)\n",
"\n",
"# View 1: Roadmap Upcoming Features\n",
"spark.sql(\"\"\"\n",
" CREATE OR REPLACE VIEW vw_roadmap_upcoming AS\n",
" SELECT \n",
" feature_name,\n",
" feature_description,\n",
" product_name,\n",
" workload,\n",
" release_type,\n",
" release_status,\n",
" release_date,\n",
" is_preview,\n",
" is_planned,\n",
" last_modified,\n",
" CASE \n",
" WHEN release_date IS NULL THEN NULL\n",
" ELSE DATEDIFF(release_date, CURRENT_DATE())\n",
" END as days_until_release\n",
" FROM feature_releases_roadmap\n",
" WHERE is_planned = true\n",
" AND (release_date IS NULL OR release_date >= CURRENT_DATE())\n",
" ORDER BY release_date ASC NULLS LAST, last_modified DESC\n",
"\"\"\")\n",
"print(\"βœ… vw_roadmap_upcoming - Planned/upcoming features\")\n",
"\n",
"# View 2: Active Preview Features\n",
"spark.sql(\"\"\"\n",
" CREATE OR REPLACE VIEW vw_active_preview_features AS\n",
" SELECT \n",
" feature_name,\n",
" workload,\n",
" setting_name,\n",
" days_since_release,\n",
" similarity_score,\n",
" release_date,\n",
" detected_date,\n",
" is_enabled\n",
" FROM preview_features_active\n",
" WHERE is_enabled = true\n",
" ORDER BY detected_date DESC\n",
"\"\"\")\n",
"print(\"βœ… vw_active_preview_features - Currently enabled previews\")\n",
"\n",
"# View 3: Critical Alerts\n",
"spark.sql(\"\"\"\n",
" CREATE OR REPLACE VIEW vw_critical_alerts AS\n",
" SELECT \n",
" alert_id,\n",
" feature_name,\n",
" workload,\n",
" alert_type,\n",
" severity,\n",
" message,\n",
" alert_date,\n",
" acknowledged\n",
" FROM feature_alerts\n",
" WHERE acknowledged = false \n",
" AND severity IN ('Critical', 'Warning')\n",
" ORDER BY \n",
" CASE severity \n",
" WHEN 'Critical' THEN 1 \n",
" WHEN 'Warning' THEN 2 \n",
" ELSE 3 \n",
" END,\n",
" alert_date DESC\n",
"\"\"\")\n",
"print(\"βœ… vw_critical_alerts - Unacknowledged critical/warning alerts\")\n",
"\n",
"# View 4: Feature Release Timeline\n",
"spark.sql(\"\"\"\n",
" CREATE OR REPLACE VIEW vw_feature_timeline AS\n",
" SELECT \n",
" feature_name,\n",
" product_name,\n",
" workload,\n",
" release_type,\n",
" release_status,\n",
" is_preview,\n",
" is_planned,\n",
" is_shipped,\n",
" release_date,\n",
" CASE \n",
" WHEN release_date IS NULL THEN NULL\n",
" ELSE DATEDIFF(CURRENT_DATE(), release_date)\n",
" END as days_since_release,\n",
" last_modified\n",
" FROM feature_releases_roadmap\n",
" ORDER BY release_date DESC NULLS LAST\n",
"\"\"\")\n",
"print(\"βœ… vw_feature_timeline - Complete release timeline\")\n",
"\n",
"print(\"\\nβœ… All SQL views created successfully\")"
]
},
{
"cell_type": "markdown",
"id": "9af215ea",
"metadata": {},
"source": [
"## βœ… Setup Complete!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cb7dc56a",
"metadata": {},
"outputs": [],
"source": [
"print(\"\\n\" + \"=\" * 70)\n",
"print(\"πŸŽ‰ FEATURE TRACKING SETUP COMPLETED!\")\n",
"print(\"=\" * 70)\n",
"\n",
"# Verify tables\n",
"tables = [\"feature_releases_roadmap\", \"preview_features_active\", \"feature_alerts\"]\n",
"print(\"\\nπŸ“‹ Tables created:\")\n",
"for table in tables:\n",
" try:\n",
" count = spark.read.format(\"delta\").table(table).count()\n",
" print(f\" βœ… {table}: {count} rows\")\n",
" except Exception as e:\n",
" print(f\" ❌ {table}: ERROR - {e}\")\n",
"\n",
"# Verify views\n",
"views = [\"vw_roadmap_upcoming\", \"vw_active_preview_features\", \"vw_critical_alerts\", \"vw_feature_timeline\"]\n",
"print(\"\\nπŸ“‹ Views created:\")\n",
"for view in views:\n",
" try:\n",
" spark.sql(f\"SELECT * FROM {view} LIMIT 1\")\n",
" print(f\" βœ… {view}\")\n",
" except Exception as e:\n",
" print(f\" ❌ {view}: ERROR - {e}\")\n",
Comment on lines +290 to +305 (Copilot AI, Nov 22, 2025)

Missing error handling in verification code. The verification loops on lines 290-295 and 300-305 use broad `except Exception as e` clauses that catch all exceptions but only print them. If critical tables or views fail to create, the notebook continues without alerting the user that setup is incomplete. Consider either: 1) re-raising critical exceptions after logging, 2) maintaining a count of failures and warning at the end if any occurred, or 3) using a success flag that prevents subsequent steps from running if setup failed.
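A minimal sketch of option 2 (counting failures and raising at the end), assuming, as the notebook does, that `spark` is pre-initialized by the Fabric environment; the RuntimeError message is illustrative:

# Sketch: verification with a failure count. Assumes `spark` is pre-initialized
# by the Fabric notebook environment, as elsewhere in this notebook.
failures = 0

tables = ["feature_releases_roadmap", "preview_features_active", "feature_alerts"]
for table in tables:
    try:
        count = spark.read.format("delta").table(table).count()
        print(f"  OK   {table}: {count} rows")
    except Exception as e:
        failures += 1
        print(f"  FAIL {table}: {e}")

views = ["vw_roadmap_upcoming", "vw_active_preview_features", "vw_critical_alerts", "vw_feature_timeline"]
for view in views:
    try:
        spark.sql(f"SELECT * FROM {view} LIMIT 1")
        print(f"  OK   {view}")
    except Exception as e:
        failures += 1
        print(f"  FAIL {view}: {e}")

# Raise so a scheduled run or pipeline registers the incomplete setup
# instead of reporting success.
if failures:
    raise RuntimeError(f"Feature tracking setup incomplete: {failures} object(s) failed verification")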
"\n",
"print(\"\\n\" + \"=\" * 70)\n",
"print(\"πŸ“š Next Step:\")\n",
"print(\"=\" * 70)\n",
"print(\"\\n β†’ Run 'Load_Feature_Tracking' notebook to populate the tables\")\n",
"print(\"\\nπŸ’‘ Schedule Load_Feature_Tracking to run daily for continuous monitoring\")\n",
"print(\"\\n\" + \"=\" * 70)"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,12 @@
{
"$schema": "https://developer.microsoft.com/json-schemas/fabric/gitIntegration/platformProperties/2.0.0/schema.json",
"metadata": {
"type": "Notebook",
"displayName": "Load_Feature_Tracking",
"description": "Complete feature tracking pipeline - fetch releases, detect previews, generate alerts"
},
"config": {
"version": "2.0",
"logicalId": "00000000-0000-0000-0000-000000000000"
}
}