Conversation

Keayoub commented Nov 21, 2025

Fabric Feature Releases & Preview Tracking (Fabric FRPT)

Overview

This feature adds comprehensive tracking of Microsoft Fabric feature releases, roadmap items, and activated preview features within tenant environments.

Components

Notebooks

  • 01_Setup_Feature_Tracking.Notebook - One-time setup that creates tables and SQL views
  • 02_Load_Feature_Tracking.Notebook - Daily data load from Fabric GPS API, detects active previews, generates alerts

Pipeline

  • Load_Feature_Tracking_E2E.DataPipeline - Orchestrates the daily feature tracking data refresh

Data Model

Tables Created

1. feature_releases_roadmap

Tracks 800+ Microsoft Fabric features from the GPS API.

| Column | Type | Description |
| --- | --- | --- |
| feature_id | STRING | Unique identifier |
| feature_name | STRING | Feature display name |
| feature_description | STRING | Feature description |
| workload | STRING | Fabric workload (e.g., Data Factory, Power BI) |
| product_name | STRING | Product name |
| release_date | TIMESTAMP | Release or planned date |
| release_type | STRING | Type of release |
| release_status | STRING | Preview, Planned, Shipped |
| is_preview | BOOLEAN | If feature is in preview |
| is_planned | BOOLEAN | If feature is planned |
| is_shipped | BOOLEAN | If feature is shipped |
| last_modified | TIMESTAMP | Last modification date |
| source_url | STRING | Official documentation URL |
| source | STRING | Data source name |
| extracted_date | TIMESTAMP | Last refresh timestamp |

2. preview_features_active

Monitors activated preview features in the tenant.

| Column | Type | Description |
| --- | --- | --- |
| setting_name | STRING | Tenant setting name |
| feature_id | STRING | Links to feature_releases_roadmap |
| feature_name | STRING | Feature name |
| workload | STRING | Fabric workload |
| similarity_score | DOUBLE | Confidence score (0-1) of name matching |
| is_enabled | BOOLEAN | If feature is enabled |
| delegate_to_tenant | BOOLEAN | If delegated to tenant level |
| detected_date | TIMESTAMP | When feature was detected |
| release_date | TIMESTAMP | Feature release date |
| release_status | STRING | Release status |
| source_url | STRING | Documentation URL |
| days_since_release | INT | Days since feature was released |

3. feature_alerts

Generates alerts for feature lifecycle events.

| Column | Type | Description |
| --- | --- | --- |
| alert_id | STRING | Unique alert identifier |
| feature_id | STRING | Related feature |
| feature_name | STRING | Feature name |
| workload | STRING | Fabric workload |
| alert_type | STRING | New Preview Activated, Long-Running Preview, Low Confidence Match |
| severity | STRING | Info, Warning, Critical |
| message | STRING | Human-readable alert description |
| setting_name | STRING | Related tenant setting |
| similarity_score | DOUBLE | Match confidence score |
| days_since_release | INT | Days since release |
| alert_date | TIMESTAMP | When alert was generated |
| acknowledged | BOOLEAN | If alert has been reviewed |
| acknowledged_date | TIMESTAMP | When alert was acknowledged |
| acknowledged_by | STRING | Who acknowledged the alert |

SQL Views

1. vw_roadmap_upcoming

Shows upcoming planned features from the roadmap.

SELECT 
    feature_name,
    feature_description,
    product_name,
    workload,
    release_type,
    release_status,
    release_date,
    is_preview,
    is_planned,
    last_modified,
    CASE 
        WHEN release_date IS NULL THEN NULL
        ELSE DATEDIFF(release_date, CURRENT_DATE())
    END as days_until_release
FROM feature_releases_roadmap
WHERE is_planned = true
  AND (release_date IS NULL OR release_date >= CURRENT_DATE())
ORDER BY release_date ASC NULLS LAST, last_modified DESC

2. vw_active_preview_features

Lists currently enabled preview features with days active.

SELECT 
    feature_name,
    workload,
    setting_name,
    days_since_release,
    similarity_score,
    release_date,
    detected_date,
    is_enabled
FROM preview_features_active
WHERE is_enabled = true
ORDER BY detected_date DESC

3. vw_critical_alerts

Shows unacknowledged critical and warning alerts.

SELECT 
    alert_id,
    feature_name,
    workload,
    alert_type,
    severity,
    message,
    alert_date,
    acknowledged
FROM feature_alerts
WHERE acknowledged = false 
  AND severity IN ('Critical', 'Warning')
ORDER BY 
    CASE severity 
        WHEN 'Critical' THEN 1 
        WHEN 'Warning' THEN 2 
        ELSE 3 
    END,
    alert_date DESC

4. vw_feature_timeline

Complete timeline of feature releases across all statuses.

SELECT 
    feature_name,
    product_name,
    workload,
    release_type,
    release_status,
    is_preview,
    is_planned,
    is_shipped,
    release_date,
    CASE 
        WHEN release_date IS NULL THEN NULL
        ELSE DATEDIFF(CURRENT_DATE(), release_date)
    END as days_since_release,
    last_modified
FROM feature_releases_roadmap
ORDER BY release_date DESC NULLS LAST

Alert Types

Info Alerts

  • New Preview Activated: Newly activated preview feature detected in tenant settings

Warning Alerts

  • Long-Running Preview: Preview feature active for >90 days (may need review for GA transition)

Critical Alerts

  • Low Confidence Match: Feature matched with <50% confidence (manual review recommended)

Usage

Initial Setup

  1. Run 01_Setup_Feature_Tracking.Notebook once to create tables and views

Daily Execution

  1. Schedule Load_Feature_Tracking_E2E.DataPipeline to run daily
    • Or run 02_Load_Feature_Tracking.Notebook directly

Querying Data

View all active preview features:

SELECT * FROM vw_active_preview_features
ORDER BY days_since_release DESC

Check critical alerts:

SELECT * FROM vw_critical_alerts
ORDER BY alert_date DESC

See upcoming roadmap features:

SELECT * FROM vw_roadmap_upcoming 
WHERE days_until_release IS NOT NULL 
  AND days_until_release > 0
ORDER BY days_until_release ASC

Acknowledge an alert:

UPDATE feature_alerts 
SET 
    acknowledged = true,
    acknowledged_date = CURRENT_TIMESTAMP(),
    acknowledged_by = '<user_email>'
WHERE alert_id = '<alert_id>'

Sample Results

======================================================================
📊 FEATURE TRACKING SUMMARY
======================================================================

🔸 Feature Releases Roadmap:
   Total features: 810
   Preview features: 469
   Planned (roadmap): 278
   Shipped: 532

🔸 Activated Preview Features:
   Total activated: 92
   High confidence (≥0.7): 0
   Medium confidence (0.5-0.7): 33
   Low confidence (<0.5): 59

🔸 Alerts Generated:
   New alerts: 208
   Info: 92 | Warning: 57 | Critical: 59

🔸 Top Workloads (by feature count):
+----------------------------+-----+
|workload                    |count|
+----------------------------+-----+
|Real-Time Intelligence      |181  |
|Data Factory                |175  |
|Data Engineering            |88   |
|Fabric Developer Experiences|81   |
|Power BI                    |78   |
+----------------------------+-----+

Configuration Parameters

API Configuration

fabric_gps_api_url = "https://fabric-gps.com/api/releases"
modified_within_days = 90  # Fetch features modified within X days
page_size = 200  # API page size
include_planned = True  # Include planned features
include_shipped = True  # Include shipped features
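As an illustration, the pagination parameters above might be assembled as sketched below. The `build_params` helper and the `modified_since` query parameter are assumptions for this example, not a documented GPS API contract; only `page` and `page_size` appear in the notebook's request code.

```python
from datetime import datetime, timedelta

# Hypothetical helper mirroring the notebook's API configuration;
# the GPS API's actual query parameters may differ.
def build_params(page, page_size=200, modified_within_days=90):
    params = {"page": page, "page_size": page_size}
    # The notebook only applies a modified-since filter for short
    # windows (<= 30 days); this mirrors that observed behavior.
    if modified_within_days and modified_within_days <= 30:
        since = datetime.now() - timedelta(days=modified_within_days)
        params["modified_since"] = since.strftime("%Y-%m-%d")
    return params

print(build_params(1))                          # default 90-day window: no date filter
print(build_params(2, modified_within_days=7))  # short window: filter included
```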

Alert Thresholds

ALERT_DAYS_THRESHOLD = 90  # Alert if preview active >90 days
LOW_CONFIDENCE_THRESHOLD = 0.5  # Alert if similarity score <0.5
SIMILARITY_MATCH_THRESHOLD = 0.3  # Minimum similarity to consider a match
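A minimal sketch of how these thresholds could drive alert classification, paraphrasing the rules in the Alert Types section (the `classify_alert` function is hypothetical, not the notebook's exact code):

```python
ALERT_DAYS_THRESHOLD = 90       # Alert if preview active >90 days
LOW_CONFIDENCE_THRESHOLD = 0.5  # Alert if similarity score <0.5

def classify_alert(similarity_score, days_since_release):
    """Return (alert_type, severity) pairs for one activated preview feature."""
    alerts = []
    # Every newly detected activation raises an informational alert.
    alerts.append(("New Preview Activated", "Info"))
    if days_since_release is not None and days_since_release > ALERT_DAYS_THRESHOLD:
        alerts.append(("Long-Running Preview", "Warning"))
    if similarity_score < LOW_CONFIDENCE_THRESHOLD:
        alerts.append(("Low Confidence Match", "Critical"))
    return alerts

print(classify_alert(0.42, 120))  # all three rules fire
print(classify_alert(0.80, 10))   # only the informational alert
```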

Alert Severity Levels

SEVERITY_INFO = "Info"
SEVERITY_WARNING = "Warning"
SEVERITY_CRITICAL = "Critical"

Integration

  • Data Source: Fabric GPS API (https://fabric-gps.com/api/releases)
  • Lakehouse: FUAM_Lakehouse
  • Dependencies:
    • tenant_settings table (from FUAM)
    • Standard Python libraries: requests, difflib, datetime
    • PySpark for data processing

Technical Details

Fuzzy Matching Algorithm

Uses Python's difflib.SequenceMatcher to correlate tenant settings with preview features:

  • Normalizes names (lowercase comparison)
  • Calculates similarity ratio (0-1)
  • Boosts score for common words between setting and feature names
  • Matches above 0.3 threshold are considered potential activations
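The steps above can be sketched with `difflib` as follows. This is an approximation of the notebook's logic, not its exact code; the `min(..., 1.0)` cap is added here so the word boost cannot push the score past the documented 0-1 range.

```python
from difflib import SequenceMatcher

COMMON_WORD_BOOST = 0.1   # assumed boost per shared word, per the description above
MATCH_THRESHOLD = 0.3     # minimum similarity to consider a match

def match_score(setting_name, feature_name):
    """Fuzzy-match a tenant setting name against a feature name (0-1)."""
    # Normalized (lowercase) similarity ratio.
    score = SequenceMatcher(None, setting_name.lower(), feature_name.lower()).ratio()
    # Boost for words common to both names, capped at 1.0.
    common = set(setting_name.lower().split()) & set(feature_name.lower().split())
    score += len(common) * COMMON_WORD_BOOST
    return min(score, 1.0)

s = match_score("Datamart (Preview)", "Datamart")
print(round(s, 2), s > MATCH_THRESHOLD)
```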

Data Refresh Strategy

  • Feature releases: MERGE (upsert) based on feature_id and last_modified
  • Active previews: MERGE (upsert) based on feature_id and setting_name
  • Alerts: APPEND - only generates new alerts for changes (deduplicates based on feature_id + alert_type)
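The append-with-dedup step for alerts can be sketched in pure Python as below; in the notebook the historical `(feature_id, alert_type)` pairs are collected from the Delta table via Spark, but the filtering logic is the same.

```python
# Historical (feature_id, alert_type) pairs already present in feature_alerts.
alerted = {("f1", "New Preview Activated"), ("f2", "Long-Running Preview")}

candidates = [
    {"feature_id": "f1", "alert_type": "New Preview Activated"},  # duplicate, skipped
    {"feature_id": "f1", "alert_type": "Long-Running Preview"},   # new combination
    {"feature_id": "f3", "alert_type": "Low Confidence Match"},   # new feature
]

# Keep only combinations that have never been alerted before.
new_alerts = [
    a for a in candidates
    if (a["feature_id"], a["alert_type"]) not in alerted
]
print(len(new_alerts))  # only the two unseen combinations survive
```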

Lakehouse Binding

All notebooks use the FUAM standard lakehouse binding:

%%configure -f
{ 
    "defaultLakehouse": { 
        "name": "FUAM_Lakehouse"
    }
}

Files Added/Modified

New Files

  • monitoring/fabric-unified-admin-monitoring/src/01_Setup_Feature_Tracking.Notebook
  • monitoring/fabric-unified-admin-monitoring/src/02_Load_Feature_Tracking.Notebook
  • monitoring/fabric-unified-admin-monitoring/src/Load_Feature_Tracking_E2E.DataPipeline

Modified Files

  • monitoring/fabric-unified-admin-monitoring/config/deployment_order.json

Benefits

  • Visibility: Track 800+ Fabric features across all workloads
  • Compliance: Monitor which preview features are activated in the tenant
  • Planning: View upcoming roadmap features with release dates
  • Risk Management: Alerts for long-running or uncertain preview activations
  • Automation: Daily refresh keeps data current without manual intervention
  • Integration: Works seamlessly with existing FUAM infrastructure
  • Governance: Historical tracking of feature activations and alerts

Troubleshooting

Views Not Found

Issue: Invalid object name 'vw_critical_alerts'
Solution: Run 01_Setup_Feature_Tracking.Notebook to create views

No Active Previews Detected

Issue: preview_features_active table is empty
Solution: Check that tenant_settings table is populated and contains enabled features

Low Similarity Scores

Issue: Many critical alerts for low confidence matches
Solution:

  • Review the matches manually
  • Adjust SIMILARITY_MATCH_THRESHOLD parameter
  • Update feature names in GPS API if incorrect

API Connection Issues

Issue: Error fetching from Fabric GPS API
Solution:

  • Check internet connectivity
  • Verify API URL is accessible
  • Check for API rate limiting

Support

For issues or questions, please refer to the FUAM documentation or create an issue in the repository.

Keayoub and others added 6 commits November 19, 2025 14:40
- Created a new pipeline `Load_Feature_Tracking_E2E` for tracking feature releases, preview features, and alerts.
- Added `Setup_Feature_Tracking_Tables` notebook for one-time setup of Delta tables.
- Implemented `Setup_Feature_Tracking_Tables_GpsApi` notebook to enhance feature tracking with roadmap data.
- Defined schemas and created Delta tables: `feature_releases`, `preview_features_active`, `feature_alerts`, and `feature_releases_roadmap`.
- Added helper views for SQL querying: `vw_active_preview_features`, `vw_critical_alerts`, `vw_feature_timeline`, and `vw_roadmap_upcoming`.
- Included verification steps to ensure tables and views are created successfully.
…cking pipeline

- Created a new notebook for Load Feature Tracking with complete feature tracking pipeline.
- Implemented API calls to fetch feature releases from Fabric GPS API.
- Transformed API data to a defined schema for further processing.
- Added functionality to write feature releases to Delta Lake.
- Implemented detection of activated preview features and mapping to tenant settings.
- Generated alerts based on business rules for new previews, long-running previews, and low confidence matches.
- Summarized and displayed statistics for feature tracking, activated previews, and generated alerts.
- Created a new notebook for the Load Feature Tracking process, which includes fetching releases, detecting previews, and generating alerts.
- Implemented the necessary code to transform and write feature release data to Delta tables.
- Added a new Data Pipeline to orchestrate the execution of the Load Feature Tracking notebook.
- Configured the pipeline with appropriate parameters and dependencies for seamless execution.
Copilot AI (Contributor) left a comment

Pull request overview

This PR introduces a comprehensive Fabric Feature Releases & Preview Tracking (FRPT) system that monitors Microsoft Fabric feature releases, roadmap items, and activated preview features within tenant environments. The solution fetches data from the Fabric GPS API, uses fuzzy matching to correlate tenant settings with preview features, and generates alerts for important feature lifecycle events.

Key Changes:

  • Added two new notebooks for setup and data loading with complete ETL pipeline for 800+ Fabric features
  • Implemented fuzzy matching algorithm to detect activated preview features with configurable similarity thresholds
  • Created automated alert generation system for new previews, long-running previews (>90 days), and low-confidence matches
  • Added data pipeline orchestration and integrated into existing FUAM deployment configuration

Reviewed changes

Copilot reviewed 8 out of 8 changed files in this pull request and generated 16 comments.

| File | Description |
| --- | --- |
| 01_Setup_Feature_Tracking.Notebook | One-time setup creating three Delta tables and four SQL views for feature tracking |
| 02_Load_Feature_Tracking.Notebook | Main ETL logic fetching from the Fabric GPS API, performing fuzzy matching, and generating alerts |
| Load_Feature_Tracking_E2E.DataPipeline | Pipeline orchestration invoking the load notebook with configurable session tags |
| deployment_order.json | Updated deployment sequence adding three new FUAM artifacts with unique identifiers |
| README.md | Documentation update adding feature tracking to the list of extracted data sources |


"outputs": [],
"source": [
"# API Configuration\n",
"fabric_gps_api_url = \"https://fabric-gps.com/api/releases\"\n",

Copilot AI Nov 22, 2025


[nitpick] Hardcoded API URL without configuration flexibility. The URL https://fabric-gps.com/api/releases is hardcoded as a default parameter value. If this API endpoint changes or needs to be overridden for testing/different environments, users would need to modify the notebook code. Consider making this configurable through environment variables or a configuration file.

" try:\n",
" params = {\"page\": page, \"page_size\": page_size}\n",
" \n",
" if modified_within_days and modified_within_days <= 30:\n",

Copilot AI Nov 22, 2025


Magic number without explanation. The value 30 on line 86 is used to check modified_within_days <= 30, and on line 130 the same condition is repeated. This appears to be an API constraint but isn't documented in the code. Consider adding a comment explaining why 30 is the threshold, or defining it as a named constant (e.g., MAX_MODIFIED_WITHIN_DAYS = 30) to improve code clarity and maintainability.

Comment on lines +290 to +305
"for table in tables:\n",
" try:\n",
" count = spark.read.format(\"delta\").table(table).count()\n",
" print(f\" ✅ {table}: {count} rows\")\n",
" except Exception as e:\n",
" print(f\" ❌ {table}: ERROR - {e}\")\n",
"\n",
"# Verify views\n",
"views = [\"vw_roadmap_upcoming\", \"vw_active_preview_features\", \"vw_critical_alerts\", \"vw_feature_timeline\"]\n",
"print(\"\\n📋 Views created:\")\n",
"for view in views:\n",
" try:\n",
" spark.sql(f\"SELECT * FROM {view} LIMIT 1\")\n",
" print(f\" ✅ {view}\")\n",
" except Exception as e:\n",
" print(f\" ❌ {view}: ERROR - {e}\")\n",

Copilot AI Nov 22, 2025


Missing error handling in verification code. The verification loops on lines 290-295 and 300-305 use bare except Exception as e clauses that catch all exceptions but only print them. If critical tables or views fail to create, the notebook continues without alerting the user that setup is incomplete. Consider either: 1) re-raising critical exceptions after logging, 2) maintaining a count of failures and warning at the end if any failures occurred, or 3) using a success flag that prevents subsequent steps from running if setup failed.

Comment on lines +171 to +177
" except:\n",
" release_date = None\n",
" \n",
" try:\n",
" last_modified_str = release.get(\"last_modified\")\n",
" last_modified = datetime.strptime(last_modified_str, \"%Y-%m-%d\") if last_modified_str else datetime.now()\n",
" except:\n",

Copilot AI Nov 22, 2025


Missing bare exception handler specification. Using a bare except: clause (lines 171 and 177) catches all exceptions including system exits and keyboard interrupts, which is generally not recommended. Consider catching specific exception types (e.g., ValueError, TypeError) or at minimum use except Exception: to avoid catching BaseException subclasses like SystemExit and KeyboardInterrupt.

Suggested change
" except:\n",
" release_date = None\n",
" \n",
" try:\n",
" last_modified_str = release.get(\"last_modified\")\n",
" last_modified = datetime.strptime(last_modified_str, \"%Y-%m-%d\") if last_modified_str else datetime.now()\n",
" except:\n",
" except Exception:\n",
" release_date = None\n",
" \n",
" try:\n",
" last_modified_str = release.get(\"last_modified\")\n",
" last_modified = datetime.strptime(last_modified_str, \"%Y-%m-%d\") if last_modified_str else datetime.now()\n",
" except Exception:\n",

Comment on lines +171 to +177
" except:\n",
" release_date = None\n",
" \n",
" try:\n",
" last_modified_str = release.get(\"last_modified\")\n",
" last_modified = datetime.strptime(last_modified_str, \"%Y-%m-%d\") if last_modified_str else datetime.now()\n",
" except:\n",

Copilot AI Nov 22, 2025


Missing bare exception handler specification. Using a bare except: clause catches all exceptions including system exits and keyboard interrupts, which is generally not recommended. Consider catching specific exception types (e.g., ValueError, TypeError) or at minimum use except Exception: to avoid catching BaseException subclasses like SystemExit and KeyboardInterrupt.

Suggested change
" except:\n",
" release_date = None\n",
" \n",
" try:\n",
" last_modified_str = release.get(\"last_modified\")\n",
" last_modified = datetime.strptime(last_modified_str, \"%Y-%m-%d\") if last_modified_str else datetime.now()\n",
" except:\n",
" except (ValueError, TypeError):\n",
" release_date = None\n",
" \n",
" try:\n",
" last_modified_str = release.get(\"last_modified\")\n",
" last_modified = datetime.strptime(last_modified_str, \"%Y-%m-%d\") if last_modified_str else datetime.now()\n",
" except (ValueError, TypeError):\n",

" common_words = setting_words & feature_words\n",
" \n",
" if common_words:\n",
" score += len(common_words) * 0.1\n",

Copilot AI Nov 22, 2025


Inconsistent similarity score capping. On line 390, the similarity score is boosted by len(common_words) * 0.1, which could theoretically push the score above 1.0 if there are many common words. While the SequenceMatcher.ratio() returns a value between 0-1, the boost could make the final score exceed this range. Consider capping the final score at 1.0 using min(score, 1.0) after the boost to maintain consistency with the expected 0-1 range documented throughout the code.

Suggested change
" score += len(common_words) * 0.1\n",
" score += len(common_words) * 0.1\n",
" score = min(score, 1.0)\n",

Comment on lines +369 to +394
" for setting in settings_list:\n",
" setting_name = setting[\"settingName\"]\n",
" \n",
" if not setting[\"enabled\"]:\n",
" continue\n",
" \n",
" best_match = None\n",
" best_score = 0.0\n",
" \n",
" for feature in features_list:\n",
" feature_name = feature[\"feature_name\"]\n",
" \n",
" # Calculate similarity\n",
" score = similarity_score(setting_name, feature_name)\n",
" \n",
" # Boost score for common words\n",
" setting_words = set(setting_name.lower().split())\n",
" feature_words = set(feature_name.lower().split())\n",
" common_words = setting_words & feature_words\n",
" \n",
" if common_words:\n",
" score += len(common_words) * 0.1\n",
" \n",
" if score > best_score and score > threshold:\n",
" best_score = score\n",
" best_match = feature\n",

Copilot AI Nov 22, 2025


N+1 query pattern in similarity matching. The nested loop on lines 369-394 iterates through all settings (outer loop) and for each setting, iterates through all features (inner loop). This results in O(n*m) complexity where n=number of settings and m=number of features. For 800+ features and potentially hundreds of settings, this could be slow. Consider optimizing this algorithm, perhaps by using vectorized operations, creating an index, or using a more efficient matching algorithm.

" common_words = setting_words & feature_words\n",
" \n",
" if common_words:\n",
" score += len(common_words) * 0.1\n",

Copilot AI Nov 22, 2025


Magic number in similarity scoring boost. On line 390, the value 0.1 is used to boost the similarity score based on common words. This magic number affects the matching algorithm's behavior but lacks explanation. Consider defining this as a named constant (e.g., COMMON_WORD_BOOST = 0.1) and adding a comment explaining why this specific value was chosen, to improve maintainability and make tuning easier.

Comment on lines +15 to +16
"typeProperties": {
"notebookId": "REPLACE_WITH_NOTEBOOK_ID",

Copilot AI Nov 22, 2025


Placeholder values require manual replacement. The pipeline configuration contains "notebookId": "REPLACE_WITH_NOTEBOOK_ID" and "workspaceId": "REPLACE_WITH_WORKSPACE_ID" on lines 16-17. These placeholder values will cause the pipeline to fail if deployed without manual intervention. Consider adding documentation about replacing these values, or implementing an automated deployment script that populates these values dynamically.

Suggested change
"typeProperties": {
"notebookId": "REPLACE_WITH_NOTEBOOK_ID",
"typeProperties": {
// TODO: Replace the placeholder below with the actual Notebook ID before deployment.
"notebookId": "REPLACE_WITH_NOTEBOOK_ID",
// TODO: Replace the placeholder below with the actual Workspace ID before deployment.

Comment on lines +536 to +539
" alerted_combos = set([\n",
" (row[\"feature_id\"], row[\"alert_type\"]) \n",
" for row in df_historical.select(\"feature_id\", \"alert_type\").distinct().collect()\n",
" ])\n",

Copilot AI Nov 22, 2025


Potential bug in alert deduplication logic. The deduplication on lines 536-539 only considers (feature_id, alert_type) combinations. However, for the "Low Confidence Match" alert type, the same feature could be matched to multiple different settings with low confidence scores. This logic would only alert once for the first low-confidence match and ignore subsequent ones for the same feature, even if they involve different settings. Consider including setting_name in the deduplication key for this alert type: (feature_id, alert_type, setting_name) for low confidence matches.

ggintli (Collaborator) commented Nov 27, 2025

Thank you very much @Keayoub for this contribution!

It looks very good.
Please let us test this module on our side; since we are also preparing the next FUAM release, I would roll out this module together with the new release.

Thank you for your patience!
Best Regards,
ggintli

@ggintli ggintli added FUAM Fabric Unified Admin Monitoring Solution Accelerator enhancement New feature or request labels Nov 27, 2025
Keayoub (Author) commented Dec 1, 2025

Hello @ggintli, thanks a lot for letting me include this change. It comes from many customers asking about it, and I have started implementing it. I'm also working on enhancements based on the reviews and recommendations that you mentioned.

Keayoub (Author) commented Dec 1, 2025

@copilot open a new pull request to apply changes based on the comments in this thread
