Frpt feature releases preview tracking #319
Conversation
- Created a new pipeline `Load_Feature_Tracking_E2E` for tracking feature releases, preview features, and alerts.
- Added `Setup_Feature_Tracking_Tables` notebook for one-time setup of Delta tables.
- Implemented `Setup_Feature_Tracking_Tables_GpsApi` notebook to enhance feature tracking with roadmap data.
- Defined schemas and created Delta tables: `feature_releases`, `preview_features_active`, `feature_alerts`, and `feature_releases_roadmap`.
- Added helper views for SQL querying: `vw_active_preview_features`, `vw_critical_alerts`, `vw_feature_timeline`, and `vw_roadmap_upcoming`.
- Included verification steps to ensure tables and views are created successfully.
…cking pipeline
- Created a new notebook for Load Feature Tracking with the complete feature tracking pipeline.
- Implemented API calls to fetch feature releases from the Fabric GPS API.
- Transformed API data to a defined schema for further processing.
- Added functionality to write feature releases to Delta Lake.
- Implemented detection of activated preview features and mapping to tenant settings.
- Generated alerts based on business rules for new previews, long-running previews, and low confidence matches.
- Summarized and displayed statistics for feature tracking, activated previews, and generated alerts.
…ub.com/Keayoub/fabric-toolbox into FRPT-Feature-Releases-Preview-Tracking
- Created a new notebook for the Load Feature Tracking process, which includes fetching releases, detecting previews, and generating alerts.
- Implemented the necessary code to transform and write feature release data to Delta tables.
- Added a new Data Pipeline to orchestrate the execution of the Load Feature Tracking notebook.
- Configured the pipeline with appropriate parameters and dependencies for seamless execution.
Pull request overview
This PR introduces a comprehensive Fabric Feature Releases & Preview Tracking (FRPT) system that monitors Microsoft Fabric feature releases, roadmap items, and activated preview features within tenant environments. The solution fetches data from the Fabric GPS API, uses fuzzy matching to correlate tenant settings with preview features, and generates alerts for important feature lifecycle events.
Key Changes:
- Added two new notebooks for setup and data loading with complete ETL pipeline for 800+ Fabric features
- Implemented fuzzy matching algorithm to detect activated preview features with configurable similarity thresholds
- Created automated alert generation system for new previews, long-running previews (>90 days), and low-confidence matches
- Added data pipeline orchestration and integrated into existing FUAM deployment configuration
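The three alert rules above can be sketched as a small classifier. This is an illustrative assumption throughout: the dict key names, the 0.7 low-confidence cutoff, and the severity mapping are not confirmed by the PR; only the rule names and the 90-day threshold come from the description.

```python
from datetime import datetime

LONG_RUNNING_DAYS = 90          # threshold stated in the PR description
LOW_CONFIDENCE_THRESHOLD = 0.7  # hypothetical cutoff, not from the PR

def classify_alerts(preview, today=None):
    """Apply the three business rules to one activated preview feature.
    `preview` is a plain dict with assumed keys: is_new, activated_on, match_score."""
    today = today or datetime.now()
    alerts = []
    if preview.get("is_new"):
        alerts.append(("New Preview", "Info"))
    if (today - preview["activated_on"]).days > LONG_RUNNING_DAYS:
        alerts.append(("Long-Running Preview", "Warning"))
    if preview["match_score"] < LOW_CONFIDENCE_THRESHOLD:
        alerts.append(("Low Confidence Match", "Critical"))
    return alerts
```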
Reviewed changes
Copilot reviewed 8 out of 8 changed files in this pull request and generated 16 comments.
| File | Description |
|---|---|
| 01_Setup_Feature_Tracking.Notebook | One-time setup creating three Delta tables and four SQL views for feature tracking |
| 02_Load_Feature_Tracking.Notebook | Main ETL logic fetching from Fabric GPS API, performing fuzzy matching, and generating alerts |
| Load_Feature_Tracking_E2E.DataPipeline | Pipeline orchestration invoking the load notebook with configurable session tags |
| deployment_order.json | Updated deployment sequence adding three new FUAM artifacts with unique identifiers |
| README.md | Documentation update adding feature tracking to the list of extracted data sources |
"outputs": [],
"source": [
"# API Configuration\n",
"fabric_gps_api_url = \"https://fabric-gps.com/api/releases\"\n",
Copilot AI · Nov 22, 2025
[nitpick] Hardcoded API URL without configuration flexibility. The URL https://fabric-gps.com/api/releases is hardcoded as a default parameter value. If this API endpoint changes or needs to be overridden for testing/different environments, users would need to modify the notebook code. Consider making this configurable through environment variables or a configuration file.
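A minimal sketch of this suggestion, allowing an environment-variable override; the variable name `FABRIC_GPS_API_URL` is an assumption, not part of the PR.

```python
import os

# Fall back to the current default only when no override is provided.
# FABRIC_GPS_API_URL is a hypothetical variable name for illustration.
DEFAULT_FABRIC_GPS_API_URL = "https://fabric-gps.com/api/releases"
fabric_gps_api_url = os.environ.get("FABRIC_GPS_API_URL", DEFAULT_FABRIC_GPS_API_URL)
```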
" try:\n",
" params = {\"page\": page, \"page_size\": page_size}\n",
" \n",
" if modified_within_days and modified_within_days <= 30:\n",
Copilot AI · Nov 22, 2025
Magic number without explanation. The value 30 on line 86 is used to check modified_within_days <= 30, and on line 130 the same condition is repeated. This appears to be an API constraint but isn't documented in the code. Consider adding a comment explaining why 30 is the threshold, or defining it as a named constant (e.g., MAX_MODIFIED_WITHIN_DAYS = 30) to improve code clarity and maintainability.
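A sketch of the named-constant refactor suggested here; the function name `build_params` is illustrative, not from the notebook.

```python
# Name the assumed 30-day API limit once instead of repeating the literal.
MAX_MODIFIED_WITHIN_DAYS = 30  # assumed GPS API constraint per the review comment

def build_params(page, page_size, modified_within_days=None):
    """Build query params, applying the recency filter only within the API limit."""
    params = {"page": page, "page_size": page_size}
    if modified_within_days and modified_within_days <= MAX_MODIFIED_WITHIN_DAYS:
        params["modified_within_days"] = modified_within_days
    return params
```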
"for table in tables:\n",
" try:\n",
" count = spark.read.format(\"delta\").table(table).count()\n",
" print(f\" ✅ {table}: {count} rows\")\n",
" except Exception as e:\n",
" print(f\" ❌ {table}: ERROR - {e}\")\n",
"\n",
"# Verify views\n",
"views = [\"vw_roadmap_upcoming\", \"vw_active_preview_features\", \"vw_critical_alerts\", \"vw_feature_timeline\"]\n",
"print(\"\\n📋 Views created:\")\n",
"for view in views:\n",
" try:\n",
" spark.sql(f\"SELECT * FROM {view} LIMIT 1\")\n",
" print(f\" ✅ {view}\")\n",
" except Exception as e:\n",
" print(f\" ❌ {view}: ERROR - {e}\")\n",
Copilot AI · Nov 22, 2025
Missing error handling in verification code. The verification loops on lines 290-295 and 300-305 use bare except Exception as e clauses that catch all exceptions but only print them. If critical tables or views fail to create, the notebook continues without alerting the user that setup is incomplete. Consider either: 1) re-raising critical exceptions after logging, 2) maintaining a count of failures and warning at the end if any failures occurred, or 3) using a success flag that prevents subsequent steps from running if setup failed.
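One way to implement option 2 above (count failures and raise at the end), sketched with an injectable check function so the Spark-specific calls can stay in comments; the helper name `verify_objects` is illustrative.

```python
def verify_objects(names, check_fn):
    """Run check_fn on each object name, collecting failures instead of only printing."""
    failures = []
    for name in names:
        try:
            check_fn(name)
            print(f"  OK {name}")
        except Exception as exc:
            failures.append((name, str(exc)))
            print(f"  FAIL {name}: {exc}")
    return failures

# Wiring into the notebook (spark calls as in the original cell):
# failures = verify_objects(tables, lambda t: spark.read.format("delta").table(t).count())
# failures += verify_objects(views, lambda v: spark.sql(f"SELECT * FROM {v} LIMIT 1"))
# if failures:
#     raise RuntimeError(f"Setup incomplete: {len(failures)} object(s) failed verification")
```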
" except:\n",
" release_date = None\n",
" \n",
" try:\n",
" last_modified_str = release.get(\"last_modified\")\n",
" last_modified = datetime.strptime(last_modified_str, \"%Y-%m-%d\") if last_modified_str else datetime.now()\n",
" except:\n",
Copilot AI · Nov 22, 2025
Missing bare exception handler specification. Using a bare except: clause (lines 171 and 177) catches all exceptions including system exits and keyboard interrupts, which is generally not recommended. Consider catching specific exception types (e.g., ValueError, TypeError) or at minimum use except Exception: to avoid catching BaseException subclasses like SystemExit and KeyboardInterrupt.
Suggested change:
-" except:\n",
+" except Exception:\n",
 " release_date = None\n",
 " \n",
 " try:\n",
 " last_modified_str = release.get(\"last_modified\")\n",
 " last_modified = datetime.strptime(last_modified_str, \"%Y-%m-%d\") if last_modified_str else datetime.now()\n",
-" except:\n",
+" except Exception:\n",
" except:\n",
" release_date = None\n",
" \n",
" try:\n",
" last_modified_str = release.get(\"last_modified\")\n",
" last_modified = datetime.strptime(last_modified_str, \"%Y-%m-%d\") if last_modified_str else datetime.now()\n",
" except:\n",
Copilot AI · Nov 22, 2025
Missing bare exception handler specification. Using a bare except: clause catches all exceptions including system exits and keyboard interrupts, which is generally not recommended. Consider catching specific exception types (e.g., ValueError, TypeError) or at minimum use except Exception: to avoid catching BaseException subclasses like SystemExit and KeyboardInterrupt.
Suggested change:
-" except:\n",
+" except (ValueError, TypeError):\n",
 " release_date = None\n",
 " \n",
 " try:\n",
 " last_modified_str = release.get(\"last_modified\")\n",
 " last_modified = datetime.strptime(last_modified_str, \"%Y-%m-%d\") if last_modified_str else datetime.now()\n",
-" except:\n",
+" except (ValueError, TypeError):\n",
" common_words = setting_words & feature_words\n",
" \n",
" if common_words:\n",
" score += len(common_words) * 0.1\n",
Copilot AI · Nov 22, 2025
Inconsistent similarity score capping. On line 390, the similarity score is boosted by len(common_words) * 0.1, which could theoretically push the score above 1.0 if there are many common words. While the SequenceMatcher.ratio() returns a value between 0-1, the boost could make the final score exceed this range. Consider capping the final score at 1.0 using min(score, 1.0) after the boost to maintain consistency with the expected 0-1 range documented throughout the code.
Suggested change:
 " score += len(common_words) * 0.1\n",
+" score = min(score, 1.0)\n",
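Combining the common-word boost with the suggested cap, a self-contained sketch (the constant name and function name are illustrative; the 0.1 boost comes from the diff):

```python
from difflib import SequenceMatcher

COMMON_WORD_BOOST = 0.1  # boost per shared word, value taken from the diff

def similarity_score_capped(setting_name, feature_name):
    """SequenceMatcher ratio boosted for shared words, capped at 1.0."""
    score = SequenceMatcher(None, setting_name.lower(), feature_name.lower()).ratio()
    common = set(setting_name.lower().split()) & set(feature_name.lower().split())
    score += len(common) * COMMON_WORD_BOOST
    return min(score, 1.0)  # keep the score inside the documented 0-1 range
```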
" for setting in settings_list:\n",
" setting_name = setting[\"settingName\"]\n",
" \n",
" if not setting[\"enabled\"]:\n",
" continue\n",
" \n",
" best_match = None\n",
" best_score = 0.0\n",
" \n",
" for feature in features_list:\n",
" feature_name = feature[\"feature_name\"]\n",
" \n",
" # Calculate similarity\n",
" score = similarity_score(setting_name, feature_name)\n",
" \n",
" # Boost score for common words\n",
" setting_words = set(setting_name.lower().split())\n",
" feature_words = set(feature_name.lower().split())\n",
" common_words = setting_words & feature_words\n",
" \n",
" if common_words:\n",
" score += len(common_words) * 0.1\n",
" \n",
" if score > best_score and score > threshold:\n",
" best_score = score\n",
" best_match = feature\n",
Copilot AI · Nov 22, 2025
N+1 query pattern in similarity matching. The nested loop on lines 369-394 iterates through all settings (outer loop) and for each setting, iterates through all features (inner loop). This results in O(n*m) complexity where n=number of settings and m=number of features. For 800+ features and potentially hundreds of settings, this could be slow. Consider optimizing this algorithm, perhaps by using vectorized operations, creating an index, or using a more efficient matching algorithm.
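One possible optimization along these lines: a word-based inverted index that prunes candidate pairs before scoring, so each setting is compared only against features sharing at least one name word. The helper names are illustrative, not from the notebook.

```python
from collections import defaultdict

def build_word_index(features_list):
    """Index features by the lowercase words in their names (built once)."""
    index = defaultdict(list)
    for feature in features_list:
        for word in set(feature["feature_name"].lower().split()):
            index[word].append(feature)
    return index

def candidate_features(setting_name, index):
    """Return only features sharing at least one word with the setting name."""
    seen, candidates = set(), []
    for word in set(setting_name.lower().split()):
        for feature in index.get(word, []):
            if id(feature) not in seen:
                seen.add(id(feature))
                candidates.append(feature)
    return candidates
```

The inner loop then scores only the candidates. Note this trades recall for speed: a pair with no shared words can no longer match on raw `SequenceMatcher` ratio alone, so a full-scan fallback may still be wanted for unmatched settings.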
" common_words = setting_words & feature_words\n",
" \n",
" if common_words:\n",
" score += len(common_words) * 0.1\n",
Copilot AI · Nov 22, 2025
Magic number in similarity scoring boost. On line 390, the value 0.1 is used to boost the similarity score based on common words. This magic number affects the matching algorithm's behavior but lacks explanation. Consider defining this as a named constant (e.g., COMMON_WORD_BOOST = 0.1) and adding a comment explaining why this specific value was chosen, to improve maintainability and make tuning easier.
"typeProperties": {
"notebookId": "REPLACE_WITH_NOTEBOOK_ID",
Copilot AI · Nov 22, 2025
Placeholder values require manual replacement. The pipeline configuration contains "notebookId": "REPLACE_WITH_NOTEBOOK_ID" and "workspaceId": "REPLACE_WITH_WORKSPACE_ID" on lines 16-17. These placeholder values will cause the pipeline to fail if deployed without manual intervention. Consider adding documentation about replacing these values, or implementing an automated deployment script that populates these values dynamically.
Suggested change:
 "typeProperties": {
+// TODO: Replace the placeholder below with the actual Notebook ID before deployment.
 "notebookId": "REPLACE_WITH_NOTEBOOK_ID",
+// TODO: Replace the placeholder below with the actual Workspace ID before deployment.
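The automated-deployment route mentioned in the comment could be sketched as a small script that substitutes the placeholders before publishing; the function name and workflow are assumptions, only the placeholder strings come from the PR.

```python
import json

def fill_placeholders(pipeline_definition, notebook_id, workspace_id):
    """Replace the two deployment placeholders anywhere in a pipeline definition dict."""
    text = json.dumps(pipeline_definition)
    text = text.replace("REPLACE_WITH_NOTEBOOK_ID", notebook_id)
    text = text.replace("REPLACE_WITH_WORKSPACE_ID", workspace_id)
    return json.loads(text)
```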
" alerted_combos = set([\n",
" (row[\"feature_id\"], row[\"alert_type\"]) \n",
" for row in df_historical.select(\"feature_id\", \"alert_type\").distinct().collect()\n",
" ])\n",
Copilot AI · Nov 22, 2025
Potential bug in alert deduplication logic. The deduplication on lines 536-539 only considers (feature_id, alert_type) combinations. However, for the "Low Confidence Match" alert type, the same feature could be matched to multiple different settings with low confidence scores. This logic would only alert once for the first low-confidence match and ignore subsequent ones for the same feature, even if they involve different settings. Consider including setting_name in the deduplication key for this alert type: (feature_id, alert_type, setting_name) for low confidence matches.
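A sketch of the suggested composite key, shown over plain dicts rather than Spark rows; the column names follow the review comment.

```python
def alert_dedup_key(alert):
    """Deduplication key for an alert record. Low-confidence matches also key
    on setting_name, so the same feature matched to different settings
    produces separate alerts instead of being suppressed after the first."""
    if alert["alert_type"] == "Low Confidence Match":
        return (alert["feature_id"], alert["alert_type"], alert.get("setting_name"))
    return (alert["feature_id"], alert["alert_type"])
```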
Thank you very much @Keayoub for this contribution! It looks very good. Thank you for your patience!
Hello @ggintli, thanks a lot for letting me include this change. It came from many customers asking about it, so I started implementing it. I'm also working on enhancements based on the reviews and recommendations you mentioned.
@copilot open a new pull request to apply changes based on the comments in this thread |
Fabric Feature Releases & Preview Tracking (Fabric FRPT)
Overview
This feature adds comprehensive tracking of Microsoft Fabric feature releases, roadmap items, and activated preview features within tenant environments.
Components
Notebooks
- `01_Setup_Feature_Tracking.Notebook` - One-time setup that creates tables and SQL views
- `02_Load_Feature_Tracking.Notebook` - Daily data load from Fabric GPS API, detects active previews, generates alerts

Pipeline
- `Load_Feature_Tracking_E2E.DataPipeline` - Orchestrates the daily feature tracking data refresh

Data Model
Tables Created
1. `feature_releases_roadmap` - Tracks 800+ Microsoft Fabric features from the GPS API.
2. `preview_features_active` - Monitors activated preview features in the tenant.
3. `feature_alerts` - Generates alerts for feature lifecycle events.
SQL Views
1. `vw_roadmap_upcoming` - Shows upcoming planned features from the roadmap.
2. `vw_active_preview_features` - Lists currently enabled preview features with days active.
3. `vw_critical_alerts` - Shows unacknowledged critical and warning alerts.
4. `vw_feature_timeline` - Complete timeline of feature releases across all statuses.
Alert Types
Info Alerts
Warning Alerts
Critical Alerts
Usage
Initial Setup
- Run `01_Setup_Feature_Tracking.Notebook` once to create tables and views

Daily Execution
- Schedule `Load_Feature_Tracking_E2E.DataPipeline` to run daily
- Or run `02_Load_Feature_Tracking.Notebook` directly

Querying Data
View all active preview features:
Check critical alerts:
See upcoming roadmap features:
Acknowledge an alert:
Sample Results
Configuration Parameters
API Configuration
Alert Thresholds
Alert Severity Levels
Integration
- Fabric GPS API (`https://fabric-gps.com/api/releases`)
- `FUAM_Lakehouse`
- `tenant_settings` table (from FUAM)
- `requests`, `difflib`, `datetime`

Technical Details
Fuzzy Matching Algorithm
Uses Python's `difflib.SequenceMatcher` to correlate tenant settings with preview features.

Data Refresh Strategy
- `feature_releases_roadmap`: merged on `feature_id` and `last_modified`
- `preview_features_active`: merged on `feature_id` and `setting_name`
- `feature_alerts`: deduplicated on (`feature_id` + `alert_type`)

Lakehouse Binding
All notebooks use the FUAM standard lakehouse binding:
Files Added/Modified
New Files
- monitoring/fabric-unified-admin-monitoring/src/01_Setup_Feature_Tracking.Notebook
- monitoring/fabric-unified-admin-monitoring/src/02_Load_Feature_Tracking.Notebook
- monitoring/fabric-unified-admin-monitoring/src/Load_Feature_Tracking_E2E.DataPipeline
- monitoring/fabric-unified-admin-monitoring/config/deployment_order.json

Benefits
✅ Visibility: Track 800+ Fabric features across all workloads
✅ Compliance: Monitor which preview features are activated in tenant
✅ Planning: View upcoming roadmap features with release dates
✅ Risk Management: Alerts for long-running or uncertain preview activations
✅ Automation: Daily refresh keeps data current without manual intervention
✅ Integration: Works seamlessly with existing FUAM infrastructure
✅ Governance: Historical tracking of feature activations and alerts
Troubleshooting
Views Not Found
Issue: `Invalid object name 'vw_critical_alerts'`
Solution: Run `01_Setup_Feature_Tracking.Notebook` to create views

No Active Previews Detected
Issue: `preview_features_active` table is empty
Solution: Check that the `tenant_settings` table is populated and contains enabled features

Low Similarity Scores
Issue: Many critical alerts for low confidence matches
Solution: Tune the `SIMILARITY_MATCH_THRESHOLD` parameter

API Connection Issues
Issue: Error fetching from Fabric GPS API
Solution:
Support
For issues or questions, please refer to the FUAM documentation or create an issue in the repository.