This directory serves as a centralized collection of utility notebooks. Unlike production code, these notebooks address specific one-time or occasional administrative needs that arise when supporting Fiddler deployments. Field engineers often develop scripts to solve unique challenges that may not warrant inclusion in the main product but are invaluable when similar situations arise with other customers.
- Fiddler instance URL and valid API token
- Administrative access to view / edit (may not be needed for every tool)
When adding a new notebook:
- Include clear prerequisites
- Document any conditional logic
- Add a summary to this README
- Ensure sensitive information (tokens, URLs) is not committed
A utility for analyzing and extracting hierarchical information about a Fiddler environment:
- Collects comprehensive statistics on projects, models, and features
- Captures model creation and update timestamps (
created_at,updated_at) - Generates summary metrics like average models per project and features per model
- Creates exportable dataframes for further analysis
- Visualizes the top models by feature count
- Provides temporal insights including newest/oldest models and recent updates
Outputs:
- Provides high-level environment statistics (totals, averages, etc.)
- Shows ranked lists of projects by model count and models by feature count
- Enhanced statistics including timestamp analysis (date ranges, update patterns, temporal coverage)
env_stats__overview.csv- Main export table with project, model, version_name, created_at, updated_atenv_stats__flattened_hierarchy.csv- Detailed feature-level data enriched with timestamps
A comprehensive utility for modifying alert thresholds across all models and projects. Specifically:
- Identifies and fixes percentage-based data integrity alerts with incorrect threshold scales
- Scans all projects and models for alerts with percentage metrics
- Converts thresholds from decimal format (0-1) to percentage format (0-100)
- Provides detailed logging and summary statistics
Prerequisites: Administrative access to modify alerts
Execution logic:
- Only modifies alerts with raw value comparisons (skips time-period comparison alerts)
- Only updates thresholds that are between 0-1 (already assumes these are improperly scaled)
- Handles both critical and warning thresholds
A utility for uploading pre-calculated feature importance values to a Fiddler model:
- Loads feature importance values from a CSV file
- Validates features against model input schema
- Uploads the values to a specified model and version
Prerequisites:
- CSV with columns 'feature_name' and 'feature_imp'
- Project name, model name, and version
Validation logic:
- Checks for missing features (model inputs not in CSV)
- Checks for extra features (CSV features not in model)
- Removes extra features before upload to prevent errors
A comprehensive testing utility to verify Fiddler's alert functionality and email delivery:
- Creates a test project and model with a synthetic customer churn dataset
- Configures all available Fiddler alert types with appropriate thresholds
- Generates and publishes events that intentionally violate alert rules
- Tests email delivery through corporate networks/firewalls
Prerequisites:
- Email address(es) for receiving test alerts
Alert types demonstrated:
- Data Integrity (range, null, and type violations)
- Data Drift (distribution shift via Jensen-Shannon Distance)
- Traffic (low volume detection)
- Performance (model precision degradation)
- Custom Metric (business-relevant metrics like revenue loss)
Usage notes:
- Includes troubleshooting tips for email delivery issues
- Creates a complete self-contained testing environment
- Publishes violation events in batches to avoid timeouts
A utility notebook for copying segment definitions between model versions:
- Retrieves all segments from a source model version
- Creates identical segments in a target model version
- Preserves segment names, descriptions, and filter definitions
- Handles bulk transfer of multiple segments
Prerequisites:
- Source and target model versions must exist
- Fiddler URL and valid API token
- Administrative access to create segments
Usage notes:
- Useful when promoting models to new versions
- Maintains segment configuration consistency across versions
- Automates what would otherwise be manual segment recreation
A utility script for modifying immutable alert properties across a Fiddler deployment:
- Programmatically converts monthly bin alerts to daily bin alerts across all projects and models
- Handles immutable alert properties by creating new alerts with modified parameters
- Preserves all other alert configuration details during conversion
- Provides comprehensive logging and progress tracking
Prerequisites:
- Fiddler URL and valid API token
- Administrative access to create and delete alerts
Key features:
- Traverses entire Fiddler hierarchy (projects → models → alerts)
- Creates duplicate alerts with "_FIXED" suffix in name and modified bin size
- Deletes original alerts after successful duplication
- Includes robust error handling and progress tracking with tqdm
- Detailed logging with separate app and Fiddler library log files
Usage notes:
- Addresses the limitation that certain alert attributes (like bin_size) cannot be modified after creation
- Can be adapted for other immutable property changes beyond bin size
- Useful for batch modifications that would otherwise require manual recreation of alerts
- Maintains alert continuity by creating replacement alerts before deleting originals
A utility for creating and organizing dashboards using existing charts:
- Creates dashboards with specific chart arrangements using Fiddler's API
- Positions charts in a customizable grid layout
- Maps chart titles to their UUIDs for proper dashboard configuration
- Sets the created dashboard as the default for a model
Prerequisites:
- Existing model with pre-created charts
- Fiddler URL and valid API token
Key features:
- Supports precise positioning of charts in a grid (position_x, position_y, width, height)
- Allows export/import of dashboard configurations via YAML
- Converts chart titles to the required UUID format automatically
- Sets time filters and timezone preferences
Usage notes:
- Useful for standardizing dashboards across similar models
- Provides programmatic control over dashboard layout
- Can be adapted for bulk dashboard creation
A utility script for validating consistency between model schema and specification:
- Compares columns defined in the model schema with those referenced in the model specification
- Identifies columns present in spec but missing from schema (potential issues)
- Identifies columns present in schema but not referenced in spec (informational)
- Provides detailed reporting on schema-spec alignment
Prerequisites:
- Fiddler URL and valid API token
- Model ID of the model to validate
Key features:
- Validates all spec categories: inputs, outputs, targets, decisions, metadata, custom_features
- Provides clear visual indicators (✅ for success,
⚠️ for warnings, ℹ️ for info) - Reports total column counts for both schema and spec
- Lists specific column names for any discrepancies found
Usage notes:
- Essential for troubleshooting model configuration issues
- Helps ensure data consistency and proper model monitoring setup
- Can be run as a health check after model updates or schema changes
- Useful for debugging publishing or monitoring issues related to column mismatches
A utility for programmatically adding charts to Fiddler projects and models:
- Uses Fiddler's API to create charts without using the UI
- Extracts chart definitions from browser network requests
- Handles mapping of project, model, and baseline IDs automatically
- Supports custom metrics and segment-specific charts
Prerequisites:
- Fiddler URL and valid API token
- Existing project and model
- Chart definition (obtained from browser network inspection)
Key features:
- Preserves chart metadata including title, description, and visualization type
- Configures time ranges and bin sizes for temporal charts
- Maps segment names to their corresponding IDs
- Supports custom metrics by name reference
Usage notes:
- Useful for bulk chart creation across multiple models
- Enables chart replication between environments
- Chart definitions can be captured from existing charts via browser dev tools
- Chart definitions can be modified programmatically before creation
A utility for fetching performance metrics for binary classification models from the Fiddler API:
- Retrieves key performance metrics (accuracy, precision, recall, F1 score, AUC) via API
- Supports different data sources (PRODUCTION, BASELINE, VALIDATION)
- Exports metrics to CSV format for further analysis
- Provides comprehensive error handling and user feedback
Prerequisites:
- Fiddler URL and valid API token
- Model ID of the target binary classification model
- Model must have performance data available in the specified environment
Key features:
- Uses the
/v3/analytics/metricsAPI endpoint with POST requests - Handles both list and dictionary response formats from the API
- Validates metric availability and reports missing metrics
- Creates formatted CSV output with metric names and values
- Includes detailed logging and status indicators
Usage notes:
- Designed specifically for binary classification models
- Can be easily modified to support additional metric types
- Useful for automated reporting and metric extraction workflows
- CSV output can be integrated into broader analytics pipelines
A quick utility script for retrieving column name to column ID mappings from the Fiddler API:
- Fetches model schema directly via Fiddler API endpoint
- Returns dictionary mapping column names to their internal UUIDs
- Useful for debugging and understanding internal model structure
- Simple standalone script with minimal dependencies
Prerequisites:
- Fiddler instance URL
- Valid API token
- Model ID (UUID) of the target model
Key features:
- Direct API call to
/v3/models/{model_id}endpoint - Parses schema from API response
- Returns clean dictionary of column name → column ID mappings
- Includes example usage in main block
Usage notes:
- Helpful for troubleshooting issues related to column references
- Can be used to understand how Fiddler internally tracks columns
- Useful when working with unofficial API endpoints that require column IDs
- Minimal error handling for quick debugging use
A data preparation notebook for NLP examples using the 20Newsgroups dataset:
- Fetches the public 20Newsgroups dataset from scikit-learn
- Groups original 20 topics into 6 general categories (computer, politics, recreation, science, religion, forsale)
- Applies text preprocessing and filtering
- Generates embeddings using both TF-IDF and OpenAI text-embedding-ada-002
- Exports preprocessed data and embeddings to CSV files
Prerequisites:
- scikit-learn for dataset access
- OpenAI API key (set as environment variable
OPENAI_API_KEY) - pandas, numpy for data manipulation
Key features:
- Filters dataset to remove headers, footers, and quotes
- Applies token and string length limits for OpenAI compatibility
- Batch processing for OpenAI embeddings (up to 2000 samples per batch)
- TF-IDF vectorization with customizable parameters
- Outputs three CSV files: preprocessed text, OpenAI embeddings, TF-IDF embeddings
Usage notes:
- Generates reusable assets for NLP model monitoring examples
- Can be adapted for other text classification datasets
- OpenAI embeddings may incur API costs
- Useful for creating baseline datasets for text monitoring use cases
A utility for bulk generation of custom metrics and segments from pipe-delimited template files:
- Creates multiple custom metrics and segments from a single template definition
- Supports placeholder substitution for generating related objects (e.g., per-task-type metrics)
- Includes dry-run mode for previewing generated objects before creation
- Validates FQL syntax and column references against model schema
- Detects and skips duplicate objects
Prerequisites:
- Fiddler URL and valid API token
- Target project and model must exist
- Pipe-delimited template file (
.psvformat)
Input file format:
Pipe-delimited file with columns: name_template, type (metric/segment), formula_template, description_template, expand_values
Key features:
- Template expansion with
{placeholder}syntax - Column validation ensures all referenced columns exist in model schema
- Duplicate detection prevents creating objects that already exist
- Detailed error reporting for validation failures
- Batch creation with progress tracking
Usage notes:
- Ideal for creating task-specific or cohort-specific metrics at scale
- Example template file:
custom_metrics_templates.psv - Supports single placeholder expansion (creates N objects from 1 template)
- Useful for standardizing metrics across multiple models or projects
A comprehensive model comparison utility using the fiddler_utils package:
- Compares two Fiddler models across multiple dimensions (configuration, schema, spec, assets)
- Supports cross-instance comparisons (different Fiddler deployments)
- Uses
ModelComparatorfromfiddler_utilswith flexible configuration presets - Generates formatted markdown reports with detailed statistics
Prerequisites:
fiddler_utilspackage installed (pip install -e .from repo root)- Source and target Fiddler instance URLs and API tokens
- Both models must exist
Key features:
- Compares configuration (task type, event columns, task parameters)
- Compares schema (columns, roles, data types, ranges, categories)
- Compares spec (inputs, outputs, targets, metadata, custom features)
- Compares assets (segments, custom metrics, alerts, baselines, charts)
- Flexible comparison presets:
all(),schema_only(),no_assets() - Exports results to markdown, JSON, CSV, or DataFrame
Usage notes:
- Use cases: version comparison, cross-environment validation, migration planning
- Programmatic access to comparison results for automation
- Detailed summary statistics with visual indicators
- Safe for comparing models across different Fiddler instances
A complete model export and import utility using the fiddler_utils package:
- Exports complete model definitions including schema, spec, task configuration, baselines, and related assets
- Imports models to different projects or instances without requiring a DataFrame
- Supports multiple import modes:
create_new,create_version,update_existing - Uses
ModelManagerfromfiddler_utilsfor deterministic and reproducible imports
Prerequisites:
fiddler_utilspackage installed (pip install -e .from repo root)- Source and target Fiddler instance URLs and API tokens
- Source model must exist (target model will be created)
Supported features:
- Model schema with complete column definitions (types, ranges, categories)
- Model spec (inputs, outputs, targets, metadata, decisions, custom features)
- Task configuration and parameters
- Rolling baselines (auto-created)
- Segments and custom metrics (with FQL validation)
- Model versioning support
Partial support:
- Static baselines (exported but require manual dataset publishing)
- Alerts (exported for reference but require manual creation due to metric ID mapping)
Key features:
- No DataFrame required for import (uses Model constructor pattern)
- JSON serialization for archival and version control
- Cross-instance transfer support with
ConnectionManager - Safe defaults (fails if model already exists in
create_newmode) - Display and comparison utilities included
Usage notes:
- Ideal for environment promotion (dev → staging → prod)
- Model artifact files must be manually re-uploaded after import
- Deterministic imports ensure consistency across environments
- Can create new versions of existing models with
create_versionmode
An asset-level export and import utility using the fiddler_utils package:
- Exports and imports segments, custom metrics, and charts between models
- Supports cross-instance transfers with automatic schema validation
- Uses
SegmentManager,CustomMetricManager, andChartManagerfromfiddler_utils - Includes dry-run mode and comprehensive error handling
Prerequisites:
fiddler_utilspackage installed (pip install -e .from repo root)- Source and target Fiddler instance URLs and API tokens
- Source and target models must exist
Key features:
- Schema comparison before import to identify compatibility issues
- FQL column reference validation (detects missing columns)
- Automatic duplicate detection with skip option
- Dry-run mode for validation without creating assets
- Detailed import results with success/skip/failure counts
- Chart export from dashboards with dependency analysis
Asset types supported:
- Segments (with FQL definition validation)
- Custom metrics (with FQL definition validation)
- Charts (using unofficial API, may change)
Usage notes:
- Validates that all columns referenced in FQL exist in target model
- Skips assets with invalid column references (with detailed error messages)
- Useful for migrating monitoring configurations between models
- Chart import includes baseline, custom metric, and segment dependency mapping
- Can filter assets by name during export (export subset of assets)
A comprehensive tutorial demonstrating FQL (Fiddler Query Language) utilities from the fiddler_utils package:
- Teaches FQL syntax fundamentals (column names, string values, operators, functions)
- Demonstrates all eight core FQL utility functions with practical examples
- Analyzes real segments and custom metrics from live Fiddler models
- Validates FQL expressions against model schemas for compatibility
- Shows column mapping workflows for migrating assets between models with different schemas
Prerequisites:
fiddler_utilspackage installed (adds parent directory to Python path)- Fiddler URL and valid API token for live examples (Sections 2-5)
- At least one model with segments or custom metrics for demonstration
Core utilities covered:
extract_columns()- Extract column references from FQL expressionsvalidate_fql_syntax()- Perform basic syntax validation and error detectionvalidate_column_references()- Check if all columns exist in target schemareplace_column_names()- Transform expressions with column name mappingsnormalize_expression()- Standardize formatting for expression comparisonget_fql_functions()- Identify all function calls in expressionsis_simple_filter()- Distinguish simple filters from aggregationssplit_fql_and_condition()- Break down complex AND conditions
Key features:
- Six comprehensive sections from fundamentals to advanced patterns
- Standalone examples that work without Fiddler connection
- Live Fiddler integration examples analyzing real assets
- Schema validation workflows with missing column detection
- End-to-end asset migration workflows with column mapping
- Advanced patterns including validation pipelines and batch transformations
- Interactive examples with dry-run mode for safe experimentation
Usage notes:
- Sections 1 and 5 work standalone without Fiddler connection
- Sections 2-4 require Fiddler access and analyze real model assets
- Includes detailed summary with best practices and common gotchas
- Useful for learning FQL manipulation before asset migration
- Complements
export_import_model_assets.ipynbwith FQL-specific capabilities - Demonstrates safe transformation workflows with comprehensive validation
These notebooks demonstrate practical solutions for common Fiddler administrative tasks that might be encountered by customer success and field AI engineers.
NEW - A comprehensive demonstration notebook for FQL (Fiddler Query Language) utilities that help developers work with Custom Metrics and Segments.
What it demonstrates:
Sections 1-5: Original FQL Utilities
- Standalone FQL parsing and validation
- Live Fiddler integration for analyzing existing metrics/segments
- Schema compatibility validation
- Column mapping for cross-model migration
- Advanced patterns and best practices
Section 6: UUID Reference Management (NEW)
- The UUID Problem: Custom Metrics cannot be edited - deletion + recreation generates new UUID, breaking all Chart and Alert references
- Reference discovery: Find all Charts/Alerts using a metric before updating
- Safe metric updates with automatic reference migration
safe_update_metric()function that handles the entire workflow
Section 7: FQL Testing Workflows (NEW)
- The Challenge: Fiddler has no "dry-run" API - must create metrics to validate FQL
- Local pre-validation (fast syntax/schema checks)
- Temporary metric testing in Fiddler (real validation)
- Automatic cleanup of test metrics
- Batch testing multiple definitions
Prerequisites:
- Fiddler environment with models containing segments or custom metrics
- API token with read access (write access for Sections 5-7)
- Python packages:
fiddler-client,fiddler_utils
Key utilities provided in fiddler_utils package:
Core FQL Module (fiddler_utils/fql.py):
extract_columns()- Find all column references in FQLvalidate_fql_syntax()- Catch syntax errors (quotes, parens)validate_column_references()- Check schema compatibilityreplace_column_names()- Transform expressions for migrationnormalize_expression()- Standardize formattingget_fql_functions()- Identify functions usedis_simple_filter()- Distinguish filters from aggregations
Reference Management (fiddler_utils/assets/references.py):
find_charts_using_metric()- Find charts referencing a metric UUIDfind_alerts_using_metric()- Find alerts monitoring a metric UUIDfind_all_metric_references()- Comprehensive reference discoverysafe_update_metric()- Update metric with automatic reference migrationmigrate_chart_metric_reference()- Update chart to reference new UUIDmigrate_alert_metric_reference()- Update alert to reference new UUID
FQL Testing (fiddler_utils/testing.py):
validate_metric_syntax_local()- Fast local validation (no API calls)test_metric_definition()- Test FQL by creating temp metric in Fiddlervalidate_and_preview_metric()- Complete validation workflowbatch_test_metrics()- Test multiple definitions efficientlycleanup_orphaned_test_metrics()- Remove leftover test metrics
Use cases:
- Cross-model migration: Copy custom metrics/segments between models with different schemas
- Safe metric updates: Update metric definitions without breaking dashboards
- Pre-deployment validation: Test FQL before creating production metrics
- Impact analysis: Understand dependencies before schema changes
- Iterative development: Test and refine FQL definitions efficiently
Critical limitations documented:
- Custom Metrics still cannot be truly edited (API limitation)
- Local validation cannot test Fiddler-specific functions (tp(), fp(), jsd(), etc.)
- Chart migration uses unofficial API (may change)
- Testing requires creating temporary metrics in Fiddler
Honest assessment: These utilities solve ~60% of Custom Metric pain points:
- ✅ Migration workflows (95% coverage)
- ✅ Reference management (85% coverage)
⚠️ Iterative development (40% coverage - still requires delete/recreate)⚠️ FQL validation (60% coverage - local checks only)- ❌ Metric editability (0% - requires API changes)