A comprehensive Python application that extracts Intercom conversation data and generates two types of Gamma presentations:
- Voice of Customer Analysis - Monthly executive reports with specific metrics and insights
- General Purpose Trend Analysis - Flexible reports for any time period with customizable focus areas
Latest Update (v3.1.0): Migrated to the official Intercom Python SDK for improved reliability, type safety, and future compatibility. All custom Intercom API clients have been replaced with the official `python-intercom` SDK, which provides better error handling, automatic pagination, and built-in retry logic.
This tool now uses the official Intercom Python SDK (python-intercom) for all API interactions, providing:
- Type-safe Pydantic models for all API entities
- Built-in pagination with `AsyncPager` support
- Comprehensive error handling with specific exception types
- Automatic rate limiting and retry logic
- Modern async/await patterns for efficient data fetching
The SDK integration is wrapped in `IntercomSDKService`, which maintains backward compatibility with existing analyzers and services while leveraging the official SDK's capabilities.
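As a rough illustration of what such a wrapper does, the sketch below adapts a paginated client into the flat conversation list the analyzers consume. `StubPage`, `StubClient`, and `search_pages` are hypothetical stand-ins for illustration only, not the real SDK API:

```python
from dataclasses import dataclass
from typing import Any, Dict, Iterator, List

@dataclass
class StubPage:
    """Stand-in for one page of SDK search results."""
    items: List[Dict[str, Any]]

class StubClient:
    """Fakes paginated conversation search for illustration."""
    def __init__(self, conversations: List[Dict[str, Any]], page_size: int = 2):
        self._conversations = conversations
        self._page_size = page_size

    def search_pages(self) -> Iterator[StubPage]:
        # Yield fixed-size pages, as a paginated API would
        for i in range(0, len(self._conversations), self._page_size):
            yield StubPage(self._conversations[i:i + self._page_size])

class IntercomSDKService:
    """Adapts SDK pagination into the flat list existing analyzers expect."""
    def __init__(self, client: StubClient):
        self._client = client

    def fetch_all_conversations(self) -> List[Dict[str, Any]]:
        results: List[Dict[str, Any]] = []
        for page in self._client.search_pages():
            results.extend(page.items)  # drain every page, no fixed cap
        return results

service = IntercomSDKService(StubClient([{"id": str(i)} for i in range(5)]))
print(len(service.fetch_all_conversations()))  # 5 conversations across 3 pages
```

The real service would hold an SDK client and its pager objects in place of the stub, but the drain-every-page shape is the same.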
- Volume Metrics: Total conversations, AI resolution rate, response times
- Efficiency Metrics: Median first response time, handling time, resolution time
- Satisfaction: CSAT scores by user tier (Pro, Plus, Free)
- Channel Analysis: Chat vs Email performance
- Topic Breakdown: Billing, Product Questions, Account Questions
- Geographic Segmentation: Tier 1 countries with specific metrics
- Friction Points: Common customer pain points and escalations
- Success Stories: Positive customer feedback and quotes
- Flexible time periods (daily, weekly, monthly, custom ranges)
- Customizable metrics based on specific business questions
- Ad-hoc insights for one-off investigations
- Trend identification across any dimension
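The efficiency metrics above favor medians over means because support response times are heavily right-skewed; a minimal sketch of the median first response time calculation:

```python
from statistics import median
from typing import List, Optional

def median_first_response_minutes(response_times_sec: List[float]) -> Optional[float]:
    """Median first response time in minutes; the median resists skew
    from the occasional conversation that sat overnight."""
    if not response_times_sec:
        return None
    return median(response_times_sec) / 60

# Example: four fast replies plus one overnight outlier (8 hours)
times = [120, 180, 240, 300, 28800]
print(median_first_response_minutes(times))  # -> 4.0; the outlier barely matters
```

A mean over the same data would report roughly 99 minutes, which is why the report leads with medians.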
```bash
# Clone the repository
git clone <repository-url>
cd intercom-analyzer

# Install dependencies
pip install -r requirements.txt

# Copy environment template
cp .env.example .env
```

Edit `.env` with your API keys:

```bash
INTERCOM_ACCESS_TOKEN=your_intercom_token_here
OPENAI_API_KEY=your_openai_api_key_here
GAMMA_API_KEY=your_gamma_api_key_here  # Optional
```

Quick start:

```bash
# Test the connection
python -m src.main test

# Voice of Customer analysis
python -m src.main voice --month 5 --year 2024 --tier1-countries "US,Brazil,Canada,Mexico,France,UK,Germany,Spain,South Korea,Japan,Australia" --generate-gamma

# General purpose trend analysis
python -m src.main trends --start-date 2024-05-01 --end-date 2024-05-31 --focus-areas "billing,product,escalations" --generate-gamma

# Custom prompt analysis
python -m src.main custom --prompt-file custom_prompts/feature_launch_analysis.txt --start-date 2024-05-15 --end-date 2024-05-31 --generate-gamma
```

Voice of Customer examples:

```bash
# Monthly executive report for May 2024
python -m src.main voice --month 5 --year 2024 --generate-gamma

# With custom tier 1 countries
python -m src.main voice --month 5 --year 2024 --tier1-countries "US,Canada,UK,Germany" --generate-gamma
```

Trend analysis examples:

```bash
# Last 30 days with billing focus
python -m src.main trends --start-date 2024-04-01 --end-date 2024-04-30 --focus-areas "billing,payment,subscription"

# Custom date range with multiple focus areas
python -m src.main trends --start-date 2024-03-01 --end-date 2024-03-31 --focus-areas "product,technical,escalations" --custom-prompt "Focus on customer satisfaction trends"

# Generate Gamma presentation
python -m src.main trends --start-date 2024-05-01 --end-date 2024-05-31 --generate-gamma --output-format gamma
```

Custom analysis example:

```bash
# Create custom prompt file
echo "Analyze customer feedback trends and identify opportunities for product improvement" > custom_prompts/product_feedback.txt

# Run custom analysis
python -m src.main custom --prompt-file custom_prompts/product_feedback.txt --start-date 2024-05-01 --end-date 2024-05-31
```

Project structure:

```
intercom-analyzer/
├── src/
│   ├── config/
│   │   ├── settings.py                  # Pydantic settings
│   │   ├── prompts.py                   # Prompt templates for both modes
│   │   └── metrics_config.py            # Metric definitions and calculations
│   ├── services/
│   │   ├── intercom_service.py          # Intercom API integration
│   │   ├── metrics_calculator.py        # Business metrics calculations
│   │   ├── openai_client.py             # OpenAI GPT-4o integration
│   │   └── gamma_client.py              # Gamma presentation generation
│   ├── models/
│   │   └── analysis_models.py           # Pydantic data models
│   ├── analyzers/
│   │   ├── base_analyzer.py             # Base analysis class
│   │   ├── voice_analyzer.py            # Voice of customer analysis
│   │   └── trend_analyzer.py            # General purpose trend analysis
│   ├── agents/
│   │   ├── base_agent.py                # Base agent class
│   │   ├── fin_performance_agent.py     # Fin AI performance analysis
│   │   └── subtopic_detection_agent.py  # Sub-topic detection
│   ├── utils/
│   │   └── logger.py                    # Logging utility
│   └── main.py                          # CLI application entry point
├── requirements.txt
├── .env.example
├── README.md
└── pyproject.toml
```
The `TopicOrchestrator` coordinates a 7-phase analysis pipeline:
- Separates paid vs free tier conversations
- Identifies Fin AI-participated conversations
- Segments by customer type and language
- Detects Tier 1 topics from Intercom data
- Calculates topic distribution and volume
- Maps conversations to primary topics
- Extracts Tier 2 sub-topics from Intercom structured data (tags, custom attributes, conversation topics)
- Discovers Tier 3 emerging themes using LLM semantic analysis
- Calculates percentage breakdowns for sub-topics within each Tier 1 category
- Provides granular context for downstream Fin performance analysis and output formatting
- Runs after Topic Detection but before Per-Topic Analysis
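The percentage breakdowns described above reduce to per-topic frequency counts; a minimal sketch (the function name is illustrative, not the agent's API):

```python
from collections import Counter
from typing import Dict, List, Tuple

def subtopic_breakdown(
    assignments: List[Tuple[str, str]],
) -> Dict[str, Dict[str, float]]:
    """Percentage share of each Tier 2 sub-topic within its Tier 1 topic.

    `assignments` holds one (tier1_topic, tier2_subtopic) pair per conversation.
    """
    per_topic: Dict[str, Counter] = {}
    for topic, sub in assignments:
        per_topic.setdefault(topic, Counter())[sub] += 1
    return {
        topic: {
            sub: round(100 * n / sum(counts.values()), 1)
            for sub, n in counts.items()
        }
        for topic, counts in per_topic.items()
    }

pairs = [("Billing", "refunds"), ("Billing", "refunds"),
         ("Billing", "invoices"), ("Product", "exports")]
print(subtopic_breakdown(pairs))
# {'Billing': {'refunds': 66.7, 'invoices': 33.3}, 'Product': {'exports': 100.0}}
```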
- Analyzes sentiment for each topic (parallel execution)
- Extracts representative examples
- Generates topic-specific insights
- Sub-topic performance breakdown (when SubTopicDetectionAgent is enabled)
- Data-rooted quality metrics: resolution rate, knowledge gap rate, escalation rate, average conversation rating
- Tier 2 sub-topics from Intercom data (tags, custom attributes, topics)
- Tier 3 emerging themes from LLM analysis
- Separate analysis for free vs paid tiers
- Identifies week-over-week trends
- Highlights significant changes in volume, sentiment, and topics
- Provides trend interpretations
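Week-over-week change detection can be sketched as a simple percentage-delta check; the threshold and output format below are illustrative, not the tool's actual logic:

```python
from typing import Dict

def week_over_week_delta(
    current: Dict[str, int],
    prior: Dict[str, int],
    threshold_pct: float = 20.0,
) -> Dict[str, str]:
    """Flag topics whose conversation volume moved more than threshold_pct
    week over week. Topics absent from the prior week are marked 'new'."""
    changes: Dict[str, str] = {}
    for topic, count in current.items():
        prev = prior.get(topic)
        if not prev:
            changes[topic] = "new"
            continue
        pct = 100 * (count - prev) / prev
        if abs(pct) >= threshold_pct:
            changes[topic] = f"{pct:+.0f}%"
    return changes

this_week = {"billing": 130, "exports": 40, "sso": 12}
last_week = {"billing": 100, "exports": 42}
print(week_over_week_delta(this_week, last_week))
# {'billing': '+30%', 'sso': 'new'}  (exports moved only -5%, below threshold)
```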
The OutputFormatterAgent formats all analysis results into Hilary's exact card structure for Gamma presentations:
- 3-tier sub-topic hierarchies within topic cards
- Tier 2 sub-topics from Intercom data (tags, custom attributes, topics)
- Tier 3 AI-discovered themes from LLM analysis
- Sub-topic performance metrics in Fin cards (resolution rate, knowledge gaps, escalation rate, average rating)
- Graceful backward compatibility - handles absence of sub-topic data seamlessly
The orchestrator tracks comprehensive metrics across all phases:
- Phase timings: Execution time for each phase including sub-topic detection
- LLM usage: Total token counts and API calls
- Sub-topic statistics: Tier 2 and Tier 3 counts per topic
- Agent performance: Success rates and confidence scores
- Quality indicators: Example counts, topic coverage, error rates
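The phase-timing and token-tracking idea can be sketched with a context manager; the class and field names here are illustrative, not the orchestrator's actual API:

```python
import time
from contextlib import contextmanager
from typing import Dict

class PipelineMetrics:
    """Collects per-phase timings and LLM token counts, in the spirit of
    the orchestrator's cross-phase tracking."""
    def __init__(self):
        self.phase_seconds: Dict[str, float] = {}
        self.total_tokens = 0

    @contextmanager
    def phase(self, name: str):
        # Time the enclosed block, even if it raises
        start = time.perf_counter()
        try:
            yield
        finally:
            self.phase_seconds[name] = time.perf_counter() - start

    def record_llm_usage(self, tokens: int):
        self.total_tokens += tokens

metrics = PipelineMetrics()
with metrics.phase("subtopic_detection"):
    metrics.record_llm_usage(1350)  # e.g. one LLM call during this phase
print(sorted(metrics.phase_seconds), metrics.total_tokens)
# ['subtopic_detection'] 1350
```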
```bash
# Required
INTERCOM_ACCESS_TOKEN=your_token_here
OPENAI_API_KEY=your_openai_key_here

# Optional
GAMMA_API_KEY=your_gamma_key_here
DEFAULT_TIER1_COUNTRIES=US,Brazil,Canada,Mexico,France,UK,Germany,Spain,South Korea,Japan,Australia
OUTPUT_DIRECTORY=outputs
LOG_LEVEL=INFO
```

Edit `src/config/metrics_config.py` to customize:
- Metric definitions and calculations
- Business KPI configurations
- Analysis parameters
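One plausible shape for a metric definition is shown below; it is purely illustrative, and the real `metrics_config.py` may structure definitions differently:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass(frozen=True)
class MetricDefinition:
    """Illustrative shape for a configurable metric: a name, a display
    unit, and a pure function that computes the value."""
    name: str
    unit: str
    compute: Callable[[List[float]], float]

# Hypothetical example: AI resolution rate over 0/1 resolution flags
ai_resolution_rate = MetricDefinition(
    name="ai_resolution_rate",
    unit="%",
    compute=lambda flags: 100 * sum(flags) / len(flags) if flags else 0.0,
)

# 3 of 4 conversations resolved by AI
print(ai_resolution_rate.compute([1, 1, 1, 0]))  # -> 75.0
```

Keeping each metric as a named, pure function makes the calculator easy to test in isolation.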
- Executive Summary with key metrics and performance assessment
- Tier 1 Country Analysis with specific regional insights
- Month-over-Month Comparisons for trend identification
- Detailed Breakdowns by topic, channel, and satisfaction
- Customer Quotes with context and significance
- Actionable Recommendations for support optimization
- Volume Trends over time with peak analysis
- Response Time Trends with efficiency metrics
- Satisfaction Trends with sentiment analysis
- Topic Trends with keyword frequency analysis
- Custom Insights based on focus areas
- Trend Explanations with business implications
- Markdown Reports (`*.md`): human-readable analysis
- JSON Data (`*.json`): structured data for further processing
- Gamma Presentations: professional presentations with images and formatting
The tool automatically generates professional Gamma presentations with:
- Executive-ready formatting matching business standards
- Unsplash images for visual appeal
- Proper markdown structure optimized for Gamma
- Interactive elements and professional styling
The tool provides two complementary web interfaces:
- Purpose: Run new analyses and configure parameters
- Port: 3000 (or the `PORT` env var in production)
- Features:
- Interactive form for selecting analysis types
- Real-time streaming of analysis output
- AI model selection (GPT-4o or Claude)
- Test mode and sample mode options
- File downloads and Gamma presentation access
- Access:
  - Local: `python deploy/railway_web.py` → http://localhost:3000
  - Production: your main Railway deployment URL
- Navigation: Click the "View Historical Analysis" button to access the historical timeline
- Purpose: View and compare past analysis snapshots
- Port: 8000 (or the `PORT` env var)
- Features:
- Timeline view of weekly/monthly/quarterly snapshots
- Trend charts and volume comparisons
- Review management (mark snapshots as reviewed)
- Side-by-side period comparisons
- Access:
  - Local: `python railway_web.py` → http://localhost:8000
  - Production: set the `HISTORICAL_UI_URL` env var in the main UI
- Navigation: Click the "Back to Main UI" button to return to the analysis interface
For Local Development:
- Run both servers simultaneously on different ports
- Main UI automatically links to http://localhost:8000 for historical view
For Production (Railway):
- Deploy main analysis UI as primary service
- Deploy historical timeline UI as a separate service (optional)
- Set the `HISTORICAL_UI_URL` environment variable in the main UI to point to the timeline service
- Example: `HISTORICAL_UI_URL=https://your-historical-ui.up.railway.app`
The Historical Timeline UI provides a visual interface for exploring historical Voice of Customer analysis snapshots.
- Timeline View: Browse weekly, monthly, and quarterly analysis snapshots
- Visual Indicators:
  - Reviewed snapshots (green border)
  - Current period (orange highlight)
  - Future periods (dashed border)
- Review Management: Mark snapshots as reviewed with notes
- Trend Visualization: Chart.js charts show topic volume trends (when ≥4 weeks of data are available)
- Comparison View: Side-by-side comparison of any two periods
- Snapshot Details: View full analysis reports for any period
Local Development:

```bash
python railway_web.py
# Visit http://localhost:8000
```

Railway Deployment:

```bash
# Deployed automatically to Railway
# Visit your Railway app URL
```

Public Endpoints (No Auth Required):

- `GET /`: Timeline UI
- `GET /api/snapshots/list`: List all snapshots
- `GET /api/snapshots/{id}`: Get single snapshot
- `GET /api/snapshots/timeseries`: Get time-series data for charts
- `GET /analysis/history`: Timeline UI (same as root)
- `GET /analysis/view/{id}`: Snapshot detail view
- `GET /analysis/compare/{current}/{prior}`: Comparison view
- `GET /health`: Health check
Protected Endpoints (Require Auth Token):
- `POST /api/snapshots/{id}/review`: Mark snapshot as reviewed
  - Header: `Authorization: Bearer <token>`
  - Body: `{"reviewed_by": "user@example.com", "notes": "Optional notes"}`
Set the EXECUTION_API_TOKEN environment variable to enable authentication for review endpoints:
```bash
export EXECUTION_API_TOKEN="your-secret-token"
```

If not set, the app runs in development mode (no auth required).
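The token check behaves roughly like this sketch (illustrative only, not the app's actual code):

```python
import os
from typing import Optional

def is_authorized(auth_header: Optional[str]) -> bool:
    """Bearer-token check mirroring the review endpoint's behavior:
    if EXECUTION_API_TOKEN is unset, run open (development mode)."""
    expected = os.environ.get("EXECUTION_API_TOKEN")
    if not expected:
        return True  # development mode: no auth required
    return auth_header == f"Bearer {expected}"

os.environ["EXECUTION_API_TOKEN"] = "your-secret-token"
print(is_authorized("Bearer your-secret-token"))  # True
print(is_authorized("Bearer wrong-token"))        # False
```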
Snapshots are automatically saved to DuckDB after each VoC analysis. The database includes:
- Analysis snapshots (weekly/monthly/quarterly)
- Comparative analyses (week-over-week deltas)
- Metrics time-series (for trend charts)
The UI displays available capabilities based on data history:
- Week 1: Basic snapshot viewing
- Week 2+: Week-over-week comparison
- Week 4+: Trend analysis and forecasting
- Week 12+: Seasonality detection
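The capability gating above is a simple thresholding of data history; an illustrative sketch:

```python
from typing import List

def available_capabilities(weeks_of_data: int) -> List[str]:
    """Map weeks of stored snapshots to UI capabilities, per the
    thresholds listed above (capability names are illustrative)."""
    caps = ["snapshot_viewing"]
    if weeks_of_data >= 2:
        caps.append("week_over_week_comparison")
    if weeks_of_data >= 4:
        caps.append("trend_analysis")
    if weeks_of_data >= 12:
        caps.append("seasonality_detection")
    return caps

print(available_capabilities(5))
# ['snapshot_viewing', 'week_over_week_comparison', 'trend_analysis']
```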
- Complete data fetching with pagination (no 150-conversation limit)
- Rate limiting and error handling
- Flexible querying by date range, text search, and filters
- Real-time progress tracking for large datasets
- GPT-4o powered insights for sophisticated analysis
- Customizable prompts for different analysis types
- Sentiment analysis and trend identification
- Executive-friendly summaries and recommendations
- Professional presentation generation
- Template customization and styling options
- Image integration and visual enhancements
- Export options for different formats
The tool includes verification scripts to help operators validate date calculations, API filters, and conversation counts:
Prints Pacific and UTC timestamps with expected API filter windows to verify date boundary inclusion.
```bash
# Verify a single date
python scripts/verify_date_calculation.py --date 2024-05-15

# Verify a date range
python scripts/verify_date_calculation.py --start-date 2024-05-01 --end-date 2024-05-31

# Verify current date
python scripts/verify_date_calculation.py --date today
```

Output includes:
- Pacific and UTC timestamp conversions
- Unix timestamp values for API filters
- Expected boundary inclusion behavior
- Example conversation timestamps and their inclusion status
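The Pacific-to-UTC boundary math can be reproduced with `zoneinfo`; this sketch assumes an inclusive 00:00:00–23:59:59 Pacific window, which may differ from the script's exact convention:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

PACIFIC = ZoneInfo("America/Los_Angeles")

def api_filter_window(date_str: str):
    """Unix-timestamp window covering one Pacific calendar day,
    the kind of boundary the verification script prints."""
    day = datetime.strptime(date_str, "%Y-%m-%d")
    start = day.replace(tzinfo=PACIFIC)
    end = start.replace(hour=23, minute=59, second=59)
    return int(start.timestamp()), int(end.timestamp())

start_ts, end_ts = api_filter_window("2024-05-15")
print(datetime.fromtimestamp(start_ts, tz=timezone.utc))
# 2024-05-15 07:00:00+00:00  (PDT is UTC-7 in May)
```

Doing the conversion in the IANA zone rather than with a fixed offset keeps the window correct across DST transitions.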
Calls `fetch_conversations_by_date_range()` for tight windows and validates that the returned conversations match the requested date range.
```bash
# Test a single date
python scripts/test_api_date_filter.py --date 2024-05-15

# Test a tight date range (3 days)
python scripts/test_api_date_filter.py --start-date 2024-05-01 --end-date 2024-05-03

# Test with conversation limit (faster testing)
python scripts/test_api_date_filter.py --start-date 2024-05-15 --end-date 2024-05-15 --max 100
```

Output includes:
- Requested vs actual date ranges in results
- Boundary validation (earliest/latest conversations)
- Detection of conversations outside requested range
- Sample timestamps from fetched data
- Distribution by day
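The out-of-range detection reduces to a timestamp bounds check over the fetched results; an illustrative sketch:

```python
from typing import Dict, List

def out_of_range(
    conversations: List[Dict], start_ts: int, end_ts: int
) -> List[str]:
    """Return ids of conversations created outside [start_ts, end_ts],
    the kind of check the filter test performs on fetched results."""
    return [
        c["id"]
        for c in conversations
        if not (start_ts <= c["created_at"] <= end_ts)
    ]

convs = [{"id": "a", "created_at": 1_715_800_000},
         {"id": "b", "created_at": 1_715_000_000}]
print(out_of_range(convs, 1_715_700_000, 1_715_900_000))  # ['b']
```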
Estimates counts over date ranges using `get_conversation_count()` and compares them with chunked fetching to verify consistency.
```bash
# Diagnose last 7 days
python scripts/diagnose_conversation_count.py --days 7

# Diagnose last 30 days (skip fetch for speed)
python scripts/diagnose_conversation_count.py --days 30 --skip-fetch

# Diagnose specific date range
python scripts/diagnose_conversation_count.py --start-date 2024-05-01 --end-date 2024-05-31

# Diagnose with fetch limit (faster)
python scripts/diagnose_conversation_count.py --days 7 --max 500
```

Output includes:
- API count vs fetched count comparison
- Discrepancy percentage and analysis
- Possible explanations for differences
- Date range validation in fetched data
- Daily distribution with deviation from average
Use these commands for routine verification:
```bash
# Quick check: Verify today's date calculation
python scripts/verify_date_calculation.py --date today

# Quick check: Test API filter for today (limited fetch)
python scripts/test_api_date_filter.py --date today --max 50

# Quick check: Compare counts for last 7 days (skip full fetch)
python scripts/diagnose_conversation_count.py --days 7 --skip-fetch
```

Create custom analysis prompts in the `custom_prompts/` directory:
```text
# custom_prompts/feature_launch_analysis.txt
Analyze customer feedback related to the new feature launch:
- Identify common issues and pain points
- Measure adoption and usage patterns
- Provide recommendations for improvement
- Include specific customer quotes and examples
```

```bash
# Analyze multiple months
for month in {1..6}; do
  python -m src.main voice --month $month --year 2024 --generate-gamma
done
```

Set up cron jobs for automated monthly reports:

```bash
# Monthly Voice of Customer report
0 9 1 * * cd /path/to/intercom-analyzer && python -m src.main voice --month $(date +%m) --year $(date +%Y) --generate-gamma
```

Run the test suite:

```bash
python -m pytest tests/
```

```bash
# Format code
black src/
isort src/

# Type checking
mypy src/
```

To add a new metric:

- Define the metric in `src/config/metrics_config.py`
- Implement the calculation in `src/services/metrics_calculator.py`
- Add it to the analysis models in `src/models/analysis_models.py`
- Update the prompt templates in `src/config/prompts.py`
- Python 3.9+
- Intercom API access with conversation read permissions
- OpenAI API key for GPT-4o access
- Gamma API key (optional, for presentation generation)
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests
- Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.
For issues or questions:
- Check the troubleshooting section in the README
- Review logs in `outputs/intercom_analysis.log`
- Test with small datasets first
- Verify API keys and permissions
Transform your Intercom data into actionable insights with professional Gamma presentations!