rhelmer/attribution-mcp

Analytics Report Generator

An automated analytics reporting tool that leverages the Model Context Protocol (MCP) with Ollama, Cloudflare Workers, or Google Gemini to generate intelligent reports from Umami analytics data.

Blog post: https://www.rhelmer.org/blog/ai-powered-analytics-reports-using-mcp/

Overview

This project combines several powerful tools to create automated analytics reports:

  • MCP Client: Direct use of the Model Context Protocol for orchestrating AI interactions
  • Ollama/Cloudflare Workers/Google Gemini: LLM inference backends
  • Umami MCP Server: Connects to your Umami analytics instance to fetch website data (included in this repo)
  • Automated Reporting: Generates comprehensive analytics reports using AI
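The flow these pieces form can be sketched as a simple two-step loop. This is a hypothetical outline, not the project's actual code: `fetch_stats` stands in for an MCP tool call to the Umami MCP server, and `llm_summarize` stands in for whichever LLM backend is configured.

```python
def fetch_stats(website: str) -> dict:
    # Stand-in for an MCP tool call (e.g. fetching website stats
    # from the bundled Umami MCP server).
    return {"website": website, "pageviews": 1234, "visitors": 567}

def llm_summarize(prompt: str) -> str:
    # Stand-in for the LLM inference backend (Ollama, Cloudflare, or Gemini).
    return f"Report based on: {prompt}"

def generate_report(website: str) -> str:
    stats = fetch_stats(website)  # 1. fetch analytics data via MCP
    prompt = (f"Summarize analytics for {stats['website']}: "
              f"{stats['pageviews']} pageviews, {stats['visitors']} visitors.")
    return llm_summarize(prompt)  # 2. ask the LLM to write the report

print(generate_report("example.com"))
```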

Features

  • 🤖 AI-Powered Analysis: Uses large language models to analyze website analytics data
  • 📊 Comprehensive Reports: Generates detailed insights from your Umami analytics
  • 🔄 Flexible Backends: Choose local Ollama, Cloudflare Workers, or Google Gemini
  • 💬 Interactive Mode: Chat interface for exploring your analytics data
  • 🚀 Easy Setup: Simple installation and configuration process
  • 🔒 Zero-dependency MCP Server: The included Umami MCP server uses only the Python standard library
  • 📈 UTM Tracking: Analyze campaign performance with UTM parameter support

Prerequisites

  • Python 3.8+
  • Access to an Umami analytics instance (cloud or self-hosted)
  • An AI provider:
    • Ollama installed locally with a Llama model, OR
    • A Cloudflare Workers account with AI access, OR
    • A Google AI Studio API key for Gemini

Installation

  1. Clone the repository

    git clone <your-repo-url>
    cd <your-repo-name>
  2. Install uv

    pip install uv

    Project dependencies are installed automatically at run time via uv run --with-requirements requirements.txt.
  3. Set up environment variables

    cp .env.example .env

    Edit .env with your configuration:

    # Umami Configuration (Self-hosted with Team)
    UMAMI_URL=https://your-umami-instance.com
    UMAMI_USERNAME=your_username
    UMAMI_PASSWORD=your_password
    UMAMI_TEAM_ID=your-team-id
    
    # Cloudflare AI Configuration
    CLOUDFLARE_ACCOUNT_ID=your-cloudflare-account-id
    CLOUDFLARE_API_TOKEN=your-cloudflare-api-token
    
    # Gemini API Configuration
    GEMINI_API_KEY=your-gemini-api-key
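    Before a run, it can help to confirm the variables required by your chosen AI provider are actually set. This is a hypothetical helper (not part of run.py); the variable names follow the .env template above.

```python
import os

# Variables each provider needs, per the .env template.
REQUIRED = {
    "cloudflare": ["CLOUDFLARE_ACCOUNT_ID", "CLOUDFLARE_API_TOKEN"],
    "gemini-cli": ["GEMINI_API_KEY"],
    "ollama": [],  # local Ollama needs no credentials
}

def missing_vars(provider: str, env=os.environ) -> list:
    """Return the required variables that are unset or empty."""
    return [name for name in REQUIRED[provider] if not env.get(name)]

print(missing_vars("gemini-cli", env={}))  # → ['GEMINI_API_KEY']
```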

Configuration

Umami Setup

This project includes a zero-dependency Umami MCP server in the umami_mcp_server/ directory.

For Umami Cloud:

  1. Go to Settings → API Keys in your Umami Cloud dashboard
  2. Create an API key
  3. Add to your .env file:
    UMAMI_URL=https://api.umami.is
    UMAMI_API_KEY=your_api_key_here

For Self-hosted Umami:

  1. Use your login credentials
  2. Add to your .env file:
    UMAMI_URL=https://your-umami-instance.com
    UMAMI_USERNAME=admin
    UMAMI_PASSWORD=your_password
    UMAMI_TEAM_ID=your-team-id  # Optional: for team-based access

The server auto-detects which mode to use based on which environment variables are set.
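The detection logic amounts to a simple precedence check. Sketched below as an assumption about the behavior described above (the actual code lives in umami_mcp_server/): an API key selects cloud mode, and username/password selects self-hosted mode.

```python
def detect_mode(env: dict) -> str:
    """Pick the Umami access mode from the configured environment."""
    if env.get("UMAMI_API_KEY"):
        return "cloud"          # Umami Cloud API key takes precedence
    if env.get("UMAMI_USERNAME") and env.get("UMAMI_PASSWORD"):
        return "self-hosted"    # credential-based login
    raise RuntimeError("Set UMAMI_API_KEY or UMAMI_USERNAME/UMAMI_PASSWORD")

print(detect_mode({"UMAMI_API_KEY": "k"}))  # → cloud
```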

AI Backend Setup

Option 1: Ollama (Local)

# Install Ollama
## Linux
curl -fsSL https://ollama.ai/install.sh | sh
## macOS
brew install ollama

# Start Ollama service
ollama serve

# Pull Llama model
ollama pull llama3.2
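Once the model is pulled, prompts are served over Ollama's local REST API. The sketch below builds (but does not send) a request for Ollama's /api/generate endpoint, as one way a client might talk to it; the helper name is hypothetical.

```python
import json

def build_ollama_request(prompt: str, model: str = "llama3.2"):
    """Build the URL and JSON body for a non-streaming generate call."""
    url = "http://localhost:11434/api/generate"  # Ollama's default local port
    payload = {"model": model, "prompt": prompt, "stream": False}
    return url, json.dumps(payload)

url, body = build_ollama_request("Summarize last week's traffic")
```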

Option 2: Cloudflare Workers

  1. Sign up for Cloudflare Workers
  2. Enable AI features in your account
  3. Get your API token and account ID
  4. Add credentials to .env file

Option 3: Google Gemini

  1. Go to Google AI Studio.
  2. Create an API key.
  3. Add the GEMINI_API_KEY to your .env file.
  4. Ensure you have Node.js and npx installed to use the gemini-cli provider. The script will fall back to the REST API if the CLI is not available.

Usage

You can specify the AI provider using the --ai-provider flag. Supported providers are cloudflare, ollama, and gemini-cli.
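A flag like this is typically declared with argparse. The snippet below is an illustrative sketch (run.py's actual option handling may differ); the flag names mirror the usage examples in this README.

```python
import argparse

parser = argparse.ArgumentParser(description="Generate Umami analytics reports")
parser.add_argument("--ai-provider",
                    choices=["cloudflare", "ollama", "gemini-cli"],
                    default="ollama")
parser.add_argument("--website", required=True)
parser.add_argument("--start-date")
parser.add_argument("--end-date")
parser.add_argument("--chat", action="store_true")  # interactive mode

# Parse a sample command line instead of sys.argv for demonstration.
args = parser.parse_args(["--website", "example.com",
                          "--ai-provider", "gemini-cli"])
print(args.ai_provider)  # → gemini-cli
```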

Interactive Chat Mode

Start an interactive session to explore your analytics data:

# Using Gemini (recommended)
uv run --with-requirements requirements.txt run.py --start-date 2025-01-01 --end-date 2025-12-31 --website example.com --chat --ai-provider gemini-cli

# Using Ollama
uv run --with-requirements requirements.txt run.py --start-date 2025-01-01 --end-date 2025-12-31 --website example.com --chat --ai-provider ollama

# Using Cloudflare
uv run --with-requirements requirements.txt run.py --start-date 2025-01-01 --end-date 2025-12-31 --website example.com --chat --ai-provider cloudflare

Example interactions:

  • "Show me a summary of last month's traffic"
  • "What are my top pages this week?"
  • "Generate a comprehensive monthly report"
  • "Compare this month's performance to last month"
  • "Which UTM campaigns drove the most traffic?"
  • "What are my top referrers from social media?"

Command Line Reports

Generate specific reports directly:

# Custom date range with Gemini
uv run --with-requirements requirements.txt run.py --start-date 2025-01-01 --end-date 2025-12-31 --website example.com --ai-provider gemini-cli

# Using Ollama
uv run --with-requirements requirements.txt run.py --start-date 2025-01-01 --end-date 2025-12-31 --website example.com --ai-provider ollama

Automated Scheduling

Set up automated report generation using cron:

# Add to crontab for weekly reports every Monday at 9 AM
0 9 * * 1 cd /path/to/project && uv run --with-requirements requirements.txt run.py --start-date 2025-01-01 --end-date 2025-12-31 --website example.com --ai-provider gemini-cli
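The cron line above passes a fixed date range; for a truly weekly report, a small wrapper script could compute a rolling window instead. A hypothetical helper returning the previous 7 full days as --start-date/--end-date values:

```python
from datetime import date, timedelta

def last_week(today: date) -> tuple:
    """Return (start, end) ISO dates covering the 7 days before today."""
    end = today - timedelta(days=1)    # yesterday: last complete day
    start = end - timedelta(days=6)    # 7 full days inclusive
    return start.isoformat(), end.isoformat()

print(last_week(date(2025, 6, 9)))  # → ('2025-06-02', '2025-06-08')
```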

Report Types

  • Traffic Summary: Page views, unique visitors, sessions, bounce rate
  • Top Content: Most popular pages and referrers
  • Geographic Analysis: Visitor locations and demographics
  • Device & Browser: Technology usage patterns
  • UTM Campaign Tracking: Performance by source, medium, campaign, content, and term
  • Performance Trends: Growth metrics and comparisons
  • Custom Insights: AI-generated observations and recommendations

Troubleshooting

Common Issues

Connection Errors

# Check Umami API connectivity
curl -X POST https://your-umami-instance.com/api/auth/login \
  -H "Content-Type: application/json" \
  -d '{"username": "your_username", "password": "your_password"}'

Ollama Issues

# Verify Ollama is running
ollama list
ollama ps

Missing Website Data

  • Ensure the user has access to the website in Umami admin
  • For team-based access, set UMAMI_TEAM_ID in your .env file
  • Check that the website domain matches exactly (e.g., example.com not www.example.com)

Development

Project Structure

├── run.py                      # Main application entry point
├── requirements.txt            # Python dependencies
├── pyproject.toml              # MCP server package config
├── umami_mcp_server/           # Zero-dependency Umami MCP server
│   ├── __init__.py
│   ├── __main__.py
│   ├── server.py               # MCP server implementation
│   ├── umami_client.py         # Umami API client
│   └── README.md               # MCP server documentation
└── .env.example                # Environment variables template

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests if applicable
  5. Submit a pull request

Dependencies

  • mcp: Model Context Protocol SDK
  • python-dotenv: Environment variable management
  • aiohttp: Async HTTP client for AI providers

Support

For issues and questions, please open an issue on this repository.
