wyre-technology/autotask-data-sink

Autotask Data Sink

A robust data synchronization service that extracts data from Kaseya Autotask's REST API and stores it in PostgreSQL for advanced analytics and monitoring.

Features

  • 🔄 Continuous Sync: Automated data synchronization with configurable intervals
  • 📊 Analytics Ready: Pre-built analytics endpoints for Grafana integration
  • 🏢 Company Health Monitoring: Track contract profitability and customer metrics
  • 🎯 Smart Discovery: Automatic API analysis and sync optimization
  • 📈 Real-time Monitoring: Health checks, sync status, and performance metrics
  • 🔍 Historical Tracking: Complete change history with audit trails
  • ⚡ High Performance: Connection pooling, rate limiting, and incremental sync

Architecture

Autotask API → Sync Engine → PostgreSQL → Analytics API → Grafana

Key Components

  • Autotask API Client: Wrapper around autotask-node with rate limiting and pagination
  • Sync Engine: Cron-based scheduling with incremental updates
  • Database Layer: PostgreSQL with historical tracking and materialized views
  • Web API: Monitoring endpoints and analytics for Grafana
  • Discovery Service: API analysis and sync recommendations
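As a rough sketch of how the client wrapper's rate limiting and pagination fit together (the names and shapes below are illustrative, not the actual autotask-node API):

```typescript
// Illustrative sketch only — a minimal rate limiter plus a page-draining
// helper, assuming a hypothetical page-fetching callback.

type Page<T> = { items: T[]; nextPage: number | null };

class RateLimiter {
  private last = 0;
  constructor(private minDelayMs: number) {}

  // Resolve once at least minDelayMs has passed since the previous call.
  async wait(): Promise<void> {
    const elapsed = Date.now() - this.last;
    if (elapsed < this.minDelayMs) {
      await new Promise((r) => setTimeout(r, this.minDelayMs - elapsed));
    }
    this.last = Date.now();
  }
}

// Drain every page of an endpoint while respecting the rate limit.
async function fetchAll<T>(
  fetchPage: (page: number) => Promise<Page<T>>,
  limiter: RateLimiter,
): Promise<T[]> {
  const all: T[] = [];
  let page: number | null = 0;
  while (page !== null) {
    await limiter.wait();
    const res = await fetchPage(page);
    all.push(...res.items);
    page = res.nextPage;
  }
  return all;
}
```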

Quick Start

Prerequisites

  • Node.js 18+
  • PostgreSQL 14+
  • Autotask API credentials (username, integration code, secret)

Installation

  1. Clone the repository

    git clone <repository-url>
    cd autotask-data-sink
  2. Install dependencies

    npm install
  3. Configure environment

    cp env.example .env
    # Edit .env with your configuration
  4. Set up database

    # Create PostgreSQL database
    createdb autotask_data_sink
    
    # Run migrations
    npm run db:migrate
  5. Start the service

    npm run dev

Configuration

Environment Variables

| Variable | Description | Required | Default |
|---|---|---|---|
| AUTOTASK_USER | Autotask username | Yes | - |
| AUTOTASK_INTEGRATION_CODE | Integration code | Yes | - |
| AUTOTASK_SECRET | API secret | Yes | - |
| AUTOTASK_API_URL | API endpoint URL | No | Auto-detected |
| DB_HOST | PostgreSQL host | No | localhost |
| DB_PORT | PostgreSQL port | No | 5432 |
| DB_NAME | Database name | No | autotask_data_sink |
| DB_USER | Database user | No | postgres |
| DB_PASSWORD | Database password | Yes | - |
| API_KEY | API key for endpoint auth | Yes | - |
| WEB_PORT | Web service port | No | 3000 |
| LOG_LEVEL | Logging level | No | info |

Sync Intervals

Configure sync frequencies in milliseconds:

SYNC_INTERVAL_TICKETS=300000          # 5 minutes
SYNC_INTERVAL_TIME_ENTRIES=300000     # 5 minutes  
SYNC_INTERVAL_COMPANIES=86400000      # 24 hours
SYNC_INTERVAL_CONTACTS=86400000       # 24 hours
SYNC_INTERVAL_CONTRACTS=3600000       # 1 hour
SYNC_INTERVAL_PROJECTS=3600000        # 1 hour
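A minimal sketch of how these interval variables might be read, falling back to a default when a value is missing or not a positive number (the helper name is illustrative, not the service's actual code):

```typescript
// Read a sync interval (milliseconds) from the environment, with a fallback.
function syncInterval(name: string, fallbackMs: number): number {
  const raw = process.env[name];
  const parsed = raw === undefined ? NaN : Number(raw);
  return Number.isFinite(parsed) && parsed > 0 ? parsed : fallbackMs;
}

// Example: tickets default to every 5 minutes unless overridden.
const ticketsEveryMs = syncInterval("SYNC_INTERVAL_TICKETS", 5 * 60 * 1000);
```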

Usage

Running the Application

# Development with hot reload
npm run dev

# Production build and start
npm run build
npm start

# Run tests
npm test

Database Operations

# Check migration status
npm run db:migrate -- --status

# Reset and migrate database
npm run db:migrate -- --reset-and-migrate

# Reset database only
npm run db:migrate -- --reset

Manual Sync Operations

All /api/* endpoints require authentication via X-API-Key header or Authorization: Bearer <key>.

# Trigger discovery analysis
curl -X POST -H "X-API-Key: your_key" http://localhost:3000/api/discovery/run

# Trigger manual sync for specific entity
curl -X POST -H "X-API-Key: your_key" http://localhost:3000/api/sync/trigger/ticket
curl -X POST -H "X-API-Key: your_key" http://localhost:3000/api/sync/trigger/time_entry
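The same trigger calls can be made from Node 18+ with the built-in fetch; the helper below only assembles the request (the endpoint path comes from this README, while the base URL and key are placeholders):

```typescript
// Build a manual-sync trigger request. Illustrative helper, not part of the service.
function triggerSyncRequest(baseUrl: string, apiKey: string, entityType: string) {
  return {
    url: `${baseUrl}/api/sync/trigger/${entityType}`,
    init: { method: "POST", headers: { "X-API-Key": apiKey } },
  };
}

// Usage (uncomment to actually send):
// const { url, init } = triggerSyncRequest("http://localhost:3000", "your_key", "ticket");
// await fetch(url, init);
```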

API Endpoints

Health & Monitoring

  • GET /health - Service health check (unauthenticated)
  • GET /api/sync/status - Sync job status and recent operations
  • GET /api/stats/database - Database statistics

Analytics (Grafana Integration)

  • GET /api/analytics/contract-profitability - Contract profitability metrics
  • GET /api/analytics/ticket-metrics?timeframe=7d - Ticket analytics
  • GET /api/analytics/time-metrics?timeframe=30d - Time entry analytics
  • GET /api/analytics/company-health - Company health metrics
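When wiring these endpoints into a Grafana data source, the timeframe query parameter can be attached like so (a small illustrative helper; the metric names are the path segments listed above):

```typescript
// Build an analytics endpoint URL with an optional timeframe parameter.
function analyticsUrl(baseUrl: string, metric: string, timeframe?: string): string {
  const url = new URL(`/api/analytics/${metric}`, baseUrl);
  if (timeframe) url.searchParams.set("timeframe", timeframe);
  return url.toString();
}
```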

Management

  • POST /api/sync/trigger/:entityType - Trigger manual sync
  • POST /api/discovery/run - Run API discovery analysis

Database Schema

Core Tables

  • companies - Customer/vendor information
  • contacts - Contact details linked to companies
  • resources - Autotask users (employees, contractors)
  • contracts - Service agreements with companies
  • projects - Work projects under contracts
  • tickets - Support tickets and issues
  • time_entries - Time tracking and billing records
  • configuration_items - Assets and devices

System Tables

  • sync_operations - Track sync job execution
  • sync_state - Maintain sync timestamps per entity
  • entity_history - Historical change tracking
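To illustrate how sync_state could drive an incremental pull: entities modified after the stored timestamp are fetched, and on success the timestamp advances. This is a sketch, not the service's actual code, and the Autotask field name is an assumption that varies per entity:

```typescript
// Sketch of deriving an incremental-sync filter from sync_state.
interface SyncState {
  entityType: string;
  lastSyncedAt: Date | null; // null means no prior sync → full pull
}

function incrementalFilter(
  state: SyncState,
): { op: string; field: string; value: string } | null {
  if (!state.lastSyncedAt) return null; // no prior sync: pull everything
  return {
    op: "gte",
    field: "lastActivityDateTime", // assumed field name; differs by entity
    value: state.lastSyncedAt.toISOString(),
  };
}
```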

Analytics Views

  • contract_profitability - Revenue vs. cost analysis
  • resource_utilization - Resource hours, billable percentage, projects worked
  • ticket_sla_performance - SLA compliance per company

Grafana Integration

Sample Queries

Contract Profitability

SELECT 
  contract_name,
  revenue_percentage,
  hours_percentage,
  actual_revenue
FROM contract_profitability 
WHERE revenue_percentage IS NOT NULL
ORDER BY revenue_percentage DESC

Ticket Volume Over Time

SELECT 
  date,
  total_tickets,
  open_tickets,
  avg_resolution_hours
FROM (
  SELECT 
    DATE_TRUNC('day', created_date) as date,
    COUNT(*) as total_tickets,
    COUNT(*) FILTER (WHERE status IN (1,2,3)) as open_tickets,
    AVG(EXTRACT(EPOCH FROM (COALESCE(completion_date, NOW()) - created_date)) / 3600) as avg_resolution_hours
  FROM tickets 
  WHERE created_date > NOW() - INTERVAL '30 days'
  GROUP BY DATE_TRUNC('day', created_date)
) t
ORDER BY date

Dashboard Examples

  1. Executive Dashboard

    • Contract profitability overview
    • Company health metrics
    • Revenue trends
  2. Operational Dashboard

    • Active ticket counts
    • Response time metrics
    • Resource utilization
  3. Financial Dashboard

    • Billable hours tracking
    • Revenue vs. cost analysis
    • Profit margin trends

Development

Project Structure

src/
├── config/           # Configuration management
├── database/         # Database connection and migration
├── logger/           # Logging infrastructure  
├── services/         # Core business logic
│   ├── discovery.ts  # API discovery and analysis
│   ├── sync.ts       # Data synchronization
│   └── web.ts        # Web API service
├── autotask/         # Autotask API client
└── index.ts          # Application entry point

tests/                # Test suites
plans/                # Project planning documents
prompt_logs/          # Development prompt logs

Testing Strategy

Tests are organized by functionality level:

  1. Unit Tests: Individual component testing
  2. Integration Tests: Database and API integration
  3. End-to-End Tests: Full sync workflow validation

# Run all tests
npm test

# Run specific test suites
npm test -- --grep "Database"
npm test -- --grep "Integration"

Code Quality

  • ESLint: Code linting and style enforcement
  • TypeScript: Static type checking
  • Jest: Testing framework
  • Pre-commit hooks: Automated testing before commits

Deployment

Docker Deployment

# docker-compose.yml
version: '3.8'
services:
  autotask-data-sink:
    build: .
    environment:
      - NODE_ENV=production
      - DB_HOST=postgres
    depends_on:
      - postgres
    ports:
      - "3000:3000"
      
  postgres:
    image: postgres:14
    environment:
      - POSTGRES_DB=autotask_data_sink
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
    volumes:
      - postgres_data:/var/lib/postgresql/data
      
volumes:
  postgres_data:

Production Considerations

  • Environment Variables: Use secure secret management
  • Database: Configure backup and recovery procedures
  • Monitoring: Set up alerts for sync failures and performance issues
  • Logging: Configure log aggregation and rotation
  • Security: Implement network security and access controls

Performance Tuning

Database Optimization

  • Indexes: Proper indexing on foreign keys and date columns
  • Connection Pooling: Configured for optimal concurrency
  • Query Optimization: Use EXPLAIN ANALYZE for slow queries
  • Materialized Views: Pre-computed analytics for better performance

API Rate Limiting

  • Configurable delays between API calls
  • Exponential backoff on errors
  • Batch processing for large datasets
  • Pagination handling for memory efficiency
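The exponential-backoff behavior can be sketched as follows; the base delay, cap, and retry count are illustrative defaults, not the service's configured values:

```typescript
// Delay grows as base * 2^attempt, capped so retries never wait too long.
function backoffDelayMs(attempt: number, baseMs = 500, capMs = 30_000): number {
  return Math.min(capMs, baseMs * 2 ** attempt);
}

// Retry a failing async operation with exponential backoff between attempts.
async function withRetries<T>(fn: () => Promise<T>, maxRetries = 5): Promise<T> {
  let lastErr: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      if (attempt === maxRetries) break;
      await new Promise((r) => setTimeout(r, backoffDelayMs(attempt)));
    }
  }
  throw lastErr;
}
```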

Troubleshooting

Common Issues

Sync Failures

  • Check Autotask API credentials
  • Verify network connectivity
  • Review API rate limits
  • Check database connection

Performance Issues

  • Monitor database query performance
  • Check sync job overlap
  • Verify adequate system resources
  • Review log files for bottlenecks

Data Inconsistencies

  • Run discovery analysis
  • Check sync state timestamps
  • Verify field mappings
  • Review error logs

Logging

Logs are structured and categorized by component:

  • Application: General application events
  • Database: Database operations and errors
  • Sync: Synchronization activities
  • Autotask: API interactions
  • Discovery: Discovery analysis

Contributing

  1. Follow the existing code style and patterns
  2. Add tests for new functionality
  3. Update documentation for significant changes
  4. Use descriptive commit messages

License

[License information]

Support

For issues and questions:

  • Check the troubleshooting guide
  • Review log files for error details
  • Open an issue with detailed information
  • Include configuration (sanitized) and error logs
