
IT Investigation Platform

AI-assisted IT investigation platform that combines multi-channel evidence retrieval with LLM-powered reasoning to produce grounded, source-attributed investigation reports.

Version 2.0.0 | Getting Started | Documentation | API Reference

Dashboard

Features

Investigation Engine

  • Agentic investigation loop powered by LangGraph with sub-agent orchestration
  • 8 retrieval channels: Web Search, Vendor Docs, GitHub, Meeting Notes (Granola), Reddit, Stack Overflow, HackerNews, YouTube
  • Artifact analysis -- upload logs, configs, PDFs, and images for direct LLM analysis
  • Evidence weighted by source authority (artifacts > meeting notes > vendor docs > GitHub > community > web search)
  • Pipeline: agentic retrieval, meeting decomposition, relevance filtering, deep fetch enrichment, evidence reconciliation into ranked hypotheses
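The authority ordering above can be sketched as a simple weighting scheme. The channel names and weight values below are illustrative assumptions, not the platform's actual constants:

```python
# Illustrative authority weights -- the real values are internal to the platform.
CHANNEL_AUTHORITY = {
    "artifact": 1.0,
    "meeting_notes": 0.85,
    "vendor_docs": 0.75,
    "github": 0.6,
    "community": 0.5,   # Reddit, Stack Overflow, HackerNews
    "web_search": 0.4,
}

def weighted_score(relevance: float, channel: str) -> float:
    """Scale a finding's relevance by its source channel's authority."""
    return relevance * CHANNEL_AUTHORITY.get(channel, 0.4)

def rank_findings(findings: list[dict]) -> list[dict]:
    """Sort findings by authority-weighted relevance, highest first."""
    return sorted(
        findings,
        key=lambda f: weighted_score(f["relevance"], f["channel"]),
        reverse=True,
    )
```

With weights like these, a moderately relevant log artifact can outrank a highly relevant web search hit, which matches the ordering described above.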

Platform

  • Pre-investigation triage to refine scope before committing resources
  • Cross-investigation memory via pgvector for learning across cases
  • Full-text search across cases, findings, and hypotheses
  • Multi-case concurrency (up to 5 simultaneous investigations)
  • Real-time progress via SSE streaming
  • Case export and import
  • Audit trail for all investigation activity
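Real-time progress arrives over SSE, so a client only needs to split the stream on blank lines and read the `event:`/`data:` fields. A minimal parser, assuming spec-standard framing (the event names shown in the test are hypothetical):

```python
def parse_sse(stream: str) -> list[dict]:
    """Parse a raw SSE text stream into a list of {"event", "data"} records.

    Assumes each event is terminated by a blank line, per the SSE spec.
    """
    events = []
    for block in stream.split("\n\n"):
        event, data_lines = "message", []
        for line in block.splitlines():
            if line.startswith("event:"):
                event = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data_lines.append(line[len("data:"):].strip())
        if data_lines:
            events.append({"event": event, "data": "\n".join(data_lines)})
    return events
```

A real client would read the HTTP response incrementally (for example with `httpx` streaming) rather than buffering the whole body as this sketch does.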

Integrations and Tooling

  • OAuth integrations for GitHub and Granola (DCR + PKCE)
  • MCP server for use with Claude Code, Claude Desktop, or any MCP client
  • Optional local LLM support via Ollama for fast-tier tasks
  • Prompt caching and LLM cost tracking
  • LangSmith tracing support
  • CLI interface
  • Interactive API docs at http://localhost:8000/docs
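Since the API is self-documenting at `/docs`, scripting against it is straightforward. A sketch of building a case-creation request; the endpoint path and field names here are assumptions, so check the interactive docs for the actual schema:

```python
import json

# Hypothetical endpoint -- verify against http://localhost:8000/docs.
API_BASE = "http://localhost:8000"

def build_case_payload(title: str, description: str) -> str:
    """Serialize a minimal case-creation request body."""
    return json.dumps({"title": title, "description": description})

# To send it (requires the `requests` package and a running server):
#   requests.post(f"{API_BASE}/cases",
#                 data=build_case_payload("Azure AD sync failures",
#                                         "Failures began after tenant migration"),
#                 headers={"Content-Type": "application/json"})
```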

Quick Start

Prerequisites

  • Git
  • Docker (the setup script builds and runs the containers)
  • API keys for your LLM provider and any search integrations (the setup script prompts for these)

Setup

Linux / macOS / Git Bash:

git clone https://github.com/dcox79/ai-it-investigator.git
cd ai-it-investigator
chmod +x setup.sh
./setup.sh

Windows PowerShell:

git clone https://github.com/dcox79/ai-it-investigator.git
cd ai-it-investigator
.\setup.ps1

The setup script will prompt for your API keys, build the containers, and run database migrations. Once complete, open http://localhost:3000.

Usage

  1. Create a case -- describe the IT issue you are investigating (e.g., "Azure AD sync failures after tenant migration")
  2. Upload artifacts -- attach relevant logs, config files, PDFs, or screenshots for direct analysis
  3. Investigate -- the platform searches across all channels, analyzes artifacts, and reconciles evidence into ranked hypotheses
  4. Review findings -- each finding shows its relevance score, source channel, and supporting evidence
  5. Generate report -- produce a source-attributed investigation report with actionable recommendations
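Step 4's review can also be scripted against exported findings. A sketch of grouping findings by channel above a relevance cutoff (the field names are assumptions about the export format):

```python
from collections import defaultdict

def group_findings(findings: list[dict], min_relevance: float = 0.5) -> dict:
    """Group findings by source channel, keeping only those above a cutoff."""
    grouped = defaultdict(list)
    for f in findings:
        if f["relevance"] >= min_relevance:
            grouped[f["channel"]].append(f)
    # Within each channel, show the most relevant findings first.
    for items in grouped.values():
        items.sort(key=lambda f: f["relevance"], reverse=True)
    return dict(grouped)
```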

Screenshots

Investigation with Evidence Requests and Ranked Hypotheses

Case Detail -- Evidence Requests

The platform pauses investigations to request specific evidence, then resumes with enriched context.

Investigation Report with Findings

Reporting View

Completed investigation showing findings across multiple channels with relevance scores and source attribution.

Reconciliation -- Ranked Hypotheses

Reconciliation

Evidence reconciliation produces ranked hypotheses with confidence bands. Endorse, dismiss, or challenge each hypothesis.
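One plausible way to derive a confidence band from a numeric hypothesis score; the thresholds and band names here are illustrative assumptions, not the platform's actual cutoffs:

```python
def confidence_band(score: float) -> str:
    """Map a 0-1 confidence score to a coarse band (assumed thresholds)."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be in [0, 1]")
    if score >= 0.75:
        return "high"
    if score >= 0.4:
        return "medium"
    return "low"
```

Banding keeps the UI honest: users compare "high" vs "medium" hypotheses rather than over-reading a difference between, say, 0.71 and 0.68.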

Settings and Local LLM Configuration

Settings

Configure API connections, investigation defaults, and optional local LLM routing for cost savings on fast-tier tasks.
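Routing fast-tier tasks to a local Ollama model while keeping reasoning-heavy work on a cloud model can be sketched as follows; the tier names and model IDs are illustrative, since the actual routing is configured in the settings UI:

```python
# Illustrative tier-to-backend routing -- actual tiers and models are
# whatever you configure in Settings.
ROUTES = {
    "fast": ("ollama", "llama3.1:8b"),        # cheap local tasks: filtering, tagging
    "reasoning": ("cloud", "claude-sonnet"),  # reconciliation, report generation
}

def route(task_tier: str, local_enabled: bool = True) -> tuple[str, str]:
    """Pick a (backend, model) pair for a task tier, falling back to cloud."""
    backend, model = ROUTES.get(task_tier, ROUTES["reasoning"])
    if backend == "ollama" and not local_enabled:
        return ROUTES["reasoning"]
    return backend, model
```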

Documentation

Contributing

Contributions are welcome. Please see CONTRIBUTING.md for guidelines on submitting issues and pull requests.

License

This project is licensed under the MIT License. See LICENSE for details.
