Timmy AI Assistant
Timmy is TMI's planned conversational AI assistant for threat model analysis. Timmy operates within the scope of a single threat model and reasons over its data -- assets, threats, diagrams, documents, repositories, and notes -- to help you understand, analyze, and improve your threat models.
Status: Timmy is under active development on the `dev/1.4.0` branch. The data model foundations and backend infrastructure are in place, but the chat API endpoints, LLM integration, and frontend chat UI are not yet implemented. See Implementation Status below for details.
Development demo video (YouTube)
Timmy is inspired by Google's NotebookLM: a "grounded" chat that reasons over specific sources rather than answering from general knowledge alone. You control which sub-entities are included in the conversation via the `timmy_enabled` flag on each sub-entity, allowing you to focus the discussion on relevant material.
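The toggle mechanism can be sketched in Go as a simple filter over sub-entities before any retrieval happens. The types and function names below are illustrative stand-ins, not TMI's actual structs:

```go
package main

import "fmt"

// SubEntity is a simplified stand-in for TMI's sub-entity types
// (assets, threats, diagrams, documents, notes, repositories).
// Field names here are illustrative, not the real TMI models.
type SubEntity struct {
	Name         string
	TimmyEnabled bool // mirrors the timmy_enabled flag
}

// groundedSources keeps only the sub-entities the user has opted
// into the conversation; these would then feed the retrieval step.
func groundedSources(entities []SubEntity) []SubEntity {
	var out []SubEntity
	for _, e := range entities {
		if e.TimmyEnabled {
			out = append(out, e)
		}
	}
	return out
}

func main() {
	entities := []SubEntity{
		{Name: "auth-service", TimmyEnabled: true},
		{Name: "legacy-notes", TimmyEnabled: false},
		{Name: "payment-flow", TimmyEnabled: true},
	}
	// Only the enabled entities are visible to the conversation.
	for _, e := range groundedSources(entities) {
		fmt.Println(e.Name)
	}
	// prints "auth-service" then "payment-flow"
}
```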
A mature threat model contains dozens of assets, threats, data flows, and supporting documents. Humans struggle to hold all of that in mind simultaneously. Timmy can synthesize across the full model and surface connections, gaps, or inconsistencies that a person might miss.
Not every team has a senior security reviewer on hand. Timmy acts as an always-available collaborator -- it cannot replace a human reviewer, but it can help teams self-serve on initial analysis, ask better questions, and arrive at a review better prepared.
Teams build threat models and then rarely revisit them conversationally. Timmy makes the model queryable: "What are the highest-risk data flows?", "Which assets lack mitigations?", "Summarize the threats related to authentication."
A new team member or reviewer joining a threat model must read through everything. Timmy can provide guided summaries and answer targeted questions, dramatically reducing ramp-up time.
You navigate to a threat model's chat page, see your sources (sub-entities) in a sidebar, toggle which ones to include, and have a conversation. You can ask Timmy to:
- Analyze threats -- identify highest-risk areas, evaluate threat severity, and assess coverage
- Identify gaps -- find assets without threats, threats without mitigations, and incomplete data flows
- Explain data flows -- summarize how data moves through the system based on DFD diagrams
- Suggest mitigations -- recommend security controls based on identified threats
- Summarize content -- provide overviews of the threat model or specific sub-entities
- Answer questions -- respond to targeted queries about any aspect of the threat model
Previous chat sessions will be preserved and can be resumed.
Implemented:

- `timmy_enabled` field on all threat model sub-entity types: diagrams, assets, threats, documents, notes, and repositories. Defaults to `true`. Also present on team notes and project notes
- Database models for chat sessions, messages, embeddings, and usage tracking (`TimmySession`, `TimmyMessage`, `TimmyEmbedding`, `TimmyUsage`)
- Database schema definitions for the four Timmy tables with proper indexes and foreign key constraints
- Server configuration (`TimmyConfig`) with settings for LLM provider/model, embedding provider/model, retrieval parameters, rate limits, memory budgets, and chunking. Timmy is disabled by default
- Content provider abstraction (`ContentProvider` interface and `ContentProviderRegistry`) for extracting plain text from source entities for embedding
- SSRF validator for safely fetching external document URLs during content extraction
- Import/export support in the frontend for the `timmy_enabled` field

Not yet implemented:

- Chat API endpoints -- no REST routes for creating sessions, sending messages, or listing history
- LLM integration -- no LangChainGo integration or provider adapters
- Vector store / embedding pipeline -- the data model exists, but no code to compute, store, or query embeddings
- Frontend chat UI -- no Angular components for the chat page, source sidebar, or session management
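The content provider abstraction mentioned above can be sketched as a small registry keyed by entity type. The interface shape and method names below are assumptions for illustration, not the actual TMI definitions:

```go
package main

import "fmt"

// ContentProvider is an illustrative sketch of the extraction
// abstraction; the real TMI interface may differ.
type ContentProvider interface {
	// EntityType names the sub-entity kind this provider handles.
	EntityType() string
	// ExtractText returns plain text suitable for embedding.
	ExtractText(raw []byte) (string, error)
}

// ContentProviderRegistry maps entity types to providers.
type ContentProviderRegistry struct {
	providers map[string]ContentProvider
}

func NewRegistry() *ContentProviderRegistry {
	return &ContentProviderRegistry{providers: map[string]ContentProvider{}}
}

func (r *ContentProviderRegistry) Register(p ContentProvider) {
	r.providers[p.EntityType()] = p
}

func (r *ContentProviderRegistry) Lookup(entityType string) (ContentProvider, bool) {
	p, ok := r.providers[entityType]
	return p, ok
}

// noteProvider is a toy provider that treats the raw bytes as text.
type noteProvider struct{}

func (noteProvider) EntityType() string { return "note" }
func (noteProvider) ExtractText(raw []byte) (string, error) {
	return string(raw), nil
}

func main() {
	reg := NewRegistry()
	reg.Register(noteProvider{})
	if p, ok := reg.Lookup("note"); ok {
		text, _ := p.ExtractText([]byte("SQL injection risk on login form"))
		fmt.Println(text)
	}
}
```

A registry like this keeps the embedding pipeline decoupled from entity-specific parsing: adding a new source type means registering one provider rather than touching the pipeline itself.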
Key decisions from the backend design discussion:
- LLM integration: Provider-agnostic via LangChainGo, allowing operators to choose their LLM provider.
- Vector store: In-memory HNSW index with database-serialized embeddings (rows-per-embedding). No separate vector database required.
- Conversation storage: Normal relational tables in the existing threat model database.
- Memory management: Explicit budget with LRU eviction and session admission control under memory pressure.
- Scope: One vector index per threat model, loaded on demand, evicted after inactivity.
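Taken together, the scope and memory-management decisions above can be sketched as follows. This is a simplified illustration under stated assumptions, not the planned implementation: a linear cosine scan stands in for the HNSW index, the memory budget is a simple index count rather than a byte budget, and all names are hypothetical:

```go
package main

import (
	"container/list"
	"fmt"
	"math"
)

// embedding is one database row (rows-per-embedding), deserialized
// into memory when a threat model's index is loaded.
type embedding struct {
	ChunkID string
	Vector  []float32
}

// index holds all embeddings for one threat model. A linear cosine
// scan stands in here for the HNSW index named in the design notes.
type index struct {
	rows []embedding
}

func cosine(a, b []float32) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += float64(a[i] * b[i])
		na += float64(a[i] * a[i])
		nb += float64(b[i] * b[i])
	}
	if na == 0 || nb == 0 {
		return 0
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// Nearest returns the chunk whose vector is most similar to q.
func (ix *index) Nearest(q []float32) string {
	best, bestScore := "", math.Inf(-1)
	for _, r := range ix.rows {
		if s := cosine(q, r.Vector); s > bestScore {
			best, bestScore = r.ChunkID, s
		}
	}
	return best
}

// cache keeps at most maxIndexes threat-model indexes in memory,
// evicting the least recently used one when over budget.
type cache struct {
	maxIndexes int
	lru        *list.List               // front = most recently used
	entries    map[string]*list.Element // threat model ID -> node
	load       func(tmID string) *index // loads rows from the database
}

type cacheEntry struct {
	tmID string
	ix   *index
}

func newCache(max int, load func(string) *index) *cache {
	return &cache{maxIndexes: max, lru: list.New(),
		entries: map[string]*list.Element{}, load: load}
}

// Get returns the index for a threat model, loading it on demand.
func (c *cache) Get(tmID string) *index {
	if el, ok := c.entries[tmID]; ok {
		c.lru.MoveToFront(el)
		return el.Value.(*cacheEntry).ix
	}
	if c.lru.Len() >= c.maxIndexes { // over budget: evict LRU
		oldest := c.lru.Back()
		delete(c.entries, oldest.Value.(*cacheEntry).tmID)
		c.lru.Remove(oldest)
	}
	ix := c.load(tmID)
	c.entries[tmID] = c.lru.PushFront(&cacheEntry{tmID: tmID, ix: ix})
	return ix
}

func main() {
	load := func(tmID string) *index {
		return &index{rows: []embedding{
			{ChunkID: tmID + "/threat-1", Vector: []float32{1, 0}},
			{ChunkID: tmID + "/asset-2", Vector: []float32{0, 1}},
		}}
	}
	c := newCache(1, load)
	ix := c.Get("tm-42")
	fmt.Println(ix.Nearest([]float32{0.9, 0.1}))
	// prints "tm-42/threat-1"
}
```

Keeping one small index per threat model lets the server load only the models in active use and reclaim memory from idle ones, which is the rationale behind the load-on-demand and LRU-eviction decisions listed above.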
- Architecture-and-Design -- System architecture and design decisions
- REST-API-Reference -- API endpoint reference
- Server backend: ericfitz/tmi#214
- Client UX: ericfitz/tmi-ux#293