Timmy AI Assistant

Eric Fitzgerald edited this page Apr 8, 2026 · 3 revisions

Timmy is TMI's planned conversational AI assistant for threat model analysis. Timmy operates within the scope of a single threat model and reasons over its data -- assets, threats, diagrams, documents, repositories, and notes -- to help you understand, analyze, and improve your threat models.

Status: Timmy is under active development on the dev/1.4.0 branch. The data model foundations and backend infrastructure are in place, but the chat API endpoints, LLM integration, and frontend chat UI are not yet implemented. See Implementation Status below for details.

Development demo video (YouTube)

Purpose

Timmy is inspired by Google's NotebookLM: a "grounded" chat that reasons over specific sources rather than answering from general knowledge alone. You control which sub-entities are included in the conversation via the timmy_enabled flag on each sub-entity, allowing you to focus the discussion on relevant material.
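Conceptually, the `timmy_enabled` flag acts as a per-entity include filter over the conversation's sources. The sketch below illustrates that gating; the `SubEntity` type and `timmySources` function are hypothetical stand-ins, not TMI's actual types.

```go
package main

import "fmt"

// SubEntity is a hypothetical stand-in for any Timmy-eligible
// sub-entity (asset, threat, diagram, document, note, repository).
type SubEntity struct {
	Name         string
	TimmyEnabled bool // corresponds to the timmy_enabled flag
}

// timmySources keeps only the sub-entities the user has opted
// into the conversation.
func timmySources(all []SubEntity) []SubEntity {
	var out []SubEntity
	for _, e := range all {
		if e.TimmyEnabled {
			out = append(out, e)
		}
	}
	return out
}

func main() {
	entities := []SubEntity{
		{Name: "auth-service", TimmyEnabled: true},
		{Name: "legacy-notes", TimmyEnabled: false},
		{Name: "payment-dfd", TimmyEnabled: true},
	}
	for _, e := range timmySources(entities) {
		fmt.Println(e.Name)
	}
}
```

Because the flag defaults to `true` on every sub-entity type, a new threat model starts with everything in scope and users narrow from there.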

Problems Timmy Solves

Threat models are dense and hard to reason about holistically

A mature threat model contains dozens of assets, threats, data flows, and supporting documents. Humans struggle to hold all of that in mind simultaneously. Timmy can synthesize across the full model and surface connections, gaps, or inconsistencies that a person might miss.

Security review is bottlenecked on expert availability

Not every team has a senior security reviewer on hand. Timmy acts as an always-available collaborator -- it cannot replace a human reviewer, but it can help teams self-serve on initial analysis, ask better questions, and arrive at a review better prepared.

Threat modeling artifacts are underutilized after creation

Teams build threat models and then rarely revisit them conversationally. Timmy makes the model queryable: "What are the highest-risk data flows?", "Which assets lack mitigations?", "Summarize the threats related to authentication."

Onboarding to an existing threat model is slow

A new team member or reviewer joining a threat model must read through everything. Timmy can provide guided summaries and answer targeted questions, dramatically reducing ramp-up time.

How Users Will Interact with Timmy

You navigate to a threat model's chat page, see your sources (sub-entities) in a sidebar, toggle which ones to include, and have a conversation. You can ask Timmy to:

  • Analyze threats -- identify highest-risk areas, evaluate threat severity, and assess coverage
  • Identify gaps -- find assets without threats, threats without mitigations, and incomplete data flows
  • Explain data flows -- summarize how data moves through the system based on DFD diagrams
  • Suggest mitigations -- recommend security controls based on identified threats
  • Summarize content -- provide overviews of the threat model or specific sub-entities
  • Answer questions -- respond to targeted queries about any aspect of the threat model

Previous chat sessions will be preserved and can be resumed.

Implementation Status

Completed (dev/1.4.0)

  • timmy_enabled field on all threat model sub-entity types: diagrams, assets, threats, documents, notes, and repositories. Defaults to true. Also present on team notes and project notes
  • Database models for chat sessions, messages, embeddings, and usage tracking (TimmySession, TimmyMessage, TimmyEmbedding, TimmyUsage)
  • Database schema definitions for the four Timmy tables with proper indexes and foreign key constraints
  • Server configuration (TimmyConfig) with settings for LLM provider/model, embedding provider/model, retrieval parameters, rate limits, memory budgets, and chunking. Timmy is disabled by default
  • Content provider abstraction (ContentProvider interface and ContentProviderRegistry) for extracting plain text from source entities for embedding
  • SSRF validator for safely fetching external document URLs during content extraction
  • Import/export support in the frontend for the timmy_enabled field
Not Yet Implemented

  • Chat API endpoints -- no REST routes for creating sessions, sending messages, or listing history
  • LLM integration -- no LangChainGo integration or provider adapters
  • Vector store / embedding pipeline -- the data model exists, but no code to compute, store, or query embeddings
  • Frontend chat UI -- no Angular components for the chat page, source sidebar, or session management
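The `ContentProvider` / `ContentProviderRegistry` abstraction mentioned above can be pictured as a type-keyed lookup from entity kind to a text extractor. This is a minimal sketch under assumed method names (`ExtractText`, `Register`, `Extract`) and a toy `noteProvider`; the real interface in TMI may differ.

```go
package main

import (
	"errors"
	"fmt"
)

// ContentProvider extracts plain text from one kind of source
// entity so it can be chunked and embedded. Method name assumed.
type ContentProvider interface {
	ExtractText(entity any) (string, error)
}

// ContentProviderRegistry maps an entity kind to its provider.
type ContentProviderRegistry struct {
	providers map[string]ContentProvider
}

func NewRegistry() *ContentProviderRegistry {
	return &ContentProviderRegistry{providers: map[string]ContentProvider{}}
}

func (r *ContentProviderRegistry) Register(kind string, p ContentProvider) {
	r.providers[kind] = p
}

func (r *ContentProviderRegistry) Extract(kind string, entity any) (string, error) {
	p, ok := r.providers[kind]
	if !ok {
		return "", errors.New("no content provider for " + kind)
	}
	return p.ExtractText(entity)
}

// noteProvider is a toy provider for a "note"-like entity.
type noteProvider struct{}

func (noteProvider) ExtractText(entity any) (string, error) {
	n, ok := entity.(map[string]string)
	if !ok {
		return "", errors.New("unexpected note shape")
	}
	return n["title"] + "\n" + n["body"], nil
}

func main() {
	reg := NewRegistry()
	reg.Register("note", noteProvider{})
	text, err := reg.Extract("note", map[string]string{
		"title": "Auth review",
		"body":  "Rotate signing keys quarterly.",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(text)
}
```

A registry like this lets the embedding pipeline stay ignorant of entity schemas: adding a new sub-entity type means registering one provider, not touching the pipeline.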

Architecture Decisions

Key decisions from the backend design discussion:

  1. LLM integration: Provider-agnostic via LangChainGo, allowing operators to choose their LLM provider.
  2. Vector store: In-memory HNSW index with database-serialized embeddings (one database row per embedding). No separate vector database required.
  3. Conversation storage: Normal relational tables in the existing threat model database.
  4. Memory management: Explicit budget with LRU eviction and session admission control under memory pressure.
  5. Scope: One vector index per threat model, loaded on demand, evicted after inactivity.
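Decisions 4 and 5 together imply a cache of per-threat-model indexes under an explicit byte budget, with least-recently-used eviction. The sketch below shows that eviction mechanic only; it omits admission control and the HNSW structure itself, and all names are illustrative rather than TMI's.

```go
package main

import (
	"container/list"
	"fmt"
)

// index stands in for one threat model's in-memory vector index.
type index struct {
	threatModelID string
	sizeBytes     int
}

// indexCache holds loaded indexes under an explicit byte budget,
// evicting the least recently used index when over budget.
type indexCache struct {
	budget  int
	used    int
	order   *list.List               // front = most recently used
	entries map[string]*list.Element // threatModelID -> element
}

func newIndexCache(budget int) *indexCache {
	return &indexCache{
		budget:  budget,
		order:   list.New(),
		entries: map[string]*list.Element{},
	}
}

// load returns the index for a threat model, loading it on demand
// and evicting LRU entries until the newcomer fits in the budget.
func (c *indexCache) load(id string, sizeBytes int) *index {
	if el, ok := c.entries[id]; ok {
		c.order.MoveToFront(el) // touch: mark most recently used
		return el.Value.(*index)
	}
	for c.used+sizeBytes > c.budget && c.order.Len() > 0 {
		back := c.order.Back()
		evicted := back.Value.(*index)
		c.used -= evicted.sizeBytes
		delete(c.entries, evicted.threatModelID)
		c.order.Remove(back)
	}
	idx := &index{threatModelID: id, sizeBytes: sizeBytes}
	c.entries[id] = c.order.PushFront(idx)
	c.used += sizeBytes
	return idx
}

func main() {
	cache := newIndexCache(100)
	cache.load("tm-a", 60)
	cache.load("tm-b", 30)
	cache.load("tm-a", 60) // touch tm-a: now most recent
	cache.load("tm-c", 40) // over budget: evicts tm-b (LRU)
	_, aLive := cache.entries["tm-a"]
	_, bLive := cache.entries["tm-b"]
	fmt.Println(aLive, bLive) // prints "true false"
}
```

The same structure also supports the stated inactivity eviction: a background sweep can walk the list from the back and drop indexes idle past a threshold.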
