This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
TMI is a Go-based service implementing the REST API and store for managing a security review process, from request (intake) through analysis and follow-up. The review process centers on a threat modeling approach, with collaborative data flow diagram creation and artifacts that can be created, read, or updated interchangeably by machines or humans. The application is designed to be easy to integrate with and extend without code modifications. The REST API is an instantiation of a protocol defined in an OpenAPI 3 specification; the specification is the source of truth. Real-time collaborative diagram editing is implemented via WebSockets; the WebSocket protocol is authoritatively defined in an AsyncAPI specification. The application features OAuth or SAML authentication with JWTs, role-based access control with roles assigned to users or groups, and persistent database stores implemented via a GORM interface.
- api-schema/tmi-openapi.json - OpenAPI specification
- api/store.go - Generic typed map storage implementation
- api/server.go - Main API server with WebSocket support
- api/websocket.go - WebSocket hub for real-time collaboration
- cmd/server/main.go - Server entry point
- Makefile - Build automation with development targets
- `api/` - API handlers, server implementation, and storage
- `auth/` - Authentication service with OAuth, JWT, and RBAC
- `cmd/` - Command-line executables (server, migrate, cats-seed)
- `internal/` - Internal packages (logging, dbschema)
- `docs/` - Legacy documentation (deprecated - see Documentation section below)
- `scripts/` - Development setup scripts
- Use the generic Store[T] implementation from api/store.go
- Each entity type has its own store instance (DiagramStore, ThreatModelStore)
- Store provides CRUD operations with proper concurrency control
- Entity fields should be properly validated before storage
- Use WithTimestamps interface for entities with created_at/modified_at fields
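The storage pattern above can be sketched as follows. This is an illustrative, self-contained sketch of a generic map-backed store with RWMutex concurrency control; the method names and the `WithTimestamps` shape are assumptions, not the actual `api/store.go` API.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// WithTimestamps is an illustrative interface for entities that
// track created_at/modified_at fields.
type WithTimestamps interface {
	SetModifiedAt(t time.Time)
}

// Store is a minimal generic map-backed store with RWMutex
// concurrency control, sketching the Store[T] pattern.
type Store[T any] struct {
	mu    sync.RWMutex
	items map[string]T
}

func NewStore[T any]() *Store[T] {
	return &Store[T]{items: make(map[string]T)}
}

// Put stores an entity, stamping modified_at when the entity
// implements WithTimestamps.
func (s *Store[T]) Put(id string, v T) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if ts, ok := any(v).(WithTimestamps); ok {
		ts.SetModifiedAt(time.Now())
	}
	s.items[id] = v
}

// Get reads an entity under a read lock.
func (s *Store[T]) Get(id string) (T, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	v, ok := s.items[id]
	return v, ok
}

func main() {
	type Diagram struct{ Name string }
	// Each entity type gets its own store instance.
	store := NewStore[*Diagram]()
	store.Put("d1", &Diagram{Name: "DFD"})
	d, ok := store.Get("d1")
	fmt.Println(ok, d.Name) // true DFD
}
```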
- Real-time collaboration via WebSocket connections at `/ws/diagrams/{id}`
- WebSocketHub manages active connections and broadcasts updates
- Only diagrams support real-time collaboration, not threat models
- Uses Gorilla WebSocket library
- Session lifecycle: Active -> Terminating -> Terminated states
- Host-based control: Only session host can manage participants
- Inactivity timeout: Configurable (default 300s, minimum 15s)
- Deny list: Session-specific tracking of removed participants
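The lifecycle and host-control rules above can be sketched as a small state machine. All type and method names here are illustrative assumptions, not the actual server types:

```go
package main

import (
	"errors"
	"fmt"
)

type SessionState int

// Session lifecycle: Active -> Terminating -> Terminated
const (
	Active SessionState = iota
	Terminating
	Terminated
)

// Session sketches the documented rules: a host, a session-specific
// deny list of removed participants, and the three-state lifecycle.
type Session struct {
	State    SessionState
	Host     string
	denyList map[string]bool
}

func NewSession(host string) *Session {
	return &Session{State: Active, Host: host, denyList: map[string]bool{}}
}

// RemoveParticipant enforces host-based control: only the session
// host may remove participants, and removals are recorded in the
// deny list so the participant cannot rejoin.
func (s *Session) RemoveParticipant(actor, target string) error {
	if actor != s.Host {
		return errors.New("only the session host can manage participants")
	}
	s.denyList[target] = true
	return nil
}

// Terminate walks the session through Terminating to Terminated.
func (s *Session) Terminate() {
	s.State = Terminating
	// ... notify participants, flush pending updates (elided) ...
	s.State = Terminated
}

func main() {
	sess := NewSession("alice")
	fmt.Println(sess.RemoveParticipant("bob", "carol") != nil) // non-host is rejected
	_ = sess.RemoveParticipant("alice", "bob")
	sess.Terminate()
	fmt.Println(sess.State == Terminated, sess.denyList["bob"])
}
```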
- PostgreSQL for persistent storage (configured via auth/ package)
- Redis for caching and session management
- Database migrations in auth/migrations/
- Dual-mode storage: in-memory for tests, database-backed for dev/prod
- Redis-backed caching with invalidation, warming, and metrics (api/cache_service.go)
- Automatic cache invalidation on resource updates
- Cache metrics tracking (hits, misses, size monitoring)
- API code generated from `api-schema/tmi-openapi.json` using oapi-codegen v2
- Uses Gin web framework (not Echo) with oapi-codegen/gin-middleware for validation
- OpenAPI validation middleware clears security schemes (auth handled by JWT middleware)
- Generated types in `api/api.go` include Gin server handlers and the embedded spec
- Config file: `oapi-codegen-config.yml` (configured for the gin-middleware package)
- Validate schema: `make validate-openapi` (jq + Vacuum with OWASP rules)
- Validation output: `api-schema/openapi-validation-report.json`
- Public endpoints: 17 endpoints (OAuth, OIDC, SAML) marked with the `x-public-endpoint` vendor extension - intentionally unauthenticated per RFCs
CRITICAL: Never use the generated `FromNode`, `MergeNode`, `FromMinimalNode`, or `MergeMinimalNode` methods in non-generated code. These methods live in `api/api.go` and are regenerated by oapi-codegen. They hardcode the shape discriminator to an arbitrary fixed value, corrupting cell shapes (e.g., all node shapes become "text-box"). This is an oapi-codegen limitation when multiple discriminator values map to the same type.
- Always use: `SafeFromNode()` and `SafeFromEdge()` from `api/cell_union_helpers.go`
- Lint check: `make check-unsafe-union-methods` (also runs as part of `make lint`)
- Safe methods: `FromEdge`/`MergeEdge` (only one edge shape, "flow") and `FromDfdDiagram` (only one diagram type, "DFD-1.0.0") are safe to use directly
- Affected union types: `DfdDiagram_Cells_Item`, `DfdDiagramInput_Cells_Item`, `MinimalCell`
The system uses a single-router architecture with OpenAPI-driven routing:
- Single Router Architecture: All HTTP requests flow through the OpenAPI specification
- Request Tracing: Comprehensive module-tagged debug logging for all requests
- Authentication Flow:
- JWT middleware validates tokens and sets user context
- ThreatModelMiddleware and DiagramMiddleware handle resource-specific authorization
- Auth handlers integrate cleanly with OpenAPI endpoints
- No Route Conflicts: Single source of truth for all routing eliminates duplicate route registration panics
HTTP Request -> OpenAPI Route Registration -> ServerInterface Implementation ->
JWT Middleware -> Auth Context -> Resource Middleware -> Endpoint Handlers
Key Components:
- `api/server.go`: Main OpenAPI server with single router
- `api/*_middleware.go`: Resource-specific authorization middleware
- `auth/handlers.go`: Authentication endpoints integrated via auth service adapter
- `api/request_tracing.go`: Module-tagged request logging for debugging
- List targets: `make list-targets` (lists all available make targets)
- Build: `make build-server` (creates bin/tmiserver executable)
- Lint: `make lint` (runs golangci-lint)
- Generate API: `make generate-api` (uses oapi-codegen with config from oapi-codegen-config.yml)
- Development: `make start-dev` (starts full dev environment with DB and Redis on localhost)
- Clean all: `make clean-everything` (comprehensive cleanup of processes, containers, and files)
- Health check: use `curl http://localhost:8080/` (root endpoint) to verify the server is running or check the running version. There is no /health endpoint.
- Validate AsyncAPI: `make validate-asyncapi`
Container builds use Python scripts (scripts/build-app-containers.py, scripts/build-db-containers.py) wrapped by Makefile targets. Supports local Docker, OCI, AWS, Azure, GCP, and Heroku targets.
- Build individual containers: `make build-server-container` (TMI server container only), `make build-redis-container` (Redis container only), `make build-db` (PostgreSQL container only)
- Build all containers: `make build-all`
- Build with scanning: `make build-all-scan`
- Security scan existing images: `make scan-containers`
- Build and start dev environment: `make start-containers-environment`
- Cloud builds (build + push + scan): `make build-app-oci` (OCI Container Registry), `make build-app-aws` (AWS ECR), `make build-app-azure` (Azure ACR), `make build-app-gcp` (GCP Artifact Registry), `make build-app-heroku` (Heroku Container Registry)
- Always use `make start-database`, `make start-redis`, `make start-dev` for container operations
TMI uses Chainguard images for local/generic builds: cgr.dev/chainguard/static:latest (server), cgr.dev/chainguard/postgres:latest (DB), Chainguard Redis. Built with CGO_ENABLED=0 (~57MB total). OCI builds use Oracle Linux 9 base images with Oracle Instant Client for ADB support.
- Go app: `make generate-sbom` (cyclonedx-gomod)
- Containers: auto-generated when using the `--scan` flag on container builds (Syft)
- Output: `security-reports/sbom/` (CycloneDX 1.6 JSON + XML)
Arazzo specification (OpenAPI Initiative) documents API workflow sequences and dependencies.
- Generate: `make generate-arazzo` | Validate: `make validate-arazzo`
- Output: `api-schema/tmi.arazzo.yaml` and `api-schema/tmi.arazzo.json`
- Docs: `api-schema/arazzo-generation.md`
- Database reset: `make reset-db-heroku` - drops and recreates the Heroku database schema (DESTRUCTIVE)
  - Script location: `scripts/heroku-reset-database.sh`
  - Documentation: `docs/operator/heroku-database-reset.md`
  - WARNING: Deletes all data - requires manual "yes" confirmation
  - Use cases: schema out of sync, migration errors, clean deployment testing
  - Performs three steps: drop schema -> run migrations -> verify schema
  - Verifies critical columns (e.g., `issue_uri` in `threat_models`)
  - Post-reset: users must re-authenticate via OAuth
- Database drop: `make drop-db-heroku` - drops the Heroku database schema, leaving it empty (DESTRUCTIVE)
  - Script location: `scripts/heroku-drop-database.sh`
  - WARNING: Deletes all data and leaves the database in an empty state - requires manual "yes" confirmation
  - Use cases: manual schema control, testing the migration process from scratch, preparing for a custom schema
  - Performs one step: drop schema only (no migrations)
  - Database is left with an empty `public` schema, ready for manual schema creation or migrations
  - To restore: run `make reset-db-heroku` or restart the Heroku app to trigger auto-migrations
MANDATORY: Always use make targets for testing. Never run go test commands directly. Never disable/skip failing tests - investigate and fix root cause.
- Unit tests: `make test-unit` (fast tests, no external dependencies)
  - Specific test: `make test-unit name=TestName`
  - Options: `make test-unit count1=true passfail=true`
- Integration tests:
  - PostgreSQL: `make test-integration` or `make test-integration-pg`
  - Oracle ADB: `make test-integration-oci` (requires `scripts/oci-env.sh`)
- Coverage: `make test-coverage` (generates combined coverage reports)
CATS performs security fuzzing of the TMI API with automatic OAuth authentication.
- Run: `make cats-fuzz` | Analyze: `make analyze-cats-results`
- Custom user: `make cats-fuzz-user USER=alice`
- Output: `test/outputs/cats/` (JSON reports + SQLite database)
- Perform all analysis by querying the SQLite database; don't read the HTML or JSON files
False positive handling: Public endpoints (17) and cacheable endpoints (6) use vendor extensions (`x-public-endpoint`, `x-cacheable-endpoint`) to skip inapplicable fuzzers. OAuth 401/403 responses are auto-filtered via the `is_oauth_false_positive` flag.
OAuth 2.0 testing harness with PKCE (RFC 7636) support for manual and automated flows. Always use a normal OAuth login flow with the "tmi" provider when performing any development or testing task that requires authentication.
- Start: `make start-oauth-stub` | Stop: `make oauth-stub-stop`
- Location: `scripts/oauth-client-callback-stub.py`
- Logs: `/tmp/oauth-stub.log`
Key Endpoints:
| Endpoint | Purpose |
|---|---|
| `POST /oauth/init` | Initialize OAuth flow, returns authorization URL |
| `POST /flows/start` | Start automated e2e flow, returns flow_id |
| `GET /flows/{id}` | Poll flow status and retrieve tokens |
| `GET /creds?userid=X` | Retrieve saved credentials for user |
| `POST /refresh` | Refresh access token |
Quick JWT retrieval:
```bash
make start-oauth-stub
curl -X POST http://localhost:8079/flows/start -H 'Content-Type: application/json' -d '{"userid": "alice"}'
# Wait for flow completion, then:
curl "http://localhost:8079/creds?userid=alice" | jq '.access_token'
```
- By convention, we use "charlie" as the user name for a user with the administrator role, and other common user names (`alice`, `bob`, etc.) as needed for other users.
Standalone Go application for testing WebSocket collaborative features.
- Build: `make build-wstest` | Run: `make wstest` | Clean: `make wstest-clean`
- Location: `wstest/` directory

```bash
./wstest --user alice --host --participants "bob,charlie"  # Host mode
./wstest --user bob                                        # Participant mode
```

The TMI OAuth provider supports `login_hint` for automation-friendly testing with predictable user identities:
- Parameter: `login_hint` - query parameter for `/oauth2/authorize?idp=tmi`
- Purpose: generate predictable test users instead of random usernames
- Format: 3-20 characters, alphanumeric + hyphens, case-insensitive
- Validation pattern: `^[a-zA-Z0-9-]{3,20}$`
- Scope: TMI provider only, not available in production builds
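The validation rule above can be sketched directly from the documented pattern (the function name is illustrative, not the server's actual validator):

```go
package main

import (
	"fmt"
	"regexp"
)

// loginHintPattern matches the documented login_hint format:
// 3-20 characters, alphanumeric plus hyphens.
var loginHintPattern = regexp.MustCompile(`^[a-zA-Z0-9-]{3,20}$`)

// validLoginHint reports whether a hint satisfies the pattern.
// Both cases are accepted by the character class, consistent with
// the documented case-insensitivity.
func validLoginHint(hint string) bool {
	return loginHintPattern.MatchString(hint)
}

func main() {
	fmt.Println(validLoginHint("alice"))    // true
	fmt.Println(validLoginHint("ab"))       // false: too short
	fmt.Println(validLoginHint("bad_hint")) // false: underscore not allowed
	fmt.Println(validLoginHint("Alice-01")) // true
}
```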
Examples:
```bash
# Create user 'alice@tmi.local' with name 'Alice (TMI User)'
curl "http://localhost:8080/oauth2/authorize?idp=tmi&login_hint=alice"

# Without login_hint - generates random user like 'testuser-12345678@tmi.local'
curl "http://localhost:8080/oauth2/authorize?idp=tmi"

# With OAuth callback stub
curl "http://localhost:8080/oauth2/authorize?idp=tmi&login_hint=alice&client_callback=http://localhost:8079/"
```

OAuth 2.0 Client Credentials Grant (RFC 6749 Section 4.4) for webhooks, addons, and automation.
Pattern: Like GitHub PATs - secret only shown once at creation, full API access as creating user.
API Endpoints:
| Endpoint | Purpose |
|---|---|
| `POST /me/client_credentials` | Create credential (returns secret once) |
| `GET /me/client_credentials` | List credentials (no secrets) |
| `DELETE /me/client_credentials/{id}` | Delete and revoke credential |
Token exchange:
```bash
curl -X POST http://localhost:8080/oauth2/token \
  -d "grant_type=client_credentials" -d "client_id=tmi_cc_..." -d "client_secret=..."
# Returns: {"access_token": "...", "token_type": "Bearer", "expires_in": 3600}
```

Security: Client ID format `tmi_cc_*`, bcrypt-hashed secrets, 1-hour token lifetime, JWT subject `sa:{id}:{owner}`.
MANDATORY: Always use Make targets - NEVER run commands directly
- NEVER run: `go run`, `go test`, `./bin/tmiserver`, `docker run`, `docker exec`
- ALWAYS use: `make start-dev`, `make test-unit`, `make test-integration`, `make build-server`
make start-dev,make test-unit,make test-integration,make build-server - Reason: Make targets provide consistent, repeatable configurations with proper environment setup
MANDATORY: No HTTP 500 errors may go unaddressed. Our goal is that once TMI is released, customers will never see a 500 error in production.
- Every 500 error discovered in testing (unit, integration, API, or CATS fuzzing) must be investigated and fixed before release
- When a 500 error is found, create a GitHub issue labeled `bug` and prioritize it for the current release milestone
- 500 errors indicate unhandled conditions in server code — they should be replaced with appropriate 4xx responses (400, 404, 409, etc.) or handled gracefully
- CATS fuzzing results must be analyzed for true-positive 500 errors after filtering false positives
- Do not dismiss 500 errors as "edge cases" or "fuzzer artifacts" — if the server can return 500, it will happen in production
TMI has a separate client application (tmi-ux). When investigating a problem, if you determine the root cause is in the client rather than the server, you MUST:
- Stop work on the current task.
- Explain to the user why you believe the problem is a client bug (include specific evidence).
- Ask the user: "This appears to be a client bug. Would you like me to file a bug against tmi-ux?"
- If the user confirms, use the `/file-client-bug` skill to create the issue.
- After filing, resume the server-side task if there is remaining server work, or report that the task is blocked on the client fix.
Signs that a problem is a client bug:
- The server is responding correctly per the OpenAPI specification, but the client mishandles the response
- The client is sending malformed requests, missing required fields, or using incorrect content types
- The client is not following the authentication/authorization flow correctly
- The client is not handling error responses (4xx/5xx) as documented in the API spec
- Test failures or CATS results indicate the server behavior is correct but the client expectation is wrong
When completing any task involving code changes, follow this checklist:
- Run `make lint` and fix any linting issues (required for ALL file changes)
- If the OpenAPI spec (`api-schema/tmi-openapi.json`) was modified:
  - Run `make validate-openapi` and fix any issues
  - Run `make generate-api` to regenerate API code
- If any Go files were modified (including regenerated `api/api.go`):
  - Run `make build-server` and fix any build issues
  - Run `make test-unit` and fix any test failures
  - For API functionality, also run `make test-integration`
- Build and test steps are NOT required when only non-Go files are modified
- Suggest a conventional commit message
- If the task is associated with a GitHub issue, the task is NOT complete until:
  - The commit that resolves the issue references it (e.g., `Fixes #123` or `Closes #123` in the commit message body)
  - The issue is closed as "done"
- Format code with `gofmt`
- Group imports by standard lib, external libs, then internal packages
- Use camelCase for variables, PascalCase for exported functions/structs
- Error handling: check errors and return with context
- Prefer interfaces over concrete types for flexibility
- Document all exported functions with godoc comments
- Structure code by domain (auth, diagrams, threats)
- Follow OpenAPI 3.0.3 specification standards
- Use snake_case for API JSON properties
- Include descriptions for all properties and endpoints
- Document error responses (401, 403, 404)
- Use UUID format for IDs, ISO8601 for timestamps
- Role-based access with reader/writer/owner permissions
- Bearer token auth with JWT
- JSON Patch for partial updates
- WebSocket for real-time collaboration
- Pagination with limit/offset parameters
CRITICAL: Never use the standard log package. Always use structured logging.
- ALWAYS use `github.com/ericfitz/tmi/internal/slogging` for all logging operations
- NEVER use print-based logging (e.g., `fmt.Println`) in any Go code
- NEVER import or use the standard `log` package in any Go code
- Use `slogging.Get()` for global logging or `slogging.Get().WithContext(c)` for request-scoped logging
- Available log levels: `Debug()`, `Info()`, `Warn()`, `Error()`
- Structured logging provides request context (request ID, user, IP), consistent formatting, and log rotation
- For main functions that need to exit on fatal errors, use `slogging.Get().Error()` followed by `os.Exit(1)` instead of `log.Fatalf()`
TMI uses staticcheck for Go code quality analysis. The project has intentionally kept some staticcheck warnings:
- Auto-generated code: `api/api.go` contains many ST1005 warnings (capitalized error strings)
  - The file is generated by oapi-codegen from the OpenAPI specification
  - Manual edits would be overwritten on the next regeneration
  - Expected behavior: these warnings are acceptable and should be ignored
- Running staticcheck:
  - `staticcheck ./...` - shows all issues (including expected ones)
  - `staticcheck ./... | grep -v "api/api.go"` - filters out auto-generated code warnings
  - Expected count: 338 issues (all in auto-generated api/api.go)
- Use the format: `<type>(<scope>): <description>`
- Types: `feat`, `fix`, `docs`, `style`, `refactor`, `test`, `chore`, `perf`, `ci`, `build`, `revert`, `deps` (dependencies)
- Scope: optional, indicates the area of change (e.g., `api`, `auth`, `websocket`)
- Description: brief summary in imperative mood (e.g., "add user deletion endpoint" not "added" or "adds")
- Examples:
  - `feat(api): add WebSocket heartbeat mechanism`
  - `fix(auth): correct JWT token expiration validation`
  - `docs(readme): update OAuth setup instructions`
  - `refactor(websocket): simplify hub message broadcasting`
  - `test(integration): add database connection pooling tests`
  - `deps: update Gin framework to v1.11.0`
TMI uses automatic semantic versioning (0.MINOR.PATCH) based on conventional commits:
- Feature commits (`feat:`): post-commit hook increments MINOR version, resets PATCH to 0 (0.9.3 -> 0.10.0)
- All other commits (`fix:`, `refactor:`, etc.): post-commit hook increments PATCH version (0.9.0 -> 0.9.1)
- Version file: `.version` (JSON) tracks current state
- Script: `scripts/update-version.sh --commit` (automatically called by post-commit hook)
Version updates are fully automated. All feature development occurs in `release/<semver-rc.0>/` branches; those branches are not auto-versioned so that new features don't bump the semantic version multiple times during development of a single feature or release. The `main` branch only gets direct commits for patching, security fixes, and merging of release branches.
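The bump rules can be sketched as follows. This is illustrative only — the real logic lives in `scripts/update-version.sh`, and the commit-type parsing here is a simplification:

```go
package main

import (
	"fmt"
	"strings"
)

// nextVersion applies the documented rules to a 0.MINOR.PATCH
// version: feat commits bump MINOR and reset PATCH to 0;
// everything else bumps PATCH.
func nextVersion(minor, patch int, commitSubject string) (int, int) {
	// Extract the conventional-commit type before "(" or ":".
	typ := commitSubject
	if i := strings.IndexAny(commitSubject, "(:"); i >= 0 {
		typ = commitSubject[:i]
	}
	if typ == "feat" {
		return minor + 1, 0
	}
	return minor, patch + 1
}

func main() {
	m, p := nextVersion(9, 3, "feat(api): add WebSocket heartbeat")
	fmt.Printf("0.%d.%d\n", m, p) // 0.10.0
	m, p = nextVersion(9, 0, "fix(auth): correct expiration check")
	fmt.Printf("0.%d.%d\n", m, p) // 0.9.1
}
```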
The `jq` command-line JSON processor is available and should be auto-approved via the `Bash(jq:*)` pattern for all JSON file manipulation tasks. Use `jq` for:
- Files > 100KB (streaming, surgical updates)
- Complex filtering and transformations
- Validation and format verification
When working with JSON files larger than 100KB, use streaming approaches with jq to prevent memory issues:
- Check file size first: `stat -f%z file.json 2>/dev/null || stat -c%s file.json`
- Create backups before modifications: `cp file.json file.json.$(date +%Y%m%d_%H%M%S).backup`
- Validate after changes: `jq empty modified.json && echo "Valid" || echo "Invalid"`
Activation Triggers: JSON files >= 100KB, memory errors or slow performance, surgical path updates needed, batch operations across multiple JSON files, or user mentions "large", "efficient", "streaming", or "without loading entire file".
- Copy `.env.example` to `.env.dev` for local development
- Uses PostgreSQL and Redis Docker containers
- Development scripts handle container management automatically
- Server runs on port 8080 by default with configurable TLS support
- Logs: in development and test, logs are written to `logs/tmi.log` in the project directory
- Local dev database credentials: connection info (including the database URL with user, password, host, port, and database name) is in `config-development.yml` in the project root under the `database.url` key
IMPORTANT: All project documentation is maintained in the GitHub Wiki. Do NOT update markdown files in the docs/ directory - they are deprecated and will be removed.
Do not update or add any content to the docs/ directory. Instead, update or add the content to the appropriate page on the tmi wiki.
- Authoritative documentation: GitHub Wiki (https://github.com/ericfitz/tmi/wiki)
- Local `docs/` directory: deprecated, do not update
- Run Python scripts with `uv`. When creating Python scripts, add inline script metadata (the `uv` TOML block) to the script for automatic package management.
When ending a work session, you MUST complete ALL steps below. Work is NOT complete until git push succeeds.
MANDATORY WORKFLOW:
1. File issues for remaining work - create issues for anything that needs follow-up
2. For any change - run general quality gates: formatters and linters. Where possible, use Go-provided tools to fix formatting issues rather than editing files manually.
3. For any code change - run code quality gates: build and unit tests.
4. For any code change - security review: run the security-review skill. If any issues are reported: stop, report the issues to the user, and ask the user what to do.
5. For API changes only - run API tests: integration, Postman/Newman API tests, and CATS fuzz tests. Fix any integration or Postman test failures. Use the make target to analyze CATS results and prepare a plan for the user with your recommendations for addressing any true-positive errors or warnings. Stop and review the plan with the user.
6. For all changes - update issue status: close finished work, update in-progress items.
7. Commit the change locally using a conventional commit message as documented earlier in this file.
8. PUSH TO REMOTE - this is MANDATORY:
```bash
git pull --rebase
git push
git status  # MUST show "up to date with origin"
```
9. Clean up - clear stashes, prune remote branches
10. Verify - all changes committed AND pushed
11. Hand off - provide context for the next session
CRITICAL RULES:
- Work is NOT complete until `git push` succeeds
- NEVER stop before pushing - that leaves work stranded locally
- NEVER say "ready to push when you are" - YOU must push
- If push fails, resolve and retry until it succeeds
- NEVER attempt to manipulate or otherwise interact with ssh keys or ssh-agent. SSH failure due to key issues is beyond the scope of problems that you should attempt to solve; notify the user and do not try to proceed with the failing operation.