# traceTTY

A lightweight, local-first tracing framework for LLM applications with an interactive terminal visualizer. It is inspired by LangSmith but simplified for local development and debugging.
traceTTY captures execution traces (inputs, outputs, timing, errors) from your LLM applications and stores them as human-readable JSONL files. The included terminal-based visualizer lets you step through traces like a debugger, making it easy to understand complex agent behaviors and debug issues.
## Features

- 🚀 Non-blocking tracing: Minimal performance impact on your application
- 🌳 Hierarchical traces: Automatically captures parent-child relationships in nested function calls
- ⚡ Thread & async safe: Works correctly in multi-threaded and async contexts
- 💾 Simple storage: Human-readable JSONL files, one per trace
- 🎨 Interactive TUI: Step through traces event-by-event with a rich terminal interface
- 📦 Minimal dependencies: Core framework only requires Pydantic
- 🔌 Easy integration: Simple `@traceable` decorator to instrument your code
## Installation

```bash
uv add local-tracer
```

or

```bash
pip install local-tracer
```

To install from source:

```bash
git clone https://github.com/jwgwalton/traceTTY.git
cd traceTTY
uv sync
```

## Quick Start

```python
from local_tracer import traceable, get_client, tracing_context

@traceable(name="call_openai", run_type="llm")
def call_openai(prompt: str) -> str:
    # Your LLM call here
    return "response from AI"

@traceable(name="process_query", run_type="chain")
def process_query(query: str) -> str:
    # Nested call - automatically linked as child
    response = call_openai(f"Process: {query}")
    return response

# Use it with a project context
with tracing_context(project_name="my_app"):
    result = process_query("Hello world")

# Ensure all writes complete before exit
get_client().flush()
```

This creates a trace file at `traces/my_app/{trace_id}.jsonl`.
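For a sense of what's inside, a pair of events in that file might look like the following. The exact field names here are illustrative assumptions, not the framework's confirmed schema; the README only guarantees that inputs, outputs, timing, and errors are captured:

```json
{"event": "create", "id": "run-001", "trace_id": "abc123", "parent_run_id": null, "name": "process_query", "run_type": "chain", "inputs": {"query": "Hello world"}, "start_time": "2024-01-01T12:00:00Z"}
{"event": "end", "id": "run-001", "outputs": {"output": "response from AI"}, "end_time": "2024-01-01T12:00:01Z", "error": null}
```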
## Visualizer

After running your traced code, launch the interactive visualizer:
```bash
# Open trace browser
uv run trace-viewer

# Or open a specific trace file
uv run trace-viewer traces/my_app/abc123.jsonl

# Or specify a custom traces directory
uv run trace-viewer -d ./my_traces
```

### Keyboard Shortcuts

| Key | Action |
|---|---|
| `←` / `,` | Step backward through events |
| `→` / `.` | Step forward through events |
| `Space` | Toggle auto-play (replay trace automatically) |
| `Home` | Jump to start of trace |
| `End` | Jump to end of trace |
| `Enter` | Select trace or expand tree nodes |
| `Tab` | Cycle focus between panels |
| `b` | Back to trace browser |
| `r` | Refresh trace list |
| `?` | Show help |
| `q` | Quit |
## Run Types

The framework supports different run types to categorize operations:
- `llm` - Language model calls
- `chain` - Sequential operations or workflows
- `tool` - Tool/function executions
- `retriever` - Document/data retrieval operations
- `embedding` - Embedding generation
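As a quick illustration, the run types not shown in the Quick Start might be applied like this (the function names and bodies are placeholder stand-ins, not part of the framework):

```python
from local_tracer import traceable

@traceable(name="search_docs", run_type="retriever")
def search_docs(query: str) -> list[str]:
    # Stand-in for a real document or vector-store lookup
    return ["doc about " + query]

@traceable(name="calculator", run_type="tool")
def calculator(expression: str) -> float:
    # Stand-in for a real tool execution
    return 4.0

@traceable(name="embed_text", run_type="embedding")
def embed_text(text: str) -> list[float]:
    # Stand-in for a real embedding call
    return [0.1, 0.2, 0.3]
```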
## Decorator Options

```python
@traceable(
    name="my_function",           # Human-readable name (defaults to function name)
    run_type="chain",             # Type of operation (see above)
    tags=["production", "v2"],    # Optional tags for filtering
    metadata={"version": "1.0"},  # Optional metadata dictionary
)
def my_function(arg1, arg2):
    return result
```

## Context Managers

Control tracing behavior with context managers:
```python
from local_tracer import tracing_context

# Set project name for organizing traces
with tracing_context(project_name="my_project"):
    process_query("test")

# Temporarily disable tracing (useful for hot paths)
with tracing_context(enabled=False):
    # This won't be traced
    expensive_operation()

# Combine options
with tracing_context(project_name="debug", enabled=True):
    debug_run()
```
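Since the framework advertises async safety, the same context manager should compose with coroutines. A minimal sketch, assuming `@traceable` also wraps `async def` functions (implied by the async-safety claim, not spelled out in this README):

```python
import asyncio

from local_tracer import traceable, tracing_context

# Assumption: @traceable supports coroutine functions.
@traceable(name="fetch_answer", run_type="llm")
async def fetch_answer(prompt: str) -> str:
    await asyncio.sleep(0.1)  # stand-in for an async LLM call
    return "async response"

async def main() -> None:
    with tracing_context(project_name="async_demo"):
        await fetch_answer("Hello")

asyncio.run(main())
```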
For more control, use `RunTree` directly:

```python
from local_tracer import RunTree

with RunTree(name="manual_trace", run_type="chain", inputs={"query": "test"}):
    # Your code here
    result = do_something()
    # Outputs are automatically captured

# Or with explicit output setting
run = RunTree(name="custom", run_type="tool", inputs={"x": 1})
run.post()  # Send creation event
try:
    result = complex_operation()
    run.end(outputs={"result": result})
except Exception as e:
    run.end(error=str(e))
```

## Reading Traces Programmatically

```python
from local_tracer import load_trace, reconstruct_trace_tree, print_trace_tree
# Load raw events from a trace file
records = load_trace("traces/my_app/abc123.jsonl")
# Reconstruct hierarchical structure
root_run = reconstruct_trace_tree(records)
# Pretty-print the trace tree
print_trace_tree(root_run)
```
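Once reconstructed, the tree can also be analyzed programmatically. Here is a minimal sketch that walks it to report per-run durations; the attribute names (`name`, `child_runs`, `start_time`, `end_time`) are assumptions about the run schema rather than confirmed API:

```python
def print_durations(run, indent: int = 0) -> None:
    # Assumed attributes: name, start_time/end_time (datetimes), child_runs (list).
    seconds = (run.end_time - run.start_time).total_seconds()
    print(f"{'  ' * indent}{run.name}: {seconds:.3f}s")
    for child in run.child_runs:
        print_durations(child, indent + 1)

print_durations(root_run)
```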
## Architecture

- Context Layer: Thread-safe and async-safe state management using Python's `contextvars`
- RunTree: Core data structure representing traced operations with automatic parent-child linking
- TracingClient: Non-blocking writes via background thread and queue
- JSONL Storage: One file per trace with append-only operations
- Visualizer: Event-based stepping through trace history
```
┌──────────────────────────────────────────────────────────────────┐
│                         User Application                         │
│     @traceable            RunTree             tracing_context    │
└─────────┼────────────────────┼───────────────────────┼───────────┘
          │                    │                       │
          ▼                    ▼                       ▼
┌──────────────────────────────────────────────────────────────────┐
│                   Context Layer (ContextVars)                    │
└────────────────────────────────┬─────────────────────────────────┘
                                 ▼
┌──────────────────────────────────────────────────────────────────┐
│            TracingClient (Background Thread + Queue)             │
└────────────────────────────────┬─────────────────────────────────┘
                                 ▼
┌──────────────────────────────────────────────────────────────────┐
│             File System (JSONL in traces/ directory)             │
└──────────────────────────────────────────────────────────────────┘
```
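The non-blocking write path is a classic producer-consumer pattern: callers enqueue events and return immediately, while a background thread drains the queue to disk. The following is an illustrative sketch of that pattern, not the actual `TracingClient` implementation:

```python
import json
import queue
import threading

class BackgroundWriter:
    def __init__(self, path: str) -> None:
        self._path = path
        self._queue: queue.Queue = queue.Queue()
        # Daemon thread so the writer never blocks interpreter shutdown
        threading.Thread(target=self._drain, daemon=True).start()

    def write(self, event: dict) -> None:
        # Callers never wait on disk I/O; they only enqueue.
        self._queue.put(event)

    def _drain(self) -> None:
        # Background thread: append each event to the file as one JSON line.
        while True:
            event = self._queue.get()
            with open(self._path, "a") as f:
                f.write(json.dumps(event) + "\n")
            self._queue.task_done()

    def flush(self) -> None:
        # Block until every event queued so far has been written.
        self._queue.join()
```

This pattern is also why the Quick Start calls `get_client().flush()` before exit: the background thread may not have drained the queue yet.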
For detailed architecture documentation, see:
- Architecture Overview - Core framework design
- Visualizer Design - TUI implementation details
## Examples

The `examples/` directory contains complete working examples:
### Basic LLM Call

`examples/basic_llm_call.py` shows simple tracing with nested function calls:

```bash
# With simulated LLM (no API key needed)
uv run python examples/basic_llm_call.py

# With real OpenAI API
OPENAI_API_KEY=sk-... uv run python examples/basic_llm_call.py --use-openai
```

### LangGraph Agent

`examples/langgraph_agent.py` demonstrates tracing a ReAct-style agent with tools and conditional routing:

```bash
uv run python examples/langgraph_agent.py
```

See `examples/README.md` for more details.
## Testing

Run the test suite:
```bash
# Run all tests
uv run pytest

# Run with coverage
uv run pytest --cov=local_tracer --cov-report=html

# Run specific test file
uv run pytest tests/test_decorator.py
```

## Development

```bash
# Clone the repository
git clone https://github.com/jwgwalton/traceTTY.git
cd traceTTY
# Install dependencies with dev extras
uv sync
# Run tests
uv run pytest
```

## Project Structure

```
traceTTY/
├── local_tracer/           # Core framework
│   ├── __init__.py         # Public API exports
│   ├── client.py           # Background writer and queue
│   ├── context.py          # ContextVar management
│   ├── decorator.py        # @traceable implementation
│   ├── reader.py           # Trace loading utilities
│   ├── run_tree.py         # Core RunTree class
│   ├── schemas.py          # Pydantic models
│   ├── utils.py            # Helper functions
│   └── visualizer/         # TUI application
│       ├── app.py          # Main Textual app
│       ├── models.py       # State management
│       ├── screens/        # UI screens
│       └── widgets/        # UI components
├── examples/               # Example scripts
├── tests/                  # Test suite
├── docs/                   # Documentation
└── pyproject.toml          # Project configuration
```
## Contributing

Contributions are welcome! Please feel free to submit issues or pull requests.
- Follow existing code style
- Add tests for new features
- Update documentation as needed
- Run tests before submitting PRs
## License

This project is open source. Check the repository for license details.

## Acknowledgments
- Inspired by LangSmith - LangChain's tracing platform
- Built with Textual - Modern Python TUI framework
- Uses Pydantic for data validation
For questions or feedback, please open an issue on GitHub.
Made with ❤️ for better LLM debugging