traceTTY

A lightweight, local-first tracing framework for LLM applications with an interactive terminal visualizer. Inspired by LangSmith but simplified for local development and debugging.

🎯 Overview

traceTTY captures execution traces (inputs, outputs, timing, errors) from your LLM applications and stores them as human-readable JSONL files. The included terminal-based visualizer lets you step through traces like a debugger, making it easy to understand complex agent behaviors and debug issues.

Key Features

  • 🚀 Non-blocking tracing: Minimal performance impact on your application
  • 🌳 Hierarchical traces: Automatically captures parent-child relationships in nested function calls
  • ⚡ Thread & async safe: Works correctly in multi-threaded and async contexts
  • 💾 Simple storage: Human-readable JSONL files, one per trace
  • 🎨 Interactive TUI: Step through traces event-by-event with a rich terminal interface
  • 📦 Minimal dependencies: Core framework only requires Pydantic
  • 🔌 Easy integration: Simple @traceable decorator to instrument your code

📦 Installation

Using uv (recommended)

uv add local-tracer

Using pip

pip install local-tracer

For development

git clone https://github.com/jwgwalton/traceTTY.git
cd traceTTY
uv sync

🚀 Quick Start

1. Basic Usage

from local_tracer import traceable, get_client, tracing_context

@traceable(name="call_openai", run_type="llm")
def call_openai(prompt: str) -> str:
    # Your LLM call here
    return "response from AI"

@traceable(name="process_query", run_type="chain")
def process_query(query: str) -> str:
    # Nested call - automatically linked as child
    response = call_openai(f"Process: {query}")
    return response

# Use it with a project context
with tracing_context(project_name="my_app"):
    result = process_query("Hello world")

# Ensure all writes complete before exit
get_client().flush()

This creates a trace file at traces/my_app/{trace_id}.jsonl.
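
Because traces are plain JSONL (one JSON object per line), you can also peek at a file with nothing but the standard library. A minimal sketch, assuming a placeholder path; the exact event fields depend on the trace schema:

```python
import json
from pathlib import Path

def read_events(path):
    """Yield one decoded event per non-empty line of a JSONL trace file."""
    with open(path) as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

# Placeholder path; point this at a file produced by your own run
trace_file = Path("traces/my_app/example.jsonl")
if trace_file.exists():
    for event in read_events(trace_file):
        print(event)
```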

2. View Traces with the Visualizer

After running your traced code, launch the interactive visualizer:

# Open trace browser
uv run trace-viewer

# Or open a specific trace file
uv run trace-viewer traces/my_app/abc123.jsonl

# Or specify a custom traces directory
uv run trace-viewer -d ./my_traces

Visualizer Key Bindings

Key       Action
← / ,     Step backward through events
→ / .     Step forward through events
Space     Toggle auto-play (replay trace automatically)
Home      Jump to start of trace
End       Jump to end of trace
Enter     Select trace or expand tree nodes
Tab       Cycle focus between panels
b         Back to trace browser
r         Refresh trace list
?         Show help
q         Quit

📚 Documentation

Trace Types

The framework supports different run types to categorize operations:

  • llm - Language model calls
  • chain - Sequential operations or workflows
  • tool - Tool/function executions
  • retriever - Document/data retrieval operations
  • embedding - Embedding generation
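
For example, a small retrieval-augmented pipeline might tag each stage with its matching run type. The sketch below is illustrative: `search_docs`, `answer`, and `rag_pipeline` are stand-ins rather than library functions, and the no-op fallback exists only so the snippet runs even without local_tracer installed.

```python
try:
    from local_tracer import traceable, tracing_context
except ImportError:
    # No-op fallback so this sketch runs standalone; the real
    # decorators come from local_tracer as shown in Quick Start.
    from contextlib import contextmanager
    def traceable(**kwargs):
        return lambda fn: fn
    @contextmanager
    def tracing_context(**kwargs):
        yield

@traceable(name="search_docs", run_type="retriever")
def search_docs(query: str) -> list[str]:
    return [f"doc about {query}"]  # stand-in for a real vector search

@traceable(name="answer", run_type="llm")
def answer(query: str, docs: list[str]) -> str:
    return f"answer to {query!r} using {len(docs)} doc(s)"  # stand-in LLM call

@traceable(name="rag_pipeline", run_type="chain")
def rag_pipeline(query: str) -> str:
    # Nested calls are linked as children of the chain run
    return answer(query, search_docs(query))

with tracing_context(project_name="rag_demo"):
    print(rag_pipeline("tracing"))
```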

Decorator Options

@traceable(
    name="my_function",           # Human-readable name (defaults to the function name)
    run_type="chain",             # Type of operation (see above)
    tags=["production", "v2"],    # Optional tags for filtering
    metadata={"version": "1.0"},  # Optional metadata dictionary
)
def my_function(arg1, arg2):
    result = f"{arg1}-{arg2}"
    return result

Context Management

Control tracing behavior with context managers:

from local_tracer import tracing_context

# Set project name for organizing traces
with tracing_context(project_name="my_project"):
    process_query("test")

# Temporarily disable tracing (useful for hot paths)
with tracing_context(enabled=False):
    # This won't be traced
    expensive_operation()

# Combine options
with tracing_context(project_name="debug", enabled=True):
    debug_run()

Manual Tracing with RunTree

For more control, use RunTree directly:

from local_tracer import RunTree

with RunTree(name="manual_trace", run_type="chain", inputs={"query": "test"}):
    # Your code here
    result = do_something()
    # Outputs are automatically captured

# Or with explicit output setting
run = RunTree(name="custom", run_type="tool", inputs={"x": 1})
run.post()  # Send creation event
try:
    result = complex_operation()
    run.end(outputs={"result": result})
except Exception as e:
    run.end(error=str(e))

Reading Traces Programmatically

from local_tracer import load_trace, reconstruct_trace_tree, print_trace_tree

# Load raw events from a trace file
records = load_trace("traces/my_app/abc123.jsonl")

# Reconstruct hierarchical structure
root_run = reconstruct_trace_tree(records)

# Pretty-print the trace tree
print_trace_tree(root_run)
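
If you want custom reporting instead of the built-in pretty-printer, the reconstructed tree can be walked with plain recursion. The sketch below assumes each run node exposes `name` and `children` attributes (hypothetical; check the actual RunTree schema) and uses a tiny stand-in class so it runs on its own:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Stand-in for a reconstructed run; the real RunTree fields may differ."""
    name: str
    children: list = field(default_factory=list)

def walk(run, depth=0):
    """Yield (depth, name) pairs in depth-first order."""
    yield depth, run.name
    for child in run.children:
        yield from walk(child, depth + 1)

root = Node("process_query", [Node("call_openai")])
for depth, name in walk(root):
    print("  " * depth + name)
```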

πŸ—οΈ Architecture

How It Works

  1. Context Layer: Thread-safe and async-safe state management using Python's contextvars
  2. RunTree: Core data structure representing traced operations with automatic parent-child linking
  3. TracingClient: Non-blocking writes via background thread and queue
  4. JSONL Storage: One file per trace with append-only operations
  5. Visualizer: Event-based stepping through trace history

┌──────────────────────────────────────────────────────────────────┐
│                      User Application                            │
│   @traceable          RunTree              tracing_context       │
└────────┼──────────────────┼───────────────────────┼──────────────┘
         │                  │                       │
         ▼                  ▼                       ▼
┌──────────────────────────────────────────────────────────────────┐
│                      Context Layer (ContextVars)                 │
└───────────────────────────────┬──────────────────────────────────┘
                                ▼
┌──────────────────────────────────────────────────────────────────┐
│              TracingClient (Background Thread + Queue)           │
└───────────────────────────────┬──────────────────────────────────┘
                                ▼
┌──────────────────────────────────────────────────────────────────┐
│              File System (JSONL in traces/ directory)            │
└──────────────────────────────────────────────────────────────────┘
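
The non-blocking writer in step 3 boils down to a queue drained by a single worker thread. A minimal stdlib sketch of the pattern (the class name and methods here are illustrative, not the actual TracingClient API):

```python
import json
import queue
import threading
from pathlib import Path

class BackgroundWriter:
    """Sketch of the non-blocking writer pattern: callers enqueue events
    and return immediately; one worker thread appends them as JSONL."""

    def __init__(self, path):
        self.path = Path(path)
        self.queue = queue.Queue()
        self.worker = threading.Thread(target=self._drain, daemon=True)
        self.worker.start()

    def _drain(self):
        while True:
            event = self.queue.get()
            if event is None:  # sentinel enqueued by flush()
                break
            with open(self.path, "a") as f:  # append-only, one event per line
                f.write(json.dumps(event) + "\n")

    def post(self, event):
        self.queue.put(event)  # never blocks the caller on disk I/O

    def flush(self):
        # One-shot shutdown for simplicity; waits for all queued writes
        self.queue.put(None)
        self.worker.join()
```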

For detailed architecture documentation, see the docs/ directory.

📖 Examples

The examples/ directory contains complete working examples:

Basic LLM Call

Shows simple tracing with nested function calls:

# With simulated LLM (no API key needed)
uv run python examples/basic_llm_call.py

# With real OpenAI API
OPENAI_API_KEY=sk-... uv run python examples/basic_llm_call.py --use-openai

LangGraph Agent

Demonstrates tracing a ReAct-style agent with tools and conditional routing:

uv run python examples/langgraph_agent.py

See examples/README.md for more details.

🧪 Testing

Run the test suite:

# Run all tests
uv run pytest

# Run with coverage
uv run pytest --cov=local_tracer --cov-report=html

# Run specific test file
uv run pytest tests/test_decorator.py

🛠️ Development

Setup Development Environment

# Clone the repository
git clone https://github.com/jwgwalton/traceTTY.git
cd traceTTY

# Install dependencies with dev extras
uv sync

# Run tests
uv run pytest

Project Structure

traceTTY/
├── local_tracer/          # Core framework
│   ├── __init__.py        # Public API exports
│   ├── client.py          # Background writer and queue
│   ├── context.py         # ContextVar management
│   ├── decorator.py       # @traceable implementation
│   ├── reader.py          # Trace loading utilities
│   ├── run_tree.py        # Core RunTree class
│   ├── schemas.py         # Pydantic models
│   ├── utils.py           # Helper functions
│   └── visualizer/        # TUI application
│       ├── app.py         # Main Textual app
│       ├── models.py      # State management
│       ├── screens/       # UI screens
│       └── widgets/       # UI components
├── examples/              # Example scripts
├── tests/                 # Test suite
├── docs/                  # Documentation
└── pyproject.toml         # Project configuration

🤝 Contributing

Contributions are welcome! Please feel free to submit issues or pull requests.

Guidelines

  1. Follow existing code style
  2. Add tests for new features
  3. Update documentation as needed
  4. Run tests before submitting PRs

📄 License

This project is open source. Check the repository for license details.

πŸ™ Acknowledgments

  • Inspired by LangSmith - LangChain's tracing platform
  • Built with Textual - Modern Python TUI framework
  • Uses Pydantic for data validation

📬 Contact

For questions or feedback, please open an issue on GitHub.


Made with ❤️ for better LLM debugging
