Lightweight LLM tracing SDK for Python with remote tool invocation.
## Installation

```bash
pip install lightrace
```

## Quickstart

```python
from lightrace import Lightrace, trace

lt = Lightrace(
    public_key="pk-lt-demo",
    secret_key="sk-lt-demo",
    host="http://localhost:3002",
)

# Root trace
@trace()
def run_agent(query: str):
    return search(query)

# Span
@trace(type="span")
def search(query: str) -> list:
    return ["result1", "result2"]

# Generation (LLM call)
@trace(type="generation", model="gpt-4o")
def generate(prompt: str) -> str:
    return "LLM response"

# Tool: remotely invocable from the Lightrace UI
@trace(type="tool")
def weather_lookup(city: str) -> dict:
    return {"temp": 72, "unit": "F"}

# Tool: traced but NOT remotely invocable
@trace(type="tool", invoke=False)
def read_file(path: str) -> str:
    with open(path) as f:  # context manager closes the file handle
        return f.read()

run_agent("hello")
lt.flush()
lt.shutdown()
```

## Decorator reference

```python
@trace()                                   # Root trace
@trace(type="span")                        # Span observation
@trace(type="generation", model="gpt-4o")  # LLM generation
@trace(type="tool")                        # Tool (remotely invocable)
@trace(type="tool", invoke=False)          # Tool (trace only)
```

| Parameter | Type | Default | Description |
|---|---|---|---|
| `type` | `str` | `None` | One of `"span"`, `"generation"`, `"tool"`, `"chain"`, `"event"` |
| `name` | `str` | `None` | Override name (defaults to the function name) |
| `invoke` | `bool` | `True` | For `type="tool"`: register the tool for remote invocation |
| `model` | `str` | `None` | For `type="generation"`: LLM model name |
| `metadata` | `dict` | `None` | Static metadata attached to every call |
## Compatibility

The Lightrace server also accepts traces from the Langfuse Python and JS SDKs.
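Because the server accepts Langfuse traffic, an existing Langfuse Python client can, for example, be pointed at a Lightrace host instead of Langfuse Cloud. A minimal sketch, reusing the demo keys and host from the quickstart; `langfuse` must be installed separately, so the import is guarded to keep the snippet runnable without it:

```python
# Hypothetical setup: reuse the quickstart's demo credentials for a
# Langfuse client, swapping the host for a local Lightrace server.
langfuse_kwargs = {
    "public_key": "pk-lt-demo",
    "secret_key": "sk-lt-demo",
    "host": "http://localhost:3002",  # Lightrace server, not cloud.langfuse.com
}

try:
    from langfuse import Langfuse  # requires `pip install langfuse`
    client = Langfuse(**langfuse_kwargs)
except ImportError:
    client = None  # SDK not installed; config above is still valid
```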
## Development

```bash
uv sync --extra dev
uv run pre-commit install
uv run pytest -s -v tests/
uv run ruff check .
uv run mypy src/lightrace
```

## License

MIT