Once you have initialized TraceRoot, LLM calls are traced automatically. To go further — tracing custom functions, identifying users, grouping sessions, or tracking cost — you can use @observe and attach rich context to every trace.

Quickstart

Here’s everything you can attach in one place:
```python
import traceroot
from traceroot import Integration, observe, using_attributes
from openai import OpenAI

traceroot.initialize(integrations=[Integration.OPENAI])

client = OpenAI()

@observe(name="run_agent", type="agent", metadata={"version": "v2"})
def run_agent(query: str) -> str:
    response = client.chat.completions.create(...)  # tokens + cost tracked automatically
    return response.choices[0].message.content

# Use a context manager to propagate user/session context across multiple calls
with using_attributes(user_id="user-42", session_id="chat-abc", tags=["production"]):
    run_agent("What's the weather?")
    run_agent("What about tomorrow?")
```
Quick Reference

| I want to… | Attribute | Guide |
| --- | --- | --- |
| Identify which user triggered a trace | `user_id` | Users |
| Group traces from the same conversation | `session_id` | Sessions |
| Label traces for filtering | `tags` | Tags |
| Attach structured key-value context | `metadata` | Metadata |
| Track token usage and LLM cost | auto via instrumentation | Cost Tracking |
| Link traces back to source code + commit | auto via `@observe` | Git Context |
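Conceptually, `@observe` wraps a function and records a span carrying the `name`, `type`, and `metadata` you pass. The following is a hypothetical minimal sketch of that decorator pattern, using an in-memory `SPANS` list as a stand-in for a real exporter; TraceRoot's actual decorator does more (timing, errors, git context).

```python
import functools

SPANS = []  # stand-in for a real span exporter (illustrative only)

def observe(name=None, type="function", metadata=None):
    """Sketch of an @observe-style decorator: records one span per call."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            span = {
                "name": name or fn.__name__,
                "type": type,
                "metadata": metadata or {},
            }
            SPANS.append(span)            # open a span for this call
            try:
                return fn(*args, **kwargs)
            finally:
                span["status"] = "ended"  # close it even if fn raises
        return wrapper
    return decorator

@observe(name="run_agent", type="agent", metadata={"version": "v2"})
def run_agent(query: str) -> str:
    return f"answered: {query}"

run_agent("hi")
print(SPANS[0]["name"])  # → run_agent
```

The decorator factory shape (`observe(...)` returns a decorator, which returns a wrapper) is what lets you pass per-function arguments like `name` and `metadata` at decoration time while still tracing every call.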