When you use TraceRoot’s auto-instrumentation for OpenAI, Anthropic, or LangChain, token usage and cost are tracked automatically — no extra code needed.
import traceroot
from traceroot import Integration
from openai import OpenAI

traceroot.initialize(integrations=[Integration.OPENAI])
client = OpenAI()

# All OpenAI calls are now automatically tracked — tokens, cost, model
client.chat.completions.create(...)

How It Works

  1. Auto-instrumentation — TraceRoot intercepts LLM calls from OpenAI, Anthropic, and LangChain and captures token usage from the API response automatically.
  2. Fallback estimation — When token counts aren’t in the response, TraceRoot estimates using tiktoken.
  3. Cost calculation — Token counts are multiplied by the model’s pricing from TraceRoot’s pricing table.
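Step 3 is simple arithmetic: token counts times per-token rates. A minimal sketch of that calculation, using a hypothetical pricing table (the rates and model names here are illustrative, not TraceRoot's actual pricing data):

```python
# Hypothetical per-1M-token pricing, in USD (illustrative values only)
PRICING_PER_1M = {
    "gpt-4o": {"input": 2.50, "output": 10.00},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Multiply token counts by the model's per-token rates (step 3 above)."""
    rates = PRICING_PER_1M[model]
    return (input_tokens * rates["input"] + output_tokens * rates["output"]) / 1_000_000

# e.g. 1,000 input tokens + 500 output tokens on gpt-4o
cost = estimate_cost("gpt-4o", 1_000, 500)
```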

What Gets Tracked

Each LLM span captures:
  • Input tokens — tokens in the prompt/messages
  • Output tokens — tokens in the completion/response
  • Total tokens — sum of input and output
  • Cost (USD) — calculated from token usage and model pricing
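For illustration, one tracked LLM span might carry a payload like the following (field names and values are hypothetical, shown only to make the four metrics concrete):

```python
# Hypothetical attributes captured on a single LLM span (illustrative values)
span_usage = {
    "model": "gpt-4o",
    "input_tokens": 1_000,    # tokens in the prompt/messages
    "output_tokens": 500,     # tokens in the completion/response
    "total_tokens": 1_500,    # input + output
    "cost_usd": 0.0075,       # token counts × model pricing
}

assert span_usage["total_tokens"] == (
    span_usage["input_tokens"] + span_usage["output_tokens"]
)
```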

Viewing Costs

In the dashboard, costs are visible at both the span and trace level:
  • Per span — see the cost of each individual LLM call
  • Per trace — see the total cost aggregated across all LLM calls in the trace
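The trace-level figure is just the sum over that trace's LLM spans. A sketch of the aggregation, with hypothetical span records:

```python
# Hypothetical per-span costs within one trace (illustrative values)
spans = [
    {"name": "plan",   "cost_usd": 0.0031},
    {"name": "answer", "cost_usd": 0.0124},
]

# Trace-level cost: total across all LLM spans in the trace
trace_cost = sum(span["cost_usd"] for span in spans)  # ≈ 0.0155
```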

Manual Token Reporting

For custom or unsupported providers, report token usage manually:
from traceroot import observe, update_current_span

@observe(name="custom_llm", type="llm")
def call_custom_llm(prompt: str):
    response = my_llm.generate(prompt)

    # Report the model name and token counts on the current LLM span
    update_current_span(
        model="my-custom-model",
        usage={
            "input_tokens": response.input_token_count,
            "output_tokens": response.output_token_count,
        },
    )

    return response.text