When you use TraceRoot’s auto-instrumentation for OpenAI, Anthropic, or LangChain, token usage and cost are tracked automatically — no extra code needed.
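In practice, "no extra code" usually means a one-time SDK initialization before your normal client calls. The sketch below is an assumption: the `traceroot.init()` entry point is a guess at the SDK surface and may differ in the real library, so treat it as the shape of the setup, not its exact spelling.

```python
# Hedged sketch: assumes a `traceroot.init()` entry point, which may
# differ in the real SDK -- check the TraceRoot setup docs.
import traceroot
from openai import OpenAI

traceroot.init()  # enable auto-instrumentation once, at startup

client = OpenAI()
# This call is intercepted; token usage and cost land on the span
# automatically, with no per-call code.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}],
)
```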
How It Works
- Auto-instrumentation — TraceRoot intercepts LLM calls from OpenAI, Anthropic, and LangChain and captures token usage from the API response automatically.
- Fallback estimation — When token counts aren’t in the response, TraceRoot estimates using tiktoken.
- Cost calculation — Token counts are multiplied by the model’s pricing from TraceRoot’s pricing table.
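As a concrete illustration of the cost step, the sketch below multiplies token counts by per-million-token prices. The model name and prices are made-up placeholders, not TraceRoot's actual pricing table.

```python
# Illustrative cost calculation: tokens x per-million-token price.
# The model name and prices are placeholders, NOT TraceRoot's real
# pricing table.
PRICING_PER_MTOK = {
    # model: (input USD per 1M tokens, output USD per 1M tokens)
    "example-model": (2.50, 10.00),
}

def llm_cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the cost of one LLM call in USD."""
    price_in, price_out = PRICING_PER_MTOK[model]
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000

# e.g. a call with 1,200 prompt tokens and 350 completion tokens
cost = llm_cost_usd("example-model", 1200, 350)  # 0.0065 USD
```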
What Gets Tracked
Each LLM span captures:

| Metric | Description |
|---|---|
| Input tokens | Tokens in the prompt/messages |
| Output tokens | Tokens in the completion/response |
| Total tokens | Sum of input and output |
| Cost (USD) | Calculated from token usage and model pricing |
Viewing Costs
In the dashboard, costs are visible at both the span and trace level:

- Per span — see the cost of each individual LLM call
- Per trace — see the total cost aggregated across all LLM calls in the trace
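The trace-level number is simply the sum of its spans' costs. A minimal illustration with made-up span data:

```python
# Trace-level cost = sum of the costs of its LLM spans.
# Span names and costs here are invented for illustration.
spans = [
    {"name": "openai.chat", "cost_usd": 0.0065},
    {"name": "anthropic.messages", "cost_usd": 0.0142},
]

trace_cost = sum(s["cost_usd"] for s in spans)  # 0.0207 USD
```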
Manual Token Reporting
For custom or unsupported providers, report token usage manually.
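TraceRoot's exact manual-reporting call isn't shown in this section, so the snippet below only illustrates the shape of the data you would attach to a span. The `record_llm_usage` helper, the span-as-dict stand-in, and the attribute names are all hypothetical; consult the SDK reference for the real API.

```python
# Hypothetical illustration of manual token reporting.
# `record_llm_usage`, the dict-based span, and the attribute names are
# made up for this sketch -- the real TraceRoot API will differ.
def record_llm_usage(span: dict, *, model: str,
                     input_tokens: int, output_tokens: int) -> dict:
    """Attach usage metrics to a span-like dict and derive the total."""
    span["llm.model"] = model
    span["llm.input_tokens"] = input_tokens
    span["llm.output_tokens"] = output_tokens
    span["llm.total_tokens"] = input_tokens + output_tokens
    return span

span = record_llm_usage({}, model="my-custom-model",
                        input_tokens=900, output_tokens=120)
```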