Documentation Index
Fetch the complete documentation index at: https://traceroot.ai/docs/llms.txt
Use this file to discover all available pages before exploring further.
Automatically capture agent steps, tool calls, and LLM invocations within LangChain chains and LangGraph graphs.
Setup
Python:

```python
import traceroot
from traceroot import Integration

traceroot.initialize(integrations=[Integration.LANGCHAIN])
```

TypeScript:

```typescript
import * as lcCallbackManager from '@langchain/core/callbacks/manager';
import { TraceRoot } from '@traceroot-ai/traceroot';

TraceRoot.initialize({
  instrumentModules: { langchain: lcCallbackManager },
});
```
Usage with LangGraph
Python:

```python
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

llm = ChatOpenAI(model="gpt-4o")

# search_tool and calculator_tool are tools you define yourself
tools = [search_tool, calculator_tool]
agent = create_react_agent(llm, tools=tools)

# All agent steps are automatically traced
result = agent.invoke({
    "messages": [{"role": "user", "content": "What is 2 + 2?"}]
})
```
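The snippet above assumes `search_tool` and `calculator_tool` already exist; they are placeholders, not part of any library. As a hypothetical sketch, plain Python functions with type hints and docstrings are enough to serve as tools (LangChain can wrap such functions, e.g. via the `@tool` decorator from `langchain_core.tools`):

```python
# Hypothetical tool implementations for the agent example above.
# Plain functions with docstrings; LangChain can wrap these into tools.

def calculator_tool(expression: str) -> str:
    """Evaluate a basic arithmetic expression, e.g. '2 + 2'."""
    # Sketch only: real code should parse/validate instead of using eval.
    return str(eval(expression, {"__builtins__": {}}, {}))

def search_tool(query: str) -> str:
    """Look up a query. A real implementation would call a search API."""
    return f"No results found for: {query}"

tools = [search_tool, calculator_tool]
```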
TypeScript:

```typescript
import { ChatOpenAI } from '@langchain/openai';
import { HumanMessage } from '@langchain/core/messages';
import { END, START, StateGraph, Annotation } from '@langchain/langgraph';

const llm = new ChatOpenAI({ model: 'gpt-4o', temperature: 0 });

const AgentState = Annotation.Root({
  query: Annotation<string>({ reducer: (_, v) => v, default: () => '' }),
  answer: Annotation<string>({ reducer: (_, v) => v, default: () => '' }),
});

// Build the graph; all nodes are automatically traced
const app = new StateGraph(AgentState)
  .addNode('answer', async (state) => {
    const response = await llm.invoke([new HumanMessage(state.query)]);
    return { answer: response.content as string };
  })
  .addEdge(START, 'answer')
  .addEdge('answer', END)
  .compile();

const result = await app.invoke({ query: 'What is 2 + 2?' });
console.log(result.answer);
```
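Each `Annotation` channel above carries a reducer that decides how a node's partial state update merges into the existing state; `(_, v) => v` means "last write wins". A small Python sketch of that merge rule (conceptual only, not LangGraph internals):

```python
# Conceptual "last write wins" reducer, mirroring `(_, v) => v` above.
# This sketches the merge rule; it is not LangGraph's implementation.

def last_write_wins(current, update):
    return update

def merge(state, partial, reducers):
    # Apply each channel's reducer only to the keys a node returned
    return {
        key: reducers[key](value, partial[key]) if key in partial else value
        for key, value in state.items()
    }

reducers = {"query": last_write_wins, "answer": last_write_wins}
state = {"query": "What is 2 + 2?", "answer": ""}
state = merge(state, {"answer": "4"}, reducers)
print(state)
```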
Usage with LangChain
Python:

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4o")

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{input}"),
])

chain = prompt | llm

# Chain execution is automatically traced
result = chain.invoke({"input": "Hello!"})
```
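The `|` in `chain = prompt | llm` comes from LangChain's Runnable protocol: composing two runnables yields a sequence whose `invoke` feeds one step's output into the next. A stripped-down conceptual sketch of that composition (not LangChain's actual implementation):

```python
# Minimal sketch of pipe-style composition, as in `prompt | llm`.
# Not LangChain's Runnable classes; just the underlying idea.

class Step:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # `a | b` returns a Step that runs a, then b on a's output
        return Step(lambda value: other.invoke(self.invoke(value)))

prompt = Step(lambda d: f"System: be helpful\nUser: {d['input']}")
shout = Step(lambda text: text.upper())  # stand-in for an LLM call

chain = prompt | shout
print(chain.invoke({"input": "Hello!"}))
```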
TypeScript:

```typescript
import { ChatOpenAI } from '@langchain/openai';
import { ChatPromptTemplate } from '@langchain/core/prompts';

const llm = new ChatOpenAI({ model: 'gpt-4o' });

const prompt = ChatPromptTemplate.fromMessages([
  ['system', 'You are a helpful assistant.'],
  ['user', '{input}'],
]);

const chain = prompt.pipe(llm);

// Chain execution is automatically traced
const result = await chain.invoke({ input: 'Hello!' });
```
What Gets Captured
| Attribute | Description |
|---|---|
| Agent steps | Each agent iteration as a span |
| Tool calls | Each tool invocation with input/output |
| LLM calls | All LLM calls within the chain/graph |
| Graph structure | Parent-child relationships between steps |
| Token usage | Aggregated across all LLM calls |
| Cost | Total cost for the full chain/graph execution |
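The token-usage and cost rows in the table are sums over the individual LLM-call spans in a trace. Conceptually, the aggregation looks like the sketch below (hypothetical span records and illustrative per-token prices, not TraceRoot's internal format or real model pricing):

```python
# Hypothetical span records; TraceRoot's real span schema may differ.
spans = [
    {"kind": "llm", "prompt_tokens": 120, "completion_tokens": 30},
    {"kind": "tool", "name": "calculator_tool"},
    {"kind": "llm", "prompt_tokens": 160, "completion_tokens": 12},
]

# Aggregate token usage across all LLM-call spans
llm_spans = [s for s in spans if s["kind"] == "llm"]
prompt_tokens = sum(s["prompt_tokens"] for s in llm_spans)
completion_tokens = sum(s["completion_tokens"] for s in llm_spans)

# Illustrative per-token prices, not actual provider pricing
cost = prompt_tokens * 2.5e-06 + completion_tokens * 1e-05

print(prompt_tokens, completion_tokens, cost)
```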
Run the example
Clone the repo and run a complete agent end-to-end.
- Python: run the Python example
- TypeScript: run the TypeScript example